AI Inference

Transforming data, shaping the future

AI inference is the crucial step in which our trained models apply their acquired knowledge to make precise decisions or predictions based on new data. By training our AI models on large data sets, they gain the ability to recognize patterns and process information. In the inference phase, the models use this learned knowledge to generate relevant insights or solve specific problems. This allows our AI to respond to queries in real time and deliver informed decisions, which can then be put to practical use in a wide range of fields.
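
The split between a training phase (learning weights) and an inference phase (applying them to unseen data) can be illustrated with a deliberately tiny sketch: a logistic-regression classifier whose weights stand in for anything a real model would have learned. The weight values here are hypothetical placeholders, not output of an actual training run.

```python
import numpy as np

# Hypothetical weights and bias, standing in for parameters
# learned during a prior training phase.
weights = np.array([0.8, -0.4, 1.2])
bias = -0.1

def predict(features: np.ndarray) -> int:
    """Inference: apply the learned parameters to a new data point."""
    score = float(features @ weights + bias)
    probability = 1.0 / (1.0 + np.exp(-score))  # sigmoid
    return int(probability >= 0.5)

# A new sample the model has never seen before.
sample = np.array([1.0, 0.5, 0.2])
print(predict(sample))  # class decision for the new sample
```

Training determines `weights` and `bias` once, offline; inference is then just this cheap forward pass, repeated for every incoming query.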

MIC-711D-ON
Highlights
Server (actively cooled) in mini format with graphics chip for AI inference.
The development kit does not have a chassis cover.
Incl. 4 GB RAM
upgradeable up to 8 GB RAM (1 DIMM)
Incl. 128 GB M.2 NVMe
2x Nano SIM card holder
51 mm (H)
125 mm (W)
125 mm (D)
1 x Gbit/s LAN (RJ-45)

Price incl. Arm Cortex-A78AE (6 cores)

starting at 450 
MIC-711D-OX
Highlights
Server (actively cooled) in mini format with graphics chip for AI inference.
The development kit does not have a chassis cover.
Incl. 8 GB RAM
upgradeable up to 16 GB RAM (1 DIMM)
Incl. 128 GB M.2 NVMe
2x Nano SIM card holder
51 mm (H)
125 mm (W)
125 mm (D)
1 x Gbit/s LAN (RJ-45)

Price incl. Arm Cortex-A78AE (6 cores)

starting at 715 
MIC-711-ON
Highlights
Silent server in mini format with graphics chip for AI inference.
Incl. 4 GB RAM
upgradeable up to 8 GB RAM (1 DIMM)
Incl. 128 GB M.2 NVMe
2x Nano SIM card holder
46 mm (H)
130 mm (W)
130 mm (D)
1 x Gbit/s LAN (RJ-45)

Price incl. Arm Cortex-A78AE (6 cores)

starting at 685 
MIC-711-OX
Highlights
Silent server in mini format with graphics chip for AI inference.
Incl. 8 GB RAM
upgradeable up to 16 GB RAM (1 DIMM)
Incl. 128 GB M.2 NVMe
2x Nano SIM card holder
46 mm (H)
130 mm (W)
130 mm (D)
1 x Gbit/s LAN (RJ-45)

Price incl. Arm Cortex-A78AE (6 cores)

starting at 1,085


All prices are net and exclude statutory VAT. Offers are intended exclusively for entrepreneurs (§ 14 BGB), legal entities under public law, and special funds under public law.


AI Inference in use: case studies & success stories

Harvesting robots equipped with AI models and image processing can recognize ripe fruit and collect it with a robotic arm, resulting in an efficient and accurate fruit harvest.

Read the case study now


NVIDIA Metropolis e-book

This e-book on Metropolis gives you a comprehensive overview of the new generation of AI applications.

Read the NVIDIA Metropolis e-book now

Optimized hardware for AI inference

When processing image or voice data, a model must weight and evaluate a large number of complex connections. The computation can therefore take a long time and places high demands on the CPU(s), RAM and power supply. A suitable hardware configuration helps you avoid unnecessarily long computing times, and our online shop offers high-performance systems optimized for exactly this purpose.
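
Why the hardware matters becomes visible even in a toy benchmark. The sketch below times a single dense layer (a matrix multiplication plus ReLU) as a stand-in for one step of a real model; the layer size and batch size are illustrative assumptions, not figures for any specific product.

```python
import time
import numpy as np

# Toy dense layer standing in for one step of a real model.
# 4096 x 4096 float32 weights ~= 64 MB; sizes are illustrative only.
rng = np.random.default_rng(0)
weights = rng.standard_normal((4096, 4096)).astype(np.float32)

def infer(batch: np.ndarray) -> np.ndarray:
    """One inference step: matrix multiplication followed by ReLU."""
    return np.maximum(batch @ weights, 0.0)

batch = rng.standard_normal((64, 4096)).astype(np.float32)
infer(batch)  # warm-up run, excluded from timing

start = time.perf_counter()
for _ in range(10):
    infer(batch)
elapsed = (time.perf_counter() - start) / 10
print(f"mean latency per batch: {elapsed * 1000:.1f} ms")
```

Running the same sketch on different machines shows how strongly the per-batch latency depends on CPU, memory bandwidth and the installed BLAS backend, which is precisely the gap that purpose-built inference hardware closes.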

Efficiency through automation

An AI can only make qualified decisions that meet your business objectives after an extensive learning phase, in which it analyzes a large amount of relevant data that serves as the basis for its future decisions. Once sufficiently trained, the AI uses inference to make decisions in your company's interest completely autonomously – saving you time, effort and resources!

Would you like to learn more about AI inference at Thomas-Krenn?
Get in touch!