
YOUR HETZNER SERVER WITH GPU FOR EVERY AI PROJECT
Selected AI use-cases
- Generative AI
- Natural Language Processing
- Analytics and science
Generative AI
Generative AI produces creative content such as images, text, audio or even code. GPU servers provide the massive parallel computing power required by diffusion and transformer models. A typical example is hosting an AI image generator based on Stable Diffusion, which uses powerful GPUs to generate detailed, realistic images in a matter of seconds.
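To make this concrete, here is a minimal sketch of how such an image generator could run on a GPU server using the open-source Hugging Face diffusers library. The model ID and prompt are only examples, not a Hetzner-specific setup.

    # Minimal sketch: generating an image with Stable Diffusion on a GPU server.
    # Assumes diffusers and torch are installed (pip install diffusers torch).
    import torch
    from diffusers import StableDiffusionPipeline

    # "runwayml/stable-diffusion-v1-5" is an example model ID; any compatible checkpoint works.
    pipe = StableDiffusionPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
    )
    pipe = pipe.to("cuda")  # move the pipeline onto the GPU

    image = pipe("a photorealistic mountain lake at sunrise").images[0]
    image.save("lake.png")
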
Find the perfect GPU server for your AI work
GEX130 – Maximum power for AI training
During AI training, an artificial intelligence model such as DeepSeek is fed large data sets so that it learns to perform certain tasks. This is an iterative process in which the model is optimized by adjusting its parameters until it delivers the best results. The GEX130 offers the performance required for such complex processes with the NVIDIA RTX™ 6000 Ada Generation. 48 GB of graphics memory, 142 RT cores, 568 Tensor cores and 18,176 CUDA cores deliver the enormous speed and throughput required for working with massive data sets and complex calculations. This is particularly helpful for image and language processing, generative models and time series analyses.
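As an illustration of the iterative training process described above, the following sketch shows a generic PyTorch training loop. The toy model, data and hyperparameters are placeholders, not a recommended configuration.

    # Minimal sketch of an iterative training loop in PyTorch.
    import torch
    from torch import nn
    from torch.utils.data import DataLoader, TensorDataset

    device = "cuda" if torch.cuda.is_available() else "cpu"

    # Toy data and model standing in for a real dataset and architecture.
    data = TensorDataset(torch.randn(1024, 128), torch.randint(0, 10, (1024,)))
    loader = DataLoader(data, batch_size=64, shuffle=True)
    model = nn.Sequential(nn.Linear(128, 256), nn.ReLU(), nn.Linear(256, 10)).to(device)

    optimizer = torch.optim.AdamW(model.parameters(), lr=1e-3)
    loss_fn = nn.CrossEntropyLoss()

    for epoch in range(3):  # each pass adjusts the parameters to reduce the loss
        for inputs, labels in loader:
            inputs, labels = inputs.to(device), labels.to(device)
            optimizer.zero_grad()
            loss = loss_fn(model(inputs), labels)
            loss.backward()
            optimizer.step()
        print(f"epoch {epoch}: loss {loss.item():.4f}")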

GEX44 – Highly efficient AI inference
AI inference means using a trained AI model such as Llama to analyze new data and make predictions or decisions based on it. This step is essential for the practical application of AI systems, as learned knowledge is applied to new, unseen data. The GEX44 is optimized for such scenarios and delivers fast, accurate answers with minimal latency. By utilizing the 192 Tensor cores of the NVIDIA RTX™ 4000 SFF Ada Generation GPU, it accelerates demanding calculations and thus significantly increases efficiency. This matters for applications such as real-time image recognition, speech processing and complex data analysis. The RTX™ 4000 SFF Ada Generation is also particularly energy-efficient.
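For illustration, this is roughly what inference with an already trained model could look like using the Hugging Face transformers library on the GPU. The Llama model ID is an example, and such checkpoints may require access approval on Hugging Face.

    # Minimal sketch of AI inference: serving answers from an already trained model.
    import torch
    from transformers import pipeline

    generator = pipeline(
        "text-generation",
        model="meta-llama/Llama-3.1-8B-Instruct",  # example model ID
        torch_dtype=torch.float16,
        device_map="auto",  # place the model on the available GPU
    )

    # New, unseen input is passed to the trained model, which returns a prediction.
    result = generator("Summarize the benefits of GPU inference in one sentence.", max_new_tokens=64)
    print(result[0]["generated_text"])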

Maximum data protection for your AI projects
As a hosting provider headquartered in Germany, Hetzner treats data protection as a top priority. We offer our customers GDPR-compliant hosting and data sovereignty. Customer master data is not transferred to third countries. Our GPU servers run in our DIN ISO/IEC 27001-certified data centers in Germany and Finland.
Everything you need to kickstart your AI project
Get AI models such as DeepSeek or Llama running on our dedicated GPU servers, and tag us on Hugging Face for a shout-out for your favorite projects.
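As a rough getting-started sketch, the following lines verify that the GPU is visible and load a small DeepSeek distill checkpoint with the transformers library. The model ID is only an example; pick any model that fits the available VRAM.

    # Getting-started sketch for a fresh GPU server: check the GPU, then load a small model.
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    assert torch.cuda.is_available(), "No CUDA GPU detected - check the NVIDIA driver installation"
    print("GPU:", torch.cuda.get_device_name(0))

    model_id = "deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B"  # example; choose a size that fits your VRAM
    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16, device_map="auto")

    inputs = tokenizer("Hello from a Hetzner GPU server!", return_tensors="pt").to(model.device)
    outputs = model.generate(**inputs, max_new_tokens=32)
    print(tokenizer.decode(outputs[0], skip_special_tokens=True))
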
Frequently asked questions
- Are the GEX servers suitable for AI training?
Whether a server is suitable for AI training depends on the size (number of parameters) of the AI model used. The GEX130 has 48 GB of VRAM and therefore offers sufficient performance for training most popular AI models (a rough memory rule of thumb follows at the end of this FAQ).
- Do the GEX servers support CUDA technology?
Our GEX servers are equipped with NVIDIA RTX GPUs and therefore support CUDA technology.
- At which locations are the GEX servers available?
The GEX130 is available in our data centers in Nuremberg (NBG1), Falkenstein (FSN1) and Helsinki (HEL1). The GEX44 is available in Falkenstein (FSN1).
- Does Hetzner also offer GPU servers with NVIDIA H100?
Our GEX servers use NVIDIA RTX GPUs. We are always interested in our users' hardware requirements so that we can take them into account when developing new products. Please let us know your product requirements using the feedback form on this page.
- Can the GEX servers also be configured with multiple GPUs?
Our GEX servers each have one GPU and cannot be configured with multiple GPUs. We are always interested in our users' hardware requirements so that we can take them into account when developing new products. Please let us know your product requirements using the feedback form on this page.
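Regarding the training question above: a rough, commonly cited rule of thumb (an assumption, not an official Hetzner sizing guide) is about 2 bytes of VRAM per parameter for fp16 inference and roughly 16 bytes per parameter for full training with the Adam optimizer (weights, gradients and optimizer states, before activations). The small sketch below turns this into numbers:

    # Rough rule-of-thumb VRAM estimate (an assumption, not a Hetzner guideline):
    # ~2 bytes per parameter for fp16 inference, ~16 bytes per parameter for full Adam training.
    def vram_estimate_gb(params_billions: float, bytes_per_param: float) -> float:
        return params_billions * 1e9 * bytes_per_param / 1024**3

    for name, params in [("7B model", 7), ("13B model", 13), ("70B model", 70)]:
        print(f"{name}: ~{vram_estimate_gb(params, 2):.0f} GB for fp16 inference, "
              f"~{vram_estimate_gb(params, 16):.0f} GB for full fp16/Adam training")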