How Do You Choose Server Hardware to Run Artificial Intelligence (AI) Projects (2024)?
Let's discuss what Artificial Intelligence (AI) is and how to choose the right server hardware for AI projects.


We have reached a point where we hear the term AI on every corner. High-tech companies are shifting their business direction, more or less, toward AI. We often hear that it's our future and that 90s sci-fi movie scenarios are just around the corner. We have moved from playing chess against automated machines to digital assistants, chatbots, smart home devices, self-driving cars, and more.
But to run all of this, we need high-performance machines, servers, and cloud services. In this article, we will discuss the basic server hardware requirements.
What is Artificial Intelligence (AI)?
AI is technology that lets machines and computers analyze large amounts of data and simulate the human mind in finding solutions and solving tasks. In other words, it is digital intelligence modeled on the human brain. The turning point and best-known AI example is ChatGPT, the chatbot released by OpenAI in 2022. It became famous as the first chatbot able to hold a human-like conversation and, more importantly, handle difficult calculation, programming, and writing tasks for you.
Possible hardware choices
AI projects vary in computing intensity. For example, a chess bot requires far less computing power than ChatGPT, the product of a multi-billion-dollar company. Choosing hardware is a complex task that requires many stress tests, but we can walk through the four main components: the processor (CPU), memory (RAM), storage, and the video card (GPU).
CPU: A high core count and a high processor base frequency are crucial for AI projects. Even a high-powered chess bot might require at least 26 physical cores / 52 threads.
The processor base frequency is usually recommended to be at least 3.0GHz; however, that is rarely available in multi-CPU servers, so 2.5GHz should still be considered a good choice.
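As a quick sanity check on a candidate server, you can verify the core count and clock speed from Python. This is a minimal sketch, assuming the third-party psutil package is installed, and it uses the figures above (26 physical cores, 2.5GHz) purely as illustrative thresholds:

    # CPU sanity check; assumes `pip install psutil`.
    import psutil

    MIN_PHYSICAL_CORES = 26    # illustrative threshold taken from this article
    MIN_FREQ_MHZ = 2500        # the 2.5GHz compromise suggested above

    physical = psutil.cpu_count(logical=False)
    logical = psutil.cpu_count(logical=True)
    freq = psutil.cpu_freq()   # may return None on some virtualized hosts

    print(f"Physical cores: {physical}, logical threads: {logical}")
    if freq is not None:
        print(f"CPU frequency (current/min/max): {freq.current:.0f}/{freq.min:.0f}/{freq.max:.0f} MHz")
    if physical is not None and physical < MIN_PHYSICAL_CORES:
        print("Warning: fewer physical cores than the suggested minimum.")
    if freq is not None and freq.max and freq.max < MIN_FREQ_MHZ:
        print("Warning: CPU frequency is below the suggested 2.5GHz.")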
RAM: I still remember 2014, when a server with 1TB (1000GB) of memory was considered a beast. Now you can easily get 1TB from almost any service provider, and that is exactly what you need for heavy AI tasks.
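The same library can report the installed memory. Another short sketch, again assuming psutil, with 1TB used as the illustrative target mentioned above:

    # RAM check; assumes `pip install psutil`.
    import psutil

    TARGET_RAM_GB = 1000   # ~1TB, the figure suggested above for heavy AI tasks

    mem = psutil.virtual_memory()
    total_gb = mem.total / 1e9
    print(f"Total RAM: {total_gb:.0f} GB, available: {mem.available / 1e9:.0f} GB")
    if total_gb < TARGET_RAM_GB:
        print("Note: below the ~1TB suggested for heavy AI workloads.")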
Storage: Don't let old SATA SSDs drag your server performance down. NVMe drives are a must-have for any heavy workload; disk speed is key, and NVMe read and write speeds are roughly four times faster than those of regular SATA SSDs.
A four-disk RAID 10 array adds roughly a 4x read and 2x write speed gain on top of that.
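Those RAID 10 numbers follow from the layout itself: reads can be served by every disk in the array, while each write must land on a mirrored pair. Here is a rough back-of-the-envelope estimate that assumes ideal scaling and an illustrative per-disk throughput; real arrays will come in below these theoretical figures:

    # Theoretical RAID 10 throughput (ideal scaling, no controller overhead).
    def raid10_estimate(disks: int, per_disk_mb_s: float = 3000.0):
        """Reads scale with all disks; writes scale with the mirrored pairs (disks / 2)."""
        if disks < 4 or disks % 2:
            raise ValueError("RAID 10 needs an even number of disks, at least 4")
        return disks * per_disk_mb_s, (disks // 2) * per_disk_mb_s

    # Example: four NVMe drives rated at ~3000 MB/s each (illustrative figure).
    read_mb_s, write_mb_s = raid10_estimate(4)
    print(f"Theoretical read: {read_mb_s:.0f} MB/s (4x), write: {write_mb_s:.0f} MB/s (2x)")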
GPU: Video cards are the main brains of AI. That's why NVIDIA's stock has skyrocketed in recent years.
GPUs have thousands of smaller cores that can perform many calculations in parallel, making them ideal for AI tasks that involve processing large amounts of data at once. Many modern GPUs are also designed with AI in mind and include features like Tensor Cores that boost performance for specific types of AI calculations.
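If you already have access to a machine, a quick way to confirm that the GPU is visible and new enough for Tensor Core workloads is to query it from PyTorch. A minimal sketch, assuming a CUDA-enabled PyTorch build is installed; Tensor Cores are present on NVIDIA GPUs with compute capability 7.0 and newer:

    # GPU inventory; assumes a CUDA-enabled PyTorch build (`pip install torch`).
    import torch

    if not torch.cuda.is_available():
        print("No CUDA-capable GPU detected.")
    else:
        for i in range(torch.cuda.device_count()):
            name = torch.cuda.get_device_name(i)
            major, minor = torch.cuda.get_device_capability(i)
            vram_gb = torch.cuda.get_device_properties(i).total_memory / 1e9
            has_tensor_cores = major >= 7   # Volta (7.x) and newer include Tensor Cores
            print(f"GPU {i}: {name}, {vram_gb:.0f} GB VRAM, "
                  f"compute capability {major}.{minor}, Tensor Cores: {has_tensor_cores}")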
Which GPU should you choose for an AI project?
NVIDIA released the A100 80GB model in 2021, and it is still a highly sought-after card. Full technical details can be found on the NVIDIA developer blog: https://developer.nvidia.com/blog/nvidia-ampere-architecture-in-depth/
The price of the NVIDIA A100 80GB GPU still reaches 15,000 – 18,000 USD, and due to the extremely high demand and the booming AI business, it can be hard to find in stock. Because of the high cost and long delivery times, many developers instead choose bare-metal cloud services that offer access to such servers with hourly billing.
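Whether renting makes sense depends on how many GPU-hours you actually need. Here is a simple break-even sketch that uses the purchase price quoted above together with a purely hypothetical hourly rate; actual bare-metal and cloud prices vary widely:

    # Buy-vs-rent break-even estimate; the hourly rate is a hypothetical placeholder.
    PURCHASE_PRICE_USD = 15000   # lower end of the A100 80GB price range quoted above
    HOURLY_RATE_USD = 2.50       # illustrative rental rate, not a real quote

    break_even_hours = PURCHASE_PRICE_USD / HOURLY_RATE_USD
    print(f"Break-even after {break_even_hours:,.0f} GPU-hours "
          f"(about {break_even_hours / 24:.0f} days of continuous use)")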
More budget-friendly options:
Anything from the NVIDIA RTX 30 or 40 series is still a good entry point. Here is a list of the NVIDIA RTX 30 and 40 series models:
NVIDIA RTX 30 Series:
RTX 3050
RTX 3050 Ti
RTX 3060
RTX 3060 Ti
RTX 3070
RTX 3070 Ti
RTX 3080
RTX 3080 Ti
RTX 3090
NVIDIA RTX 40 Series:
RTX 4060
RTX 4060 Ti
RTX 4070
RTX 4070 Ti
RTX 4080
RTX 4080 SUPER
RTX 4090
Compared with the A100, the RTX 4090 can be purchased for only about 2,500 USD, which makes it an excellent choice for an AI project in terms of price-to-performance ratio.
NVIDIA A100 80GB vs. RTX 4090 24GB:
NVIDIA GeForce RTX 4090's Advantages
Released in 2022
79% higher boost clock (2520MHz vs 1410MHz)
9472 additional CUDA cores (16384 vs 6912)
NVIDIA A100 PCIe 80 GB's Advantages
More VRAM (80GB vs 24GB)
Larger VRAM bandwidth (1,935 GB/s vs 1,008 GB/s)
Lower TDP (300W vs 450W)
A detailed technical comparison can be found in the TopCPU review here: https://www.topcpu.net/en/gpu-c/geforce-rtx-4090-vs-a100-pcie-80-gb
On paper, the A100 is the better choice in most technical respects. The dilemma appears when we compare price-to-performance ratios: the A100 is a commercial-grade model costing around 15,000 USD, while the RTX 4090 costs only about 2,500 USD. The math is simple; you can get six new, well-performing video cards for the price of a single A100.
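To make that comparison concrete, here is the arithmetic using the figures quoted in this article. Keep in mind that splitting a workload across six consumer cards brings its own engineering overhead, so the raw numbers overstate the advantage somewhat:

    # Price and VRAM arithmetic based on the figures quoted in this article.
    A100_PRICE_USD, A100_VRAM_GB = 15000, 80
    RTX4090_PRICE_USD, RTX4090_VRAM_GB = 2500, 24

    cards_for_same_budget = A100_PRICE_USD // RTX4090_PRICE_USD
    total_vram_gb = cards_for_same_budget * RTX4090_VRAM_GB

    print(f"For the price of one A100 you can buy {cards_for_same_budget} x RTX 4090")
    print(f"Combined VRAM: {total_vram_gb} GB vs {A100_VRAM_GB} GB on a single A100")
    print(f"Price per GB of VRAM: A100 {A100_PRICE_USD / A100_VRAM_GB:.0f} USD/GB, "
          f"RTX 4090 {RTX4090_PRICE_USD / RTX4090_VRAM_GB:.0f} USD/GB")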