What happens when you let artificial intelligence design its own workstation? I decided to find out by challenging ChatGPT to build a $1,200 PC specifically for running Proxmox and various AI workloads. The results were... surprisingly enlightening.
The Challenge
The premise was simple: Give ChatGPT a $1,200 budget and let it select every component for a PC designed to run virtual machines and AI models. No human intervention, no guiding its choices—just pure AI decision-making. As someone who's built countless systems, I was particularly curious to see if it could navigate the complex web of component compatibility and performance requirements.
Breaking Down ChatGPT's Component Choices
The Foundation: Case and Power Supply
ChatGPT started with the LIAN LI LANCOOL 216, demonstrating a solid understanding of thermal management. The case's superior airflow design and pre-installed fans make it ideal for handling the heat output from intensive AI workloads. This choice showed that the AI grasped the importance of cooling in a system that would be running at high utilization.
For power, it selected the CORSAIR RM750x. The 750-watt, 80 Plus Gold-rated power supply provides ample headroom for the current components while leaving room for a future GPU upgrade. The Gold efficiency rating means lower electricity bills and less waste heat—a thoughtful consideration that honestly impressed me.
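That headroom claim is easy to sanity-check with a back-of-the-envelope power budget. The figures below are ballpark peak draws I'm assuming for each part (the 3060's 170 W board power is NVIDIA's spec; the rest are rough estimates), not measurements from this build:

```python
# Rough power budget using approximate peak draws. These are assumed
# ballpark figures, not measurements from this exact build.
PSU_WATTS = 750

estimated_draw_watts = {
    "Ryzen 5 7600": 90,        # 65 W TDP plus boost overhead
    "RTX 3060": 170,           # NVIDIA's rated total board power
    "Motherboard + RAM": 60,
    "NVMe SSD": 10,
    "Fans + peripherals": 20,
}

total_watts = sum(estimated_draw_watts.values())
headroom_watts = PSU_WATTS - total_watts
print(f"Estimated load: {total_watts} W, headroom: {headroom_watts} W")
```

Even if a beefier 300 W-class GPU gets swapped in later, the 750 W unit stays comfortably inside its envelope.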
Core Components: Motherboard and CPU
The motherboard selection—an ASUS TUF GAMING B650-PLUS—revealed ChatGPT's ability to balance features with future-proofing. The board supports DDR5 memory and PCIe 4.0, providing a solid foundation for upgrades. More importantly, it ensures full compatibility with the chosen AMD platform.
Speaking of processors, ChatGPT opted for the AMD Ryzen 5 7600. This six-core CPU offers excellent multi-threaded performance for running virtual machines while maintaining strong single-core speeds for general tasks. While higher-core-count chips exist, the choice demonstrates a good understanding of the performance-per-dollar sweet spot for virtualization workloads.
Memory and Storage: Planning for AI Workloads
The memory configuration showed real insight into AI requirements. ChatGPT specified 32GB of DDR5-6000 RAM, acknowledging that virtual machines and AI models are memory-hungry applications. Interestingly, it noted that more RAM would be ideal but had to balance this against budget constraints—showing an understanding of real-world trade-offs.
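To make that trade-off concrete, here's a toy sketch of how 32GB might be carved up among Proxmox guests. The VM names and sizes are my own hypothetical allocations, not anything ChatGPT specified:

```python
# Hypothetical carve-up of the 32 GB across Proxmox guests; the VM
# names and sizes are illustrative, not part of ChatGPT's spec.
TOTAL_GB = 32
HOST_RESERVE_GB = 4  # leave RAM for the Proxmox host itself

vm_memory_gb = {
    "ai-models": 16,   # VM running local AI inference
    "services": 8,     # general-purpose services VM
    "scratch": 4,      # throwaway VM for experiments
}

allocated_gb = sum(vm_memory_gb.values())
spare_gb = TOTAL_GB - HOST_RESERVE_GB - allocated_gb
assert spare_gb >= 0, "RAM over-committed"
print(f"Allocated {allocated_gb} GB across VMs, {spare_gb} GB spare")
```

Even this modest lineup consumes the full 32GB, which is exactly why ChatGPT flagged that more RAM would be ideal.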
For storage, the WD_BLACK 1TB NVMe SSD was selected. The choice prioritizes fast sequential reads, which matter most when loading multi-gigabyte AI models into memory or streaming training data. ChatGPT even suggested that users might want to consider adding more storage in the future, demonstrating foresight about typical AI workflow requirements.
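A quick back-of-the-envelope shows why fast reads matter here. Assuming a ~5 GB/s sequential read rate (typical for a Gen4 NVMe drive; actual throughput depends on the specific WD_BLACK model), loading a large model file looks like this:

```python
# Back-of-the-envelope load time: file size divided by sequential read
# speed. 5.0 GB/s is an assumed Gen4 NVMe figure; 0.5 GB/s ~ SATA SSD.
def load_seconds(model_gb: float, read_gb_per_s: float = 5.0) -> float:
    """Seconds to read a model file of `model_gb` gigabytes."""
    return model_gb / read_gb_per_s

# A ~14 GB file (roughly a 7B-parameter model at fp16)
print(f"NVMe: {load_seconds(14):.1f} s vs SATA: {load_seconds(14, 0.5):.1f} s")
```

A few seconds versus half a minute per model swap adds up fast when you're iterating.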
The GPU: A Strategic Choice
Perhaps the most intriguing selection was the RTX 3060 graphics card. While not the most powerful GPU available, ChatGPT's reasoning was sound: AI workloads, particularly machine learning tasks, benefit more from VRAM capacity than raw processing power at this price point. The 3060's 12GB of VRAM makes it an excellent choice for running smaller AI models locally—proving that sometimes more expensive doesn't mean better suited to the task.
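The VRAM argument is easy to quantify. A common rule of thumb (my simplification, not ChatGPT's reasoning verbatim) is parameters times bytes per parameter, plus some overhead for activations and the KV cache:

```python
# Rule-of-thumb VRAM to host model weights: params x bytes/param, with
# a ~20% overhead factor for activations/KV cache (my own assumption).
def vram_needed_gb(params_billion: float, bytes_per_param: float,
                   overhead: float = 1.2) -> float:
    return params_billion * bytes_per_param * overhead

GPU_VRAM_GB = 12  # RTX 3060

for name, params, bpp in [("7B @ fp16", 7, 2.0),
                          ("7B @ int8", 7, 1.0),
                          ("13B @ int4", 13, 0.5)]:
    need = vram_needed_gb(params, bpp)
    verdict = "fits" if need <= GPU_VRAM_GB else "needs offloading"
    print(f"{name}: ~{need:.1f} GB -> {verdict}")
```

By this yardstick, the 12GB card comfortably hosts quantized 7B–13B models, which lines up with ChatGPT's VRAM-over-compute reasoning.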
The Verdict (So Far)
Looking at the component selection holistically, ChatGPT demonstrated a surprisingly nuanced understanding of both PC building and AI workload requirements. It balanced thermal considerations, future upgradeability, and workload-specific performance needs while staying within budget.
The build shows thoughtful consideration of:
- Thermal management for sustained AI workloads
- Memory and storage requirements for virtualization
- GPU VRAM capacity for AI model hosting
- Platform upgradeability for future expansion
However, the real test will come when we actually put this system through its paces with Proxmox and various AI models. Will ChatGPT's choices prove as practical in real-world use as they appear on paper? Stay tuned for our follow-up article where we'll stress test this AI-designed system with actual AI workloads.