
Building an AI Cluster with Five Mac Minis? This is Insane! 🤯

[Image: Mac Mini cluster]

Have you ever wondered if you could build an AI model training cluster with compact Mac Minis that's more cost-effective and energy-efficient than high-end GPUs? Recently, a tech geek's crazy experiment went viral on YouTube—he actually built a "mini supercomputer" using five M4 chip Mac Minis! Today we'll dive into the details behind this genius move and see how you can replicate it at home (or at least pretend to).


🍎 Why Choose Mac Mini? Apple Silicon's "Cheat Code"

1. Unified Memory: The "Shared Power Bank" for CPU and GPU

Traditional GPUs (like the NVIDIA RTX 4090) max out at 24GB of VRAM, while a top-spec Mac Mini can pack 64GB of unified memory—CPU and GPU share the same memory pool, eliminating the need to shuffle data back and forth. It's like knocking down the wall between the kitchen and dining room: the chef (GPU) and waiter (CPU) no longer need to run around, doubling the serving speed!
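
To see why capacity matters, here's a back-of-the-envelope sketch of whether a model's weights fit in memory. The fp16 assumption (2 bytes per parameter) and the 8 GB overhead figure are our own illustrative guesses, not measured values:

```python
def model_size_gb(params_billions, bytes_per_param=2):
    """Approximate weight footprint: parameters × bytes per parameter (fp16 = 2 bytes)."""
    return params_billions * 1e9 * bytes_per_param / 1e9

def fits(params_billions, memory_gb, overhead_gb=8):
    """Leave some headroom for the OS, KV cache, and activations (a rough guess)."""
    return model_size_gb(params_billions) <= memory_gb - overhead_gb

print(model_size_gb(7))    # 14.0 GB of fp16 weights
print(model_size_gb(70))   # 140.0 GB — far beyond a single 64 GB machine
print(fits(7, 64))         # True: a 7B model sits comfortably in 64 GB
print(fits(70, 64))        # False: hence the urge to cluster
```

Quantizing to 4-bit (0.5 bytes per parameter) shrinks these numbers by 4×, which is why small quantized models are the sweet spot for this hardware.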

2. MLX Framework: Apple's "Secret Weapon"

Apple launched MLX in 2023, a machine learning framework optimized specifically for its chips, claiming to squeeze every drop of performance from M-series chips. In tests, MLX runs Llama 3 models with 30% faster generation speed than PyTorch, making Mac Mini competitive against high-end GPUs!

3. Power Efficiency Champion: Five Machines Using Only 28W?

The author's hands-on measurements found that five Mac Minis consume only 28W at idle and just over 200W under full load. In comparison, a single RTX 4090 GPU draws 450W at full load—that electricity cost difference could buy you a bubble tea!
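
The bubble-tea claim is easy to sanity-check. A rough sketch, assuming a hypothetical $0.15/kWh electricity rate and sustained full load (plug in your own tariff):

```python
def kwh_per_day(watts, hours=24):
    """Energy drawn per day at a constant wattage."""
    return watts * hours / 1000

def cost_per_month(watts, price_per_kwh=0.15, days=30):
    """Monthly electricity cost; price_per_kwh is an assumed example rate."""
    return kwh_per_day(watts) * days * price_per_kwh

cluster = cost_per_month(200)   # five Mac Minis near full load
gpu = cost_per_month(450)       # one RTX 4090 at full load
print(round(gpu - cluster, 2))  # monthly savings in dollars — bubble tea money
```

At full load the gap is modest in absolute terms; the idle-power gap (28W for five machines) is where the efficiency story really shines.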


🔧 Step-by-Step Cluster Setup: From "Building Blocks" to "Connecting Pipes"

Step 1: Hardware Shopping List

  • Mac Mini × N units: Recommend M4 Pro chip + 64GB memory top spec (tycoons can choose M4 Ultra).
  • Thunderbolt 5 cables × several: Don't cheap out on knockoff cables, or you'll drop back to 2G speeds.
  • Thunderbolt hub: Each Mac Mini has only 3 Thunderbolt ports, so you'll need one as a "connector" to link more than 3 units.

Step 2: Thunderbolt Bridged Network

  • Manual IP assignment: Set each machine's IP to 192.168.10.10, 192.168.10.20... (perfectionist's dream).
  • Enable "Jumbo Frames": Check Jumbo Packet in Thunderbolt bridge settings, letting data packets move like moving trucks—carrying more cargo at once, reducing traffic jams.
  • Say No to Wi-Fi: Real-world tests show Thunderbolt direct connection is 50% faster than wireless! After all, "wired connection never fails, wireless latency makes you fail."
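
The jumbo-frame tip is just arithmetic: bigger frames mean fewer trips per transfer. A quick sketch with the standard 1500-byte MTU vs a typical 9000-byte jumbo MTU (the 2 GiB shard size is made up for illustration):

```python
STANDARD_MTU = 1500   # default Ethernet payload size in bytes
JUMBO_MTU = 9000      # typical "jumbo frame" size on a Thunderbolt bridge

def packets_needed(transfer_bytes, mtu):
    """Ceiling division: how many frames it takes to move this much data."""
    return -(-transfer_bytes // mtu)

shard = 2 * 1024**3   # a hypothetical 2 GiB model shard
print(packets_needed(shard, STANDARD_MTU))  # ~1.43 million frames
print(packets_needed(shard, JUMBO_MTU))     # ~0.24 million frames — roughly 6× fewer
```

Each frame carries fixed per-packet overhead (headers, interrupts), so 6× fewer frames translates directly into less CPU time spent on the network stack.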

Step 3: Enter the Magic Tool EXO

  • Distributed Computing "Idiot-Proof Package": The open-source tool EXO strongly recommended by the author automatically splits models into fragments and distributes them across different machines—no coding required.
  • Watch the Version Number: This tool updates more frequently than iPhone OS; tutorial videos might be outdated as soon as they're published (author's words: "Last month's video is already obsolete!").
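
EXO's splitting step can be pictured as handing each host a contiguous range of the model's layers, sized by how much memory it has. The toy partitioner below is our own illustration of the idea, not EXO's actual algorithm:

```python
def partition_layers(num_layers, memory_gb_per_host):
    """Assign contiguous layer ranges to hosts in proportion to their memory.

    A toy sketch of what a tool like EXO automates; the real partitioning
    logic is considerably more sophisticated.
    """
    total = sum(memory_gb_per_host)
    shards, start = [], 0
    for i, mem in enumerate(memory_gb_per_host):
        # The last host takes whatever remains, to avoid rounding gaps
        if i == len(memory_gb_per_host) - 1:
            count = num_layers - start
        else:
            count = round(num_layers * mem / total)
        shards.append((start, start + count))
        start += count
    return shards

# An 80-layer model (roughly Llama-3-70B's depth) over five 64 GB Mac Minis:
print(partition_layers(80, [64, 64, 64, 64, 64]))
# [(0, 16), (16, 32), (32, 48), (48, 64), (64, 80)]
```

Mixed fleets work too: a 32 GB machine simply gets twice the layers of a 16 GB one, which is also why one slow or memory-starved node drags the whole pipeline down.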

⚡ Reality Check: Ideal vs. Reality

Fail #1: Adding Machines Makes It Slower?

When the author connected two base-model M4s (16GB memory) through a hub, generation speed plummeted from 70 token/s (single machine) to 45 token/s! The culprit? The hub became the bottleneck. Solution? Direct Thunderbolt connection, and speed instantly shot up to 95 token/s—indeed, "middlemen" can't be trusted!

Fail #2: Is 32GB of Memory an "IQ Tax"?

Running a 7B model on a 32GB M4 performed the same as the 16GB base model! Turns out memory bandwidth is the bottleneck, not capacity. It's like giving a sports car a swimming-pool-sized gas tank, but the engine is still a 1.0L three-cylinder—pointless!
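
The bandwidth bottleneck follows from how generation works: producing each token streams essentially all the weights through memory once, so memory bandwidth, not capacity, caps throughput. A rough upper-bound sketch—the 120 GB/s figure is an assumption for a base M4 (check Apple's specs for your chip), and fp16 weights are assumed:

```python
def max_tokens_per_sec(params_billions, bandwidth_gb_s, bytes_per_param=2):
    """Upper bound on decode speed for a memory-bound model:
    every token reads all weights once, so rate ≈ bandwidth / model size."""
    model_gb = params_billions * bytes_per_param
    return bandwidth_gb_s / model_gb

# Same chip, same bandwidth — so 16 GB and 32 GB models hit the same ceiling:
print(round(max_tokens_per_sec(7, 120), 1))  # ~8.6 tokens/s for a 7B fp16 model
```

Extra RAM only helps once the model (or its KV cache) would otherwise not fit at all; below that threshold, you're paying for capacity the engine can't use.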

Fail #3: Five Machines Worse Than One Top Spec?

When the author summoned five Mac Minis to tackle a 70B large model, generation speed was only 4.9 token/s—slow enough to brew a cup of coffee. Meanwhile, a single MacBook Pro with 128GB memory easily achieved 100+ token/s. Conclusion: "Many hands make light work" might be a false proposition in the AI world, unless your model truly needs to be split up into Lego bricks.


🤔 So... What's This Actually Good For?

Suitable For:

  • Hardware Geeks: Just want to see five Mac Minis stacked together glowing and heating up.
  • Environmental Warriors: So energy-efficient even Musk would approve (though he'd probably just buy A100s).
  • Small Model Enthusiasts: Run models under 10B, experience the "ritual" of distributed computing.

Don't Bother If:

  • Large Model Players: Want to run Llama 3-400B? Better stick with H100.
  • Heat-Sensitive: Stack five machines together, and the bottom one hits 40°C—could fry an egg in summer.
  • Lazy: Tuning parameters is more troublesome than dating; even EXO's "idiot-proof" requires hours of tinkering.

🍻 Ultimate Soul-Searching Question: Why Not Just Buy a Top-Spec Mac?

The author's heartfelt conclusion: "Building this cluster is purely performance art! For practical use, better to buy an M4 Max + 128GB memory MacBook Pro—it crushes five base-model Mac Minis in performance, without worrying about Thunderbolt cables tangling." So... unless you're bored (or have money to burn), just treat this article as science fiction. After all, the charm of technology sometimes lies in—knowing it's unnecessary, but wanting to try anyway! 🚀


Easter Egg: At the video's end, the author quietly pulls out a top-spec M4 Max MacBook Pro, instantly reducing the five Mac Mini cluster to a backdrop... (a true reality check)