The 100 Most Popular Hardware Setups for AI Builders on Hugging Face

Community Article Published May 6, 2026

Just dropped a fun new dataset: the 100 most popular hardware setups for AI builders on Hugging Face, based on 297,135 users who voluntarily filled in the local hardware section of their HF profile: https://huggingface.co/datasets/clem/100_most_popular_hardware_setups_on_HF

A note on methodology

H/t Jordan Nanos at SemiAnalysis who rightly pointed out that the cleanest way to read this is to scope each row into exactly one of four mutually exclusive buckets: Discrete GPU, SoC/APU, CPU-only, or CPU+GPU combo. That avoids comparing Apple SoCs to standalone GPU SKUs, and avoids double-counting vendors across combos.
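The scoping rule above can be sketched as a tiny classifier. The keyword lists below are illustrative assumptions that handle a few common label patterns; they are not the exact rules used to build the dataset:

```python
# Sketch of the four mutually exclusive buckets described above.
# The keyword lists are illustrative assumptions, not the dataset's actual rules.
def classify_setup(label: str) -> str:
    """Map a self-reported hardware label to exactly one bucket."""
    text = label.lower()
    has_gpu = any(k in text for k in ("rtx", "gtx", "radeon rx", "arc a"))
    has_cpu = any(k in text for k in ("ryzen", "core i", "xeon", "epyc"))
    is_soc = any(k in text for k in ("apple m", "snapdragon", "ryzen ai max"))
    if is_soc:                      # SoCs checked first: "Ryzen AI Max" also matches "ryzen"
        return "SoC/APU"
    if has_gpu and has_cpu:
        return "CPU+GPU combo"
    if has_gpu:
        return "Discrete GPU"
    return "CPU-only"

print(classify_setup("Apple M3 Pro"))           # SoC/APU
print(classify_setup("GeForce RTX 3060"))       # Discrete GPU
print(classify_setup("AMD Ryzen 9 + RTX 5090")) # CPU+GPU combo
```

Checking SoC labels before the CPU+GPU case is what keeps rows like "Ryzen AI Max+ 395" out of the CPU buckets.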

The top 100 setups account for 140,141 user reports, which break down as:

  • 60,120 Discrete GPU users (43%)
  • 50,077 SoC/APU users (36%)
  • 17,841 CPU-only (13%)
  • 12,103 CPU+GPU combos (8.6%)
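As a sanity check, the bucket shares and the top-100 coverage figure quoted in this post all fall out of the raw counts (numbers taken from the post itself):

```python
# Recomputing the bucket shares from the reported counts.
buckets = {
    "Discrete GPU": 60_120,
    "SoC/APU": 50_077,
    "CPU-only": 17_841,
    "CPU+GPU combo": 12_103,
}
total_top100 = sum(buckets.values())
print(total_top100)  # 140141

for name, count in buckets.items():
    print(f"{name}: {count / total_top100:.1%}")
# Discrete GPU: 42.9%, SoC/APU: 35.7%, CPU-only: 12.7%, CPU+GPU combo: 8.6%

# Share of all 297,135 reporters covered by the top 100 setups
print(f"{total_top100 / 297_135:.1%}")  # 47.2%
```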

The long tail is enormous

The top 100 setups only cover 47% of all 297,135 reporters. More than half of HF builders are running something that didn't even crack the top 100. Hardware fragmentation in the local-AI world is real.

Each vendor owns its own category

NVIDIA dominates discrete GPUs at 97.3%, Apple dominates SoCs/APUs at 95.6%, Intel dominates pure CPU rows at 82.5%. The naive "Apple vs NVIDIA" comparison is misleading because they actually compete in different buckets.

The CPU world has a striking inversion

Among CPU-only users, Intel leads 82.5% to 17.5%. But among users who explicitly build a CPU+GPU rig, AMD Ryzen leads 65% to 35%. The enthusiast / DIY local-AI builder crowd has clearly moved to Ryzen, even though Intel still has the bigger broad installed base.

VRAM beats raw power

The single most popular discrete GPU isn't the 4090 or the 5090; it's the RTX 3060, at 4,737 users. Specifically the 12GB version, which has more VRAM than the 3060 Ti, 3070, 4060, and 4060 Ti, and even more than the original 10GB RTX 3080. The 12GB RTX 3060 has roughly 4x the users of the 8GB RTX 3060 Ti, even though they share a name. AI builders care about memory size, not benchmark scores.

AI builders skew hard toward Pro and Max chips

For the M3 family, only 22% of users are on the base M3. The rest are on M3 Pro, M3 Max, or M3 Ultra. The M3 Pro alone (3,141 users) is significantly more popular than the M3 base (1,968). Same pattern for M1: more people run the M1 Pro (4,815) than the base M1 (4,499). Higher unified-memory tiers matter way more for local AI than they do for general computing.

The 10-series refuses to die

GTX 1660, 1650 Mobile, 1070 Ti, 1060, 1080 Ti, and 1050 Ti (six- to nine-year-old GTX-branded cards from the Pascal and early Turing eras) collectively account for roughly 5,000 users in the top 100. Roughly 1 in 12 discrete-GPU users on HF is still running pre-RTX-class silicon.

The M4 family is already bigger than the M1 family

M4 totals 16,639 users, 34% more than M1 (12,444). The M5, which just shipped, is already at 2,907. The M2 generation (6,917) sits oddly low, with fewer users than the older M3 (8,967), suggesting M2 had a short adoption window before users jumped to M3.

The 50-series is the fastest gen ramp in the data

RTX 50-series already has 13 SKUs in the top 100, totaling 15,341 users: faster gen-on-gen adoption than the 30 or 40 series saw at the same age.

Datacenter / pro silicon is real but smaller than you'd expect

H100, A100, H200, V100, T4, L4, L40S, RTX 6000 Ada, RTX PRO 6000 WS, A6000, and GB10 together total 10,792 users (~7.7% of top-100 reports). This is almost certainly under-counted, because researchers don't usually list shared-cluster GPUs on their personal profiles.

New AI-specific silicon is already showing up

NVIDIA's GB10 (DGX Spark) sits at #36 with 1,241 users. AMD's Ryzen AI Max+ 395 (Strix Halo) is at #49 with 962. Both are recent, and both are already meaningful in the rankings.

The most popular enthusiast combos

  • Ryzen 9 Zen 5 + RTX 5090: 1,504 users
  • Intel i9 13th-gen + RTX 4090: 1,424 users

Caveats worth keeping in mind

The data is self-reported and opt-in (biased toward HF-engaged local-AI builders); users typically list one machine even if they own several; cloud and shared-cluster work is almost certainly under-represented; and label granularity isn't uniform across vendors.


Dataset (Apache 2.0): huggingface.co/datasets/clem/100_most_popular_hardware_setups_on_HF

Haven't added your hardware yet? Takes 30 seconds: huggingface.co/settings/local-apps

Let's go local AI! 🤗
