AI Model Stack Builder
Calculate total VRAM for your multi-model AI setup.
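The per-model figures in the browser below are consistent with a weights-only estimate of 2 bytes per parameter (FP16/BF16), so a 14B model is listed at 28.0 GB. A minimal sketch of that estimate, assuming the same 2-bytes-per-parameter rule and ignoring KV cache, activations, and runtime overhead:

```python
def estimate_vram_gb(params_billions: float, bytes_per_param: float = 2.0) -> float:
    """Weights-only VRAM estimate in GB.

    Assumes FP16/BF16 weights at 2 bytes per parameter, which matches the
    figures shown in the model browser (e.g. 14B -> 28.0 GB). KV cache,
    activations, and runtime overhead are not included.
    """
    return params_billions * bytes_per_param


# Spot checks against the browser figures:
print(estimate_vram_gb(14))    # 28.0  (DeepSeek R1 14B)
print(estimate_vram_gb(46.7))  # 93.4  (Mixtral 8x7B)
```

Quantized weights (for example 4-bit) would plug a smaller bytes_per_param into the same formula; the browser lists full-precision figures.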
Model Browser
DeepSeek R1 14B (Popular): 14B · 28.0 GB
DeepSeek R1 32B (Popular): 32B · 64.0 GB
DeepSeek R1 70B (Popular): 70B · 140.0 GB
DeepSeek R1 7B (Popular): 7B · 14.0 GB
DeepSeek R1 8B (Popular): 8B · 16.0 GB
Gemma 3 12B (Popular): 12B · 24.0 GB
Llama 3.1 70B (Popular): 70B · 140.0 GB
Llama 3.1 8B (Popular): 8B · 16.0 GB
Llama 3.3 70B (Popular): 70B · 140.0 GB
Llama 4 Scout 109B (Popular): 109B · 218.0 GB
Ministral 7B (Popular): 7B · 14.0 GB
Mistral 7B v0.3 (Popular): 7B · 14.0 GB
Mistral NeMo 12B (Popular): 12B · 24.0 GB
Phi-3 Medium 14B (Popular): 14B · 28.0 GB
Phi-4 14B (Popular): 14B · 28.0 GB
Qwen 2.5 14B (Popular): 14B · 28.0 GB
Qwen 2.5 7B (Popular): 7B · 14.0 GB
Qwen 3 14B (Popular): 14B · 28.0 GB
Qwen 3 32B (Popular): 32B · 64.0 GB
Qwen 3 8B (Popular): 8B · 16.0 GB
Command R 35B: 35B · 70.0 GB
Command R+ 104B: 104B · 208.0 GB
DeepSeek R1 1.5B: 1.5B · 3.0 GB
DeepSeek V2 236B: 236B · 472.0 GB
DeepSeek V2 Lite 16B: 16B · 32.0 GB
DeepSeek V3 671B: 671B · 1342.0 GB
Gemma 2 27B: 27B · 54.0 GB
Gemma 2 2B: 2B · 4.0 GB
Gemma 2 9B: 9B · 18.0 GB
Gemma 3 1B: 1B · 2.0 GB
Gemma 3 27B: 27B · 54.0 GB
Gemma 3 4B: 4B · 8.0 GB
Llama 3.1 405B: 405B · 810.0 GB
Llama 3.2 1B: 1B · 2.0 GB
Llama 3.2 3B: 3B · 6.0 GB
Llama 4 Maverick 400B: 400B · 800.0 GB
Ministral 3B: 3B · 6.0 GB
Mistral Large 3 675B: 675B · 1350.0 GB
Mistral Small 24B: 24B · 48.0 GB
Mixtral 8x22B: 141B · 282.0 GB
Mixtral 8x7B: 46.7B · 93.4 GB
Phi-3 Mini 3.8B: 3.8B · 7.6 GB
Phi-4 Mini 3.8B: 3.8B · 7.6 GB
Qwen 2.5 0.5B: 0.5B · 1.0 GB
Qwen 2.5 1.5B: 1.5B · 3.0 GB
Qwen 2.5 32B: 32B · 64.0 GB
Qwen 2.5 3B: 3B · 6.0 GB
Qwen 2.5 72B: 72B · 144.0 GB
Qwen 3 0.6B: 0.6B · 1.2 GB
Qwen 3 1.7B: 1.7B · 3.4 GB
Qwen 3 4B: 4B · 8.0 GB
Qwen 3 MoE 235B-A22B: 235B · 470.0 GB
Qwen 3 MoE 30B-A3B: 30B · 60.0 GB
Your Stack
Add models from the browser to start building your stack.
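Once models are added, the stack total is just the sum of the per-model estimates. A hypothetical example (the selection below is illustrative, not part of the page), using the same 2-bytes-per-parameter rule as the browser figures:

```python
# Hypothetical selection pulled from the model browser above (sizes in billions of parameters).
stack = {
    "Llama 3.1 8B": 8.0,
    "Qwen 2.5 14B": 14.0,
    "Gemma 3 27B": 27.0,
}

# Same weights-only FP16 rule as the browser figures: GB = billions of params * 2.
total_gb = sum(params_b * 2.0 for params_b in stack.values())
print(f"Total VRAM: {total_gb:.1f} GB")  # Total VRAM: 98.0 GB
```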