SlothLab Tools

AI Model Stack Builder

Calculate total VRAM for your multi-model AI setup.

Model Browser

DeepSeek R1 14B (Popular)
14B · 28.0 GB
DeepSeek R1 32B (Popular)
32B · 64.0 GB
DeepSeek R1 70B (Popular)
70B · 140.0 GB
DeepSeek R1 7B (Popular)
7B · 14.0 GB
DeepSeek R1 8B (Popular)
8B · 16.0 GB
Gemma 3 12B (Popular)
12B · 24.0 GB
Llama 3.1 70B (Popular)
70B · 140.0 GB
Llama 3.1 8B (Popular)
8B · 16.0 GB
Llama 3.3 70B (Popular)
70B · 140.0 GB
Llama 4 Scout 109B (Popular)
109B · 218.0 GB
Ministral 7B (Popular)
7B · 14.0 GB
Mistral 7B v0.3 (Popular)
7B · 14.0 GB
Mistral NeMo 12B (Popular)
12B · 24.0 GB
Phi-3 Medium 14B (Popular)
14B · 28.0 GB
Phi-4 14B (Popular)
14B · 28.0 GB
Qwen 2.5 14B (Popular)
14B · 28.0 GB
Qwen 2.5 7B (Popular)
7B · 14.0 GB
Qwen 3 14B (Popular)
14B · 28.0 GB
Qwen 3 32B (Popular)
32B · 64.0 GB
Qwen 3 8B (Popular)
8B · 16.0 GB
Command R 35B
35B · 70.0 GB
Command R+ 104B
104B · 208.0 GB
DeepSeek R1 1.5B
1.5B · 3.0 GB
DeepSeek V2 236B
236B · 472.0 GB
DeepSeek V2 Lite 16B
16B · 32.0 GB
DeepSeek V3 671B
671B · 1342.0 GB
Gemma 2 27B
27B · 54.0 GB
Gemma 2 2B
2B · 4.0 GB
Gemma 2 9B
9B · 18.0 GB
Gemma 3 1B
1B · 2.0 GB
Gemma 3 27B
27B · 54.0 GB
Gemma 3 4B
4B · 8.0 GB
Llama 3.1 405B
405B · 810.0 GB
Llama 3.2 1B
1B · 2.0 GB
Llama 3.2 3B
3B · 6.0 GB
Llama 4 Maverick 400B
400B · 800.0 GB
Ministral 3B
3B · 6.0 GB
Mistral Large 3 675B
675B · 1350.0 GB
Mistral Small 24B
24B · 48.0 GB
Mixtral 8x22B
141B · 282.0 GB
Mixtral 8x7B
46.7B · 93.4 GB
Phi-3 Mini 3.8B
3.8B · 7.6 GB
Phi-4 Mini 3.8B
3.8B · 7.6 GB
Qwen 2.5 0.5B
0.5B · 1.0 GB
Qwen 2.5 1.5B
1.5B · 3.0 GB
Qwen 2.5 32B
32B · 64.0 GB
Qwen 2.5 3B
3B · 6.0 GB
Qwen 2.5 72B
72B · 144.0 GB
Qwen 3 0.6B
0.6B · 1.2 GB
Qwen 3 1.7B
1.7B · 3.4 GB
Qwen 3 4B
4B · 8.0 GB
Qwen 3 MoE 235B-A22B
235B · 470.0 GB
Qwen 3 MoE 30B-A3B
30B · 60.0 GB
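Every entry above lists its VRAM as exactly 2 GB per billion parameters, which implies the estimates assume FP16/BF16 weights (2 bytes per parameter). A minimal sketch of that arithmetic, summed over a stack of models — the function names here are illustrative, not the tool's actual code, and the calculation covers weights only (KV cache and activation overhead are not included):

```python
# Sketch of the stack builder's implied VRAM math: 2 bytes per
# parameter (FP16/BF16 weights), summed across all loaded models.
# Function names are hypothetical; weight memory only.

def model_vram_gb(params_billion: float, bytes_per_param: float = 2.0) -> float:
    """Estimated weight memory in GB for a single model."""
    return params_billion * bytes_per_param

def stack_vram_gb(params_billions: list[float], bytes_per_param: float = 2.0) -> float:
    """Total VRAM to hold every model in the stack simultaneously."""
    return sum(model_vram_gb(p, bytes_per_param) for p in params_billions)

# Example: Qwen 2.5 7B + Gemma 2 9B at FP16
print(stack_vram_gb([7, 9]))  # 32.0
```

This reproduces the catalog figures, e.g. `model_vram_gb(14)` gives the 28.0 GB shown for the 14B models; lowering `bytes_per_param` (e.g. ~0.5 for 4-bit quantization) would shrink the estimates accordingly.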

Your Stack

Add models from the browser to start building your stack.

Your Hardware