Pricing built to scale from weekend project to global powerhouse
Non-profit and academic organizations: please contact us for discounting.
Explorer
For individuals, students, and hobbyists
Free Forever!
- ✓ Includes:
- ∞ Unlimited public repositories with unlimited collaborators
- ✓ 5 private repositories (maximum 3 collaborators)
- ✓ 50 GB of data storage
- ✓ 50 GB of data transfer
Hacker
For small teams and larger projects
$30.00 per month
- ✓ Everything in Explorer +
- ∞ Unlimited private repositories
- ✓ 100 GB of data storage (more available)
- ✓ 100 GB of data transfer (more available)
- ✓ $5 free compute credits (more available)
Pro
For complex projects with larger data sets
$60.00 per month
- ✓ Everything in Hacker +
- ✓ External storage support (AWS, Azure, Google)
- ✓ 500 GB of data transfer (more available)
- ✓ $20 free compute credits (more available)
GPU Pricing
Available GPU instances for model deployment and fine-tuning
| Instance Type | GPU | vCPU | VRAM | System RAM | Price |
|---|---|---|---|---|---|
| 1x2 - 1 vCPU, 2 GiB RAM | CPU Only | 1 | N/A | 2 GiB | $0.03/hr |
| 1x4 - 1 vCPU, 4 GiB RAM | CPU Only | 1 | N/A | 4 GiB | $0.05/hr |
| 2x8 - 2 vCPUs, 8 GiB RAM | CPU Only | 2 | N/A | 8 GiB | $0.10/hr |
| 4x16 - 4 vCPUs, 16 GiB RAM | CPU Only | 4 | N/A | 16 GiB | $0.21/hr |
| 8x32 - 8 vCPUs, 32 GiB RAM | CPU Only | 8 | N/A | 32 GiB | $0.41/hr |
| T4x4x16 - 1 T4 GPU, 16 GiB VRAM, 4 vCPUs, 16 GiB RAM | 1x T4 | 4 | 16 GiB | 16 GiB | $0.59/hr |
| L4:4x16 - 1 L4 GPU, 24 GiB VRAM, 4 vCPUs, 16 GiB RAM | 1x L4 | 4 | 24 GiB | 16 GiB | $0.71/hr |
| 16x64 - 16 vCPUs, 64 GiB RAM | CPU Only | 16 | N/A | 64 GiB | $0.83/hr |
| T4x8x32 - 1 T4 GPU, 16 GiB VRAM, 8 vCPUs, 32 GiB RAM | 1x T4 | 8 | 16 GiB | 32 GiB | $0.85/hr |
| A10Gx4x16 - 1 A10G GPU, 24 GiB VRAM, 4 vCPUs, 16 GiB RAM | 1x A10G | 4 | 24 GiB | 16 GiB | $1.05/hr |
| A10Gx8x32 - 1 A10G GPU, 24 GiB VRAM, 8 vCPUs, 32 GiB RAM | 1x A10G | 8 | 24 GiB | 32 GiB | $1.27/hr |
| T4x16x64 - 1 T4 GPU, 16 GiB VRAM, 16 vCPUs, 64 GiB RAM | 1x T4 | 16 | 16 GiB | 64 GiB | $1.35/hr |
| A10Gx16x64 - 1 A10G GPU, 24 GiB VRAM, 16 vCPUs, 64 GiB RAM | 1x A10G | 16 | 24 GiB | 64 GiB | $1.70/hr |
| L4:2x24x96 - 2 L4 GPUs, 48 GiB VRAM, 24 vCPUs, 96 GiB RAM | 2x L4 | 24 | 48 GiB | 96 GiB | $2.00/hr |
| T4:2x24x96 - 2 T4 GPUs, 32 GiB VRAM, 24 vCPUs, 96 GiB RAM | 2x T4 | 24 | 32 GiB | 96 GiB | $2.20/hr |
| A100:12x144 - 1 A100 GPU, 80 GiB VRAM, 12 vCPUs, 144 GiB RAM | 1x A100 | 12 | 80 GiB | 144 GiB | $2.25/hr |
| A10G:2x24x96 - 2 A10G GPUs, 48 GiB VRAM, 24 vCPUs, 96 GiB RAM | 2x A10G | 24 | 48 GiB | 94 GiB | $2.96/hr |
| V100x8x61 - 1 V100 GPU, 16 GiB VRAM, 8 vCPUs, 61 GiB RAM | 1x V100 | 8 | 16 GiB | 61 GiB | $3.67/hr |
| H100MIG:3gx13x117 - 1 H100 MIG GPU, 40 GiB VRAM, 13 vCPUs, 117 GiB RAM | 1x H100_40GB | 13 | 40 GiB | 117 GiB | $3.75/hr |
| H100x26x234 - 1 H100 GPU, 80 GiB VRAM, 26 vCPUs, 234 GiB RAM | 1x H100 | 26 | 80 GiB | 234 GiB | $3.75/hr |
| L4:4x48x192 - 4 L4 GPUs, 96 GiB VRAM, 48 vCPUs, 192 GiB RAM | 4x L4 | 48 | 96 GiB | 192 GiB | $4.00/hr |
| T4:4x48x192 - 4 T4 GPUs, 64 GiB VRAM, 48 vCPUs, 192 GiB RAM | 4x T4 | 48 | 64 GiB | 192 GiB | $4.40/hr |
| A100:2x24x288 - 2 A100 GPUs, 160 GiB VRAM, 24 vCPUs, 288 GiB RAM | 2x A100 | 24 | 160 GiB | 288 GiB | $4.50/hr |
| A10G:4x48x192 - 4 A10G GPUs, 96 GiB VRAM, 48 vCPUs, 192 GiB RAM | 4x A10G | 48 | 96 GiB | 188 GiB | $5.92/hr |
| H100:2x52x468 - 2 H100 GPUs, 160 GiB VRAM, 52 vCPUs, 468 GiB RAM | 2x H100 | 52 | 160 GiB | 468 GiB | $7.50/hr |
| A100:4x36x288 - 4 A100 GPUs, 320 GiB VRAM, 48 vCPUs, 576 GiB RAM | 4x A100 | 48 | 320 GiB | 576 GiB | $9.00/hr |
| H100:4x104x936 - 4 H100 GPUs, 320 GiB VRAM, 104 vCPUs, 936 GiB RAM | 4x H100 | 104 | 320 GiB | 936 GiB | $15.00/hr |
| A10G:8x192x768 - 8 A10G GPUs, 192 GiB VRAM, 192 vCPUs, 768 GiB RAM | 8x A10G | 192 | 192 GiB | 750 GiB | $17.00/hr |
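Instance charges are hourly, so the cost of a job is simply the instance's hourly rate multiplied by wall-clock hours. A minimal sketch of that arithmetic in Python (rates are copied from the table above; the dictionary and function names are illustrative, not part of any API):

```python
# Estimate the cost of running a compute instance for a given
# number of hours. Rates ($/hr) are from the GPU pricing table.
HOURLY_RATES = {
    "1x2": 0.03,         # CPU Only, 1 vCPU, 2 GiB RAM
    "T4x4x16": 0.59,     # 1x T4, 16 GiB VRAM
    "A10Gx8x32": 1.27,   # 1x A10G, 24 GiB VRAM
    "H100x26x234": 3.75, # 1x H100, 80 GiB VRAM
}

def run_cost(instance: str, hours: float) -> float:
    """Hourly rate times wall-clock hours, rounded to cents."""
    return round(HOURLY_RATES[instance] * hours, 2)

# A 6-hour job on a single-A10G instance:
print(run_cost("A10Gx8x32", 6))  # 7.62
```

The same multiplication applies to the time-based fine-tuning prices below: for example, a 4-hour LoRA fine-tune on a single H100 would come to 4 × $3.75 = $15.00.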
Fine-Tuning Pricing
Models available for fine-tuning and dedicated deployment
| Model | Pricing Method | Dedicated Inference | Full Fine-Tune | LoRA Fine-Tune |
|---|---|---|---|---|
| Llama 3.1 8B Instruct (meta-llama/Llama-3.1-8B-Instruct, text-to-text) | Time-based | 1x H100, $3.75/hr | 1x H100, $3.75/hr | 1x H100, $3.75/hr |
| Llama 3.2 3B Instruct (meta-llama/Llama-3.2-3B-Instruct, text-to-text) | Time-based | 1x H100, $3.75/hr | 1x H100, $3.75/hr | 1x H100, $3.75/hr |
| openai/gpt-oss-20b (text-to-text) | Time-based | 1x H100, $3.75/hr | 1x H100, $3.75/hr | 1x H100, $3.75/hr |
Serverless Model Pricing
Pricing for instant access to models
| Model | Pricing Method | Inference Cost |
|---|---|---|
| codestral-2405 (text-to-text) | Token-based | $0.20 / 1M input tokens; $0.60 / 1M output tokens |
| Gemini 1.5 Flash (gemini-1-5-flash, text-to-text) | Token-based | $0.02 / 1M input tokens; $0.02 / 1M output tokens |
| gpt-4o (text-to-text) | Token-based | $2.50 / 1M input tokens; $10.00 / 1M output tokens |
| gpt-4o-mini (text-to-text) | Token-based | $0.15 / 1M input tokens; $0.60 / 1M output tokens |
| meta-llama/Meta-Llama-3.1-8B-Instruct-Turbo (meta-llama-meta-llama-3-1-8b-instruct-turbo, text-to-text) | Token-based | $0.18 / 1M input tokens; $0.18 / 1M output tokens |
| ministral-3b-latest (text-to-text) | Token-based | $0.04 / 1M input tokens; $0.04 / 1M output tokens |
| ministral-8b-latest (text-to-text) | Token-based | $0.10 / 1M input tokens; $0.10 / 1M output tokens |
| mistral-large-2407 (text-to-text) | Token-based | $2.00 / 1M input tokens; $6.00 / 1M output tokens |
| mistral-nemo (text-to-text) | Token-based | $0.15 / 1M input tokens; $0.15 / 1M output tokens |
| mistral-small-2409 (text-to-text) | Token-based | $0.20 / 1M input tokens; $0.60 / 1M output tokens |
| o1-mini (text-to-text) | Token-based | $3.00 / 1M input tokens; $12.00 / 1M output tokens |
| o1-preview (text-to-text) | Token-based | $15.00 / 1M input tokens; $60.00 / 1M output tokens |
| openai/gpt-oss-20b (text-to-text) | Token-based | $0.07 / 1M input tokens; $0.30 / 1M output tokens |
| open-mistral-7b (text-to-text) | Token-based | $0.25 / 1M input tokens; $0.25 / 1M output tokens |
| open-mixtral-8x22b (text-to-text) | Token-based | $2.00 / 1M input tokens; $6.00 / 1M output tokens |
| open-mixtral-8x7b (text-to-text) | Token-based | $0.70 / 1M input tokens; $0.70 / 1M output tokens |
| pixtral-12b (text-to-text) | Token-based | $0.15 / 1M input tokens; $0.15 / 1M output tokens |
| Seedream 4.0 (seedream-4, text-to-image / image-to-image) | Per Image | N/A |
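Token-based pricing bills input and output tokens separately, each at its own per-million-token rate. A small sketch of the arithmetic (rates are copied from the table above; the helper and dictionary names are illustrative):

```python
# Serverless request cost: input and output tokens are billed
# separately, each at a per-1M-token rate from the table above.
RATES = {  # (input $/1M tokens, output $/1M tokens)
    "gpt-4o": (2.50, 10.00),
    "gpt-4o-mini": (0.15, 0.60),
    "mistral-nemo": (0.15, 0.15),
}

def token_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Cost in dollars for one request's token usage."""
    in_rate, out_rate = RATES[model]
    cost = (input_tokens / 1_000_000) * in_rate \
         + (output_tokens / 1_000_000) * out_rate
    return round(cost, 4)

# 200k input tokens and 50k output tokens on gpt-4o-mini:
print(token_cost("gpt-4o-mini", 200_000, 50_000))  # 0.06
```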