Pricing built to scale from weekend project to global powerhouse

Non-profit and academic organizations: please contact us about discounted pricing.

Explorer
For individuals, students, and hobbyists
Free forever
Includes:
  • Unlimited public repositories with unlimited collaborators
  • 5 private repositories (maximum 3 collaborators)
  • 50 GB of data storage
  • 50 GB of data transfer
Hacker
For small teams and larger projects
$30.00 per month
Everything in Explorer, plus:
  • Unlimited private repositories
  • 100 GB of data storage (more available)
  • 100 GB of data transfer (more available)
  • $5 of free compute credits (more available)
Pro
For complex projects with larger data sets
$60.00 per month
Everything in Hacker, plus:
  • External storage support (AWS, Azure, Google)
  • 500 GB of data transfer (more available)
  • $20 of free compute credits (more available)

GPU Pricing

Available GPU instances for model deployment and fine-tuning

Instance Type      | GPU           | vCPU | VRAM    | System RAM | Price
1x2                | CPU only      | 1    | N/A     | 2 GiB      | $0.03/hr
1x4                | CPU only      | 1    | N/A     | 4 GiB      | $0.05/hr
2x8                | CPU only      | 2    | N/A     | 8 GiB      | $0.10/hr
4x16               | CPU only      | 4    | N/A     | 16 GiB     | $0.21/hr
8x32               | CPU only      | 8    | N/A     | 32 GiB     | $0.41/hr
T4x4x16            | 1x T4         | 4    | 16 GiB  | 16 GiB     | $0.59/hr
L4:4x16            | 1x L4         | 4    | 24 GiB  | 16 GiB     | $0.71/hr
16x64              | CPU only      | 16   | N/A     | 64 GiB     | $0.83/hr
T4x8x32            | 1x T4         | 8    | 16 GiB  | 32 GiB     | $0.85/hr
A10Gx4x16          | 1x A10G       | 4    | 24 GiB  | 16 GiB     | $1.05/hr
A10Gx8x32          | 1x A10G       | 8    | 24 GiB  | 32 GiB     | $1.27/hr
T4x16x64           | 1x T4         | 16   | 16 GiB  | 64 GiB     | $1.35/hr
A10Gx16x64         | 1x A10G       | 16   | 24 GiB  | 64 GiB     | $1.70/hr
L4:2x24x96         | 2x L4         | 24   | 48 GiB  | 96 GiB     | $2.00/hr
T4:2x24x96         | 2x T4         | 24   | 32 GiB  | 96 GiB     | $2.20/hr
A100:12x144        | 1x A100       | 12   | 80 GiB  | 144 GiB    | $2.25/hr
A10G:2x24x96       | 2x A10G       | 24   | 48 GiB  | 94 GiB     | $2.96/hr
V100x8x61          | 1x V100       | 8    | 16 GiB  | 61 GiB     | $3.67/hr
H100MIG:3gx13x117  | 1x H100_40GB  | 13   | 40 GiB  | 117 GiB    | $3.75/hr
H100x26x234        | 1x H100       | 26   | 80 GiB  | 234 GiB    | $3.75/hr
L4:4x48x192        | 4x L4         | 48   | 96 GiB  | 192 GiB    | $4.00/hr
T4:4x48x192        | 4x T4         | 48   | 64 GiB  | 192 GiB    | $4.40/hr
A100:2x24x288      | 2x A100       | 24   | 160 GiB | 288 GiB    | $4.50/hr
A10G:4x48x192      | 4x A10G       | 48   | 96 GiB  | 188 GiB    | $5.92/hr
H100:2x52x468      | 2x H100       | 52   | 160 GiB | 468 GiB    | $7.50/hr
A100:4x36x288      | 4x A100       | 48   | 320 GiB | 576 GiB    | $9.00/hr
H100:4x104x936     | 4x H100       | 104  | 320 GiB | 936 GiB    | $15.00/hr
A10G:8x192x768     | 8x A10G       | 192  | 192 GiB | 750 GiB    | $17.00/hr
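Since GPU instances are billed hourly, a job's cost is simply the hourly rate multiplied by the run time. A minimal sketch, using a few instance names and rates copied from the table above (the lookup table and helper function are illustrative, not part of any official SDK):

```python
# Hourly rates ($/hr) copied from the GPU pricing table above.
HOURLY_RATES = {
    "T4x4x16": 0.59,     # 1x T4, 4 vCPUs, 16 GiB RAM
    "A10Gx8x32": 1.27,   # 1x A10G, 8 vCPUs, 32 GiB RAM
    "H100x26x234": 3.75, # 1x H100, 26 vCPUs, 234 GiB RAM
}

def estimate_cost(instance: str, hours: float) -> float:
    """Estimated on-demand cost in dollars for a run of the given length."""
    return round(HOURLY_RATES[instance] * hours, 2)

# A 6-hour run on a single H100:
print(estimate_cost("H100x26x234", 6))  # 22.5
```

The same arithmetic applies to the time-based fine-tuning prices below.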

Fine-Tuning Pricing

Models available for fine-tuning and dedicated deployment

Model                                              | Modality     | Pricing Method | Dedicated Inference | Full Fine-Tune    | LoRA Fine-Tune
Llama 3.1 8B Instruct (meta-llama/Llama-3.1-8B-Instruct) | text-to-text | Time-based | 1x H100, $3.75/hr   | 1x H100, $3.75/hr | 1x H100, $3.75/hr
Llama 3.2 3B Instruct (meta-llama/Llama-3.2-3B-Instruct) | text-to-text | Time-based | 1x H100, $3.75/hr   | 1x H100, $3.75/hr | 1x H100, $3.75/hr
openai/gpt-oss-20b                                 | text-to-text | Time-based     | 1x H100, $3.75/hr   | 1x H100, $3.75/hr | 1x H100, $3.75/hr

Serverless Model Pricing

Pricing for instant access to models

Model                                        | Modality                      | Pricing Method | Inference Cost
codestral-2405                               | text-to-text                  | Token-based    | $0.20 / 1M input, $0.60 / 1M output
Gemini 1.5 Flash (gemini-1-5-flash)          | text-to-text                  | Token-based    | $0.02 / 1M input, $0.02 / 1M output
gpt-4o                                       | text-to-text                  | Token-based    | $2.50 / 1M input, $10.00 / 1M output
gpt-4o-mini                                  | text-to-text                  | Token-based    | $0.15 / 1M input, $0.60 / 1M output
meta-llama/Meta-Llama-3.1-8B-Instruct-Turbo  | text-to-text                  | Token-based    | $0.18 / 1M input, $0.18 / 1M output
ministral-3b-latest                          | text-to-text                  | Token-based    | $0.04 / 1M input, $0.04 / 1M output
ministral-8b-latest                          | text-to-text                  | Token-based    | $0.10 / 1M input, $0.10 / 1M output
mistral-large-2407                           | text-to-text                  | Token-based    | $2.00 / 1M input, $6.00 / 1M output
mistral-nemo                                 | text-to-text                  | Token-based    | $0.15 / 1M input, $0.15 / 1M output
mistral-small-2409                           | text-to-text                  | Token-based    | $0.20 / 1M input, $0.60 / 1M output
o1-mini                                      | text-to-text                  | Token-based    | $3.00 / 1M input, $12.00 / 1M output
o1-preview                                   | text-to-text                  | Token-based    | $15.00 / 1M input, $60.00 / 1M output
openai/gpt-oss-20b                           | text-to-text                  | Token-based    | $0.07 / 1M input, $0.30 / 1M output
open-mistral-7b                              | text-to-text                  | Token-based    | $0.25 / 1M input, $0.25 / 1M output
open-mixtral-8x22b                           | text-to-text                  | Token-based    | $2.00 / 1M input, $6.00 / 1M output
open-mixtral-8x7b                            | text-to-text                  | Token-based    | $0.70 / 1M input, $0.70 / 1M output
pixtral-12b                                  | text-to-text                  | Token-based    | $0.15 / 1M input, $0.15 / 1M output
Seedream 4.0 (seedream-4)                    | text-to-image, image-to-image | Per Image      | N/A
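Token-based prices are quoted per million tokens, so the cost of a single request is (input tokens × input rate + output tokens × output rate) ÷ 1,000,000. A minimal sketch, with a few rates copied from the table above (the lookup table and helper are illustrative, not an official client):

```python
# (input $/1M tokens, output $/1M tokens), copied from the serverless table.
PER_MILLION = {
    "gpt-4o":       (2.50, 10.00),
    "gpt-4o-mini":  (0.15, 0.60),
    "mistral-nemo": (0.15, 0.15),
}

def request_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Estimated cost in dollars for one token-based request."""
    in_rate, out_rate = PER_MILLION[model]
    return (input_tokens * in_rate + output_tokens * out_rate) / 1_000_000

# 10k prompt tokens plus 2k completion tokens on gpt-4o:
cost = request_cost("gpt-4o", 10_000, 2_000)
print(f"${cost:.4f}")  # $0.0450
```

Note that output tokens are often several times more expensive than input tokens, so long completions dominate the bill for models like gpt-4o.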