Pricing built to scale from weekend project
to global powerhouse
Non-profit and academic organizations: please contact us about discounts.
Explorer
For individuals, students, and hobbyists
Free Forever!
Includes:
- ∞ Unlimited public repositories, with unlimited collaborators
- ✓ 5 private repositories (maximum 3 collaborators)
- ✓ 50 GB of data storage
- ✓ 50 GB of data transfer
Hacker
For small teams and larger projects
$30.00 per month
Everything in Explorer, plus:
- ∞ Unlimited private repositories
- ✓ 100 GB of data storage (more available)
- ✓ 100 GB of data transfer (more available)
- ✓ $5 in free compute credits (more available)
Pro
For complex projects with larger data sets
$60.00 per month
Everything in Hacker, plus:
- ✓ External storage support (AWS, Azure, Google)
- ✓ 500 GB of data transfer (more available)
- ✓ $20 in free compute credits (more available)
Model Pricing
All models are pay-as-you-go. No upfront costs, no subscription required.
| Model | Pricing Method | Inference Cost |
|---|---|---|
| GPT 5.5 (multi-to-text): OpenAI's newest frontier model, with improved reasoning over 5.4 at the same 1.05M context, a configurable thinking budget, and full tool-use support. | Token-based | $6.50 / 1M input tokens; $39.00 / 1M output tokens |
| GPT Image 2 (text-to-image): Text-to-image generation with photorealistic output, accurate text rendering, and strong prompt adherence. | Per Image | $0.2200 / image |
| Seedance 2.0 - Reference to Video (multi-to-video): Reference-guided video from a prompt plus optional image, video, and audio references. | Per Video Output Second | Regular: $0.40 / second; High Res: $0.80 / second |
| Happy Horse - Image to Video (multi-to-video): Image-to-video generation up to 1080P from a single reference image. | Per Video Output Second | Regular: $0.14 / second; High Res: $0.28 / second |
| Happy Horse - Reference to Video (multi-to-video): Reference-to-video generation up to 1080P with up to 9 reference images. | Per Video Output Second | Regular: $0.14 / second; High Res: $0.28 / second |
| Happy Horse - Text to Video (text-to-video): Text-to-video generation up to 1080P with configurable aspect ratio and duration. | Per Video Output Second | Regular: $0.14 / second; High Res: $0.28 / second |
| WAN 2.7 - Image to Video (multi-to-video): Animates images into video up to 15s at 1080P with first/last-frame guidance, video continuation, and optional driving audio. | Per Video Output Second | Regular: $0.10 / second; High Res: $0.15 / second |
| WAN 2.7 - Reference to Video (multi-to-video): Reference-guided video generation with character consistency, multi-character support, optional reference voices, and up to 1080P output. | Per Video Output Second | Regular: $0.10 / second; High Res: $0.15 / second |
| WAN 2.7 - Text to Video (text-to-video): Text-to-video with multi-shot generation, up to 1080P, 2-15s duration, and optional driving audio. | Per Video Output Second | Regular: $0.10 / second; High Res: $0.15 / second |
| DeepSeek V4 Flash (text-to-text): Cheap, fast DeepSeek V4, with 13B active params over a 1M context, well-suited to high-volume traffic and as a default daily driver. | Token-based | $0.18 / 1M input tokens; $0.36 / 1M output tokens |
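Token-based prices from the table above can be turned into a per-request estimate. A minimal sketch, using the GPT 5.5 rates; the function name is illustrative, not part of any API:

```python
# Estimate token-based inference cost for GPT 5.5 (rates from the table above).
GPT55_INPUT_PER_M = 6.50    # $ per 1M input tokens
GPT55_OUTPUT_PER_M = 39.00  # $ per 1M output tokens

def gpt55_cost(input_tokens: int, output_tokens: int) -> float:
    """Return the dollar cost of one request, rounded to 4 decimal places."""
    cost = (input_tokens / 1_000_000) * GPT55_INPUT_PER_M \
         + (output_tokens / 1_000_000) * GPT55_OUTPUT_PER_M
    return round(cost, 4)

# Example: a 20k-token prompt producing a 2k-token answer.
print(gpt55_cost(20_000, 2_000))  # → 0.208
```

The same arithmetic applies to DeepSeek V4 Flash with its own per-million rates.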
GPU Pricing
Dedicated GPUs are billed per second of use and automatically shut down after 15 minutes of inactivity.
| Instance Type | GPU | vCPU | VRAM | System RAM | Price |
|---|---|---|---|---|---|
| 1x2 | CPU Only | 1 | N/A | 2 GiB | $0.03/hr |
| 1x4 | CPU Only | 1 | N/A | 4 GiB | $0.05/hr |
| 2x8 | CPU Only | 2 | N/A | 8 GiB | $0.10/hr |
| 4x16 | CPU Only | 4 | N/A | 16 GiB | $0.21/hr |
| 8x32 | CPU Only | 8 | N/A | 32 GiB | $0.41/hr |
| T4x4x16 | 1x T4 | 4 | 16 GiB | 16 GiB | $0.59/hr |
| L4:4x16 | 1x L4 | 4 | 24 GiB | 16 GiB | $0.71/hr |
| 16x64 | CPU Only | 16 | N/A | 64 GiB | $0.83/hr |
| T4x8x32 | 1x T4 | 8 | 16 GiB | 32 GiB | $0.85/hr |
| A10Gx4x16 | 1x A10G | 4 | 24 GiB | 16 GiB | $1.05/hr |
| A10Gx8x32 | 1x A10G | 8 | 24 GiB | 32 GiB | $1.27/hr |
| T4x16x64 | 1x T4 | 16 | 16 GiB | 64 GiB | $1.35/hr |
| A10Gx16x64 | 1x A10G | 16 | 24 GiB | 64 GiB | $1.70/hr |
| L4:2x24x96 | 2x L4 | 24 | 48 GiB | 96 GiB | $2.00/hr |
| T4:2x24x96 | 2x T4 | 24 | 32 GiB | 96 GiB | $2.20/hr |
| A100:12x144 | 1x A100 | 12 | 80 GiB | 144 GiB | $2.25/hr |
| A10G:2x24x96 | 2x A10G | 24 | 48 GiB | 94 GiB | $2.96/hr |
| V100x8x61 | 1x V100 | 8 | 16 GiB | 61 GiB | $3.67/hr |
| H100x26x234 | 1x H100 | 26 | 80 GiB | 234 GiB | $3.75/hr |
| H100MIG:3gx13x117 | 1x H100_40GB | 13 | 40 GiB | 117 GiB | $3.75/hr |
| L4:4x48x192 | 4x L4 | 48 | 96 GiB | 192 GiB | $4.00/hr |
| T4:4x48x192 | 4x T4 | 48 | 64 GiB | 192 GiB | $4.40/hr |
| A100:2x24x288 | 2x A100 | 24 | 160 GiB | 288 GiB | $4.50/hr |
| A10G:4x48x192 | 4x A10G | 48 | 96 GiB | 188 GiB | $5.92/hr |
| H100:2x52x468 | 2x H100 | 52 | 160 GiB | 468 GiB | $7.50/hr |
| A100:4x36x288 | 4x A100 | 48 | 320 GiB | 576 GiB | $9.00/hr |
| H200 | 1x H200 | 8 | 137 GiB | 137 GiB | $9.98/hr |
| H100:4x104x936 | 4x H100 | 104 | 320 GiB | 936 GiB | $15.00/hr |
| A10G:8x192x768 | 8x A10G | 192 | 192 GiB | 750 GiB | $17.00/hr |
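Since dedicated GPUs are billed per second, the hourly rates above divide by 3600 to price a session. A minimal sketch; the instance names and rates are taken from the table, and the function is illustrative:

```python
# Per-second GPU billing: hourly rate / 3600, times seconds used.
RATE_PER_HOUR = {
    "T4x4x16": 0.59,
    "A100:12x144": 2.25,
    "H100x26x234": 3.75,
}

def session_cost(instance: str, seconds: int) -> float:
    """Dollar cost of `seconds` of use on `instance`, rounded to the cent."""
    return round(RATE_PER_HOUR[instance] * seconds / 3600, 2)

# A 40-minute run on a single H100:
print(session_cost("H100x26x234", 40 * 60))  # → 2.5
```

Note the 15-minute inactivity shutdown caps idle spend at a quarter-hour of the listed rate.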
Fine-Tuning Pricing
Fine-tuning is billed per second of training time; afterwards, the fine-tuned model can be deployed to a dedicated GPU for inference.
| Model | Pricing Method | Dedicated Inference | Full Fine-Tune | LoRA Fine-Tune |
|---|---|---|---|---|
| Gemma 4 E2B (`gemma-4-e2b-it`, multi-to-text) | Time-based | 1x H100, $3.75/hr | 1x H100, $3.75/hr | 1x H100, $3.75/hr |
| Gemma 4 E4B (`gemma-4-e4b-it`, multi-to-text) | Time-based | 1x H100, $3.75/hr | 1x H100, $3.75/hr | 1x H100, $3.75/hr |
| LTX-2.3 Pro (`ltx-2-3-pro`, multi-to-video) | Time-based | 1x H200, $9.98/hr | 1x H200, $9.98/hr | 1x H200, $9.98/hr |
| Qwen3.5 0.8B (`qwen3-5_0-8b`, multi-to-text) | Time-based | 1x H100, $3.75/hr | 1x H100, $3.75/hr | 1x H100, $3.75/hr |
| Qwen3.5 2B (`qwen3-5_2b`, multi-to-text) | Time-based | 1x H100, $3.75/hr | 1x H100, $3.75/hr | 1x H100, $3.75/hr |
| Qwen3.5 4B (`qwen3-5_4b`, multi-to-text) | Time-based | 1x H100, $3.75/hr | 1x H100, $3.75/hr | 1x H100, $3.75/hr |
| Qwen3.5 9B (`qwen3-5_9b`, multi-to-text) | Time-based | 1x H100, $3.75/hr | 1x H100, $3.75/hr | 1x H100, $3.75/hr |
| FLUX.2 Klein 4B (`black-forest-labs-flux-2-klein-4b`, multi-to-image) | Time-based | 1x H100, $3.75/hr | 1x H100, $3.75/hr | 1x H100, $3.75/hr |
| FLUX.2 Klein 9B (`black-forest-labs-flux-2-klein-9b`, multi-to-image) | Time-based | 1x H100, $3.75/hr | 1x H100, $3.75/hr | 1x H100, $3.75/hr |
| LTX-2 Pro (`ltx-2-19b-image-to-video`, multi-to-video) | Time-based | 1x H100, $3.75/hr | 1x H100, $3.75/hr | 1x H100, $3.75/hr |
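Because fine-tuning is billed per second at the listed GPU's hourly rate, a training run's bill is straightforward to estimate. A minimal sketch, assuming a run on one of the 1x H100 instances above; the function name is illustrative:

```python
# Estimate a fine-tuning bill: billed per second at the GPU's hourly rate.
H100_PER_HOUR = 3.75  # $/hr, the 1x H100 rate used by most models above

def finetune_cost(seconds: int, rate_per_hour: float = H100_PER_HOUR) -> float:
    """Dollar cost of a training run of `seconds` at the given hourly rate."""
    return round(rate_per_hour * seconds / 3600, 2)

# A 6-hour LoRA fine-tune of a Qwen3.5-class model on 1x H100:
print(finetune_cost(6 * 3600))  # → 22.5
```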