Run Models on Your Data
Choose the right model, get to the perfect prompt, and kick off the data flywheel.
Oxen makes it easy to improve your use of state-of-the-art AI.
16 models across 4 inference providers. New models added every week.
o1-mini
o1-mini is a fast, cost-efficient reasoning model tailored to coding, math, and science use cases. The model has 128K context and an October 2023 knowledge cutoff.
Input: $3.00 / Output: $12.00
gpt-4o
GPT-4o is OpenAI’s most advanced multimodal model; it is faster and cheaper than GPT-4 Turbo with stronger vision capabilities. The model has 128K context and an October 2023 knowledge cutoff.
Input: $2.50 / Output: $10.00
gpt-4o-mini
GPT-4o mini is OpenAI’s most cost-efficient small model; it is smarter and cheaper than GPT-3.5 Turbo and has vision capabilities. The model has 128K context and an October 2023 knowledge cutoff.
Input: $0.15 / Output: $0.60
o1-preview
o1-preview is OpenAI’s reasoning model for complex tasks. The model has 128K context and an October 2023 knowledge cutoff.
Input: $15.00 / Output: $60.00
mistral-large-2407
Top-tier reasoning for high-complexity tasks, for your most sophisticated needs.
Input: $2.00 / Output: $6.00
mistral-small-2409
Cost-efficient, fast, and reliable option for use cases such as translation, summarization, and sentiment analysis.
Input: $0.20 / Output: $0.60
codestral-2405
State-of-the-art Mistral model trained specifically for code tasks.
Input: $0.20 / Output: $0.60
ministral-3b-latest
Most efficient edge model.
Input: $0.04 / Output: $0.04
ministral-8b-latest
Powerful model for on-device use cases.
Input: $0.10 / Output: $0.10
pixtral-12b
Vision-capable small model.
Input: $0.15 / Output: $0.15
mistral-nemo
A state-of-the-art 12B model with 128K context, built in collaboration with NVIDIA.
Input: $0.15 / Output: $0.15
open-mistral-7b
A 7B transformer model, fast-deployed and easily customisable.
Input: $0.25 / Output: $0.25
open-mixtral-8x7b
A sparse Mixture-of-Experts (SMoE) model built from eight 7B experts. Uses 12.9B active parameters out of 45B total.
Input: $0.70 / Output: $0.70
open-mixtral-8x22b
Mixtral 8x22B is currently the most performant open model: a sparse Mixture-of-Experts (SMoE) of eight 22B experts that uses only 39B active parameters out of 141B total.
Input: $2.00 / Output: $6.00
Gemini 1.5 Flash
Fast, lightweight model for text generation and embeddings.
Input: $0.02 / Output: $0.02
meta-llama/Meta-Llama-3.1-8B-Instruct-Turbo
Input: $0.18 / Output: $0.18
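The Input / Output figures above can be turned into per-request cost estimates. A minimal sketch in Python, assuming the prices are quoted in USD per million tokens (the usual convention for these providers); the `PRICES` subset and the token counts are illustrative:

```python
# Input/output prices in USD per 1M tokens, copied from the listing above
# (assumption: the listed figures are per million tokens).
PRICES = {
    "gpt-4o-mini": (0.15, 0.60),
    "o1-mini": (3.00, 12.00),
    "mistral-small-2409": (0.20, 0.60),
}

def request_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Estimate the USD cost of a single request."""
    in_price, out_price = PRICES[model]
    return (input_tokens * in_price + output_tokens * out_price) / 1_000_000

# A 2,000-token prompt with a 500-token completion on gpt-4o-mini:
cost = request_cost("gpt-4o-mini", 2_000, 500)
```

Scaling this over a dataset (rows × average tokens per row) gives a quick budget check before kicking off a run.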