
The Lawkitt Router

The easiest way to use Lawkitt with the same model lineup available inside the Lawkitt Editor.

Bring your own keys, or use the Lawkitt Router for a streamlined experience.

25 models

Arcee AI: Trinity Large Preview (free)

Frontier-scale open-weight model built as a 400B-parameter sparse Mixture-of-Experts (13B active). Optimized for creative writing, chat, and agentic workflows with long-context support.

Provider: Arcee AI
Context Window: 131K
Input Price: Free
Output Price: Free

Claude Haiku 4.5

Claude Haiku 4.5 matches Sonnet 4's performance on coding, computer use, and agent tasks at substantially lower cost and faster speeds. It delivers near-frontier performance and Claude's unique character at a price point that works for scaled sub-agent deployments, free tier products, and intelligence-sensitive applications with budget constraints.

Provider: Anthropic
Context Window: 200K
Input Price: $1.00/1M tokens
Output Price: $5.00/1M tokens

Claude Opus 4.1

Claude Opus 4.1 is a drop-in replacement for Opus 4 that delivers superior performance and precision for real-world coding and agentic tasks. Opus 4.1 advances state-of-the-art coding performance to 74.5% on SWE-bench Verified, and handles complex, multi-step problems with more rigor and attention to detail.

Provider: Anthropic
Context Window: 200K
Input Price: $15.00/1M tokens
Output Price: $75.00/1M tokens

Claude Opus 4.5

Claude Opus 4.5 is Anthropic's latest model in the Opus series, meant for demanding reasoning tasks and complex problem solving. This model has improvements in general intelligence and vision compared to previous iterations. In addition, it is suited for difficult coding tasks and agentic workflows, especially those with computer use and tool use, and can effectively handle context usage and external memory files.

Provider: Anthropic
Context Window: 200K
Input Price: $5.00/1M tokens
Output Price: $25.00/1M tokens

Claude Sonnet 4.5

Claude Sonnet 4.5 is the newest model in the Sonnet series, offering improvements over Sonnet 4.

Provider: Anthropic
Context Window: 1M
Input Price: $3.00/1M tokens
Output Price: $15.00/1M tokens

DeepSeek: DeepSeek V3.2

High-efficiency model with sparse attention and strong reasoning plus agentic tool-use performance.

Provider: DeepSeek
Context Window: 164K
Input Price: $0.25/1M tokens
Output Price: $0.38/1M tokens

DeepSeek: R1 0528 (free)

May 28th update to the original DeepSeek R1. Its performance is on par with OpenAI o1, but it is open source with fully open reasoning tokens. The model has 671B total parameters, with 37B active per inference pass.

Provider: DeepSeek
Context Window: 164K
Input Price: Free
Output Price: Free

Gemini 2.5 Pro

Gemini 2.5 Pro is Google's most advanced reasoning Gemini model, capable of solving complex problems. It can comprehend vast datasets and challenging problems from different information sources, including text, audio, images, video, and even entire code repositories.

Provider: Google
Context Window: 1M
Input Price: $1.25/1M tokens
Output Price: $10.00/1M tokens

Gemini 3 Flash

Google's most intelligent model built for speed, combining frontier intelligence with superior search and grounding.

Provider: Google
Context Window: 1M
Input Price: $0.50/1M tokens
Output Price: $3.00/1M tokens

Gemini 3 Pro Preview

This model improves on Gemini 2.5 Pro and is geared toward challenging tasks, especially those involving complex reasoning or agentic workflows. Highlighted improvements include coding, multi-step function calling, planning, reasoning, deep-knowledge tasks, and instruction following.

Provider: Google
Context Window: 1M
Input Price: $2.00/1M tokens
Output Price: $12.00/1M tokens

GLM 4.6

As the latest iteration in the GLM series, GLM-4.6 achieves comprehensive enhancements across multiple domains, including real-world coding, long-context processing, reasoning, searching, writing, and agentic applications.

Provider: Z.AI
Context Window: 200K
Input Price: $0.45/1M tokens
Output Price: $1.80/1M tokens

GLM 4.7

GLM-4.7 is Z.AI's latest flagship model, with major upgrades focused on two key areas: stronger coding capabilities and more stable multi-step reasoning and execution.

Provider: Z.AI
Context Window: 200K
Input Price: $0.60/1M tokens
Output Price: $2.20/1M tokens

GPT-5

GPT-5 is OpenAI's flagship language model, excelling at complex reasoning, broad real-world knowledge, code-intensive work, and multi-step agentic tasks.

Provider: OpenAI
Context Window: 400K
Input Price: $1.25/1M tokens
Output Price: $10.00/1M tokens

GPT-5 mini

GPT-5 mini is a cost-optimized model that excels at reasoning and chat tasks, offering a balance of speed, cost, and capability.

Provider: OpenAI
Context Window: 400K
Input Price: $0.25/1M tokens
Output Price: $2.00/1M tokens

GPT-5.1

An upgraded version of GPT-5 that adapts thinking time more precisely to the question to spend more time on complex questions and respond more quickly to simpler tasks.

Provider: OpenAI
Context Window: 400K
Input Price: $1.25/1M tokens
Output Price: $10.00/1M tokens

GPT-5.1-Codex-Max

OpenAI's most intelligent coding model, optimized for long-horizon, agentic coding tasks.

Provider: OpenAI
Context Window: 400K
Input Price: $1.25/1M tokens
Output Price: $10.00/1M tokens

GPT-5.2

OpenAI's flagship model for coding and agentic tasks across industries.

Provider: OpenAI
Context Window: 400K
Input Price: $1.75/1M tokens
Output Price: $14.00/1M tokens

GPT-5.2-Codex

GPT-5.2-Codex is a version of GPT-5.2 further optimized for agentic coding in Codex, including improvements on long-horizon work through context compaction, stronger performance on large code changes like refactors and migrations, improved performance in Windows environments, and significantly stronger cybersecurity capabilities.

Provider: OpenAI
Context Window: 400K
Input Price: $1.75/1M tokens
Output Price: $14.00/1M tokens

Kimi K2 Turbo

Kimi K2 Turbo is the high-speed version of Kimi K2. It uses the same model parameters as Kimi K2, but output speed is increased to 60 tokens per second (up to 100 tokens per second), and the context length is 256K.

Provider: Moonshot AI
Context Window: 256K
Input Price: $2.40/1M tokens
Output Price: $10.00/1M tokens

MiniMax M2.1

MiniMax M2.1 is MiniMax's latest model, optimized specifically for robustness in coding, tool use, instruction following, and long-horizon planning.

Provider: MiniMax
Context Window: 205K
Input Price: $0.30/1M tokens
Output Price: $1.20/1M tokens

MiniMax M2.1 Lightning

MiniMax-M2.1-lightning is a faster version of MiniMax-M2.1, offering the same quality at significantly higher throughput (~100 TPS output, versus ~60 TPS for MiniMax-M2).

Provider: MiniMax
Context Window: 205K
Input Price: $0.30/1M tokens
Output Price: $2.40/1M tokens

MoonshotAI: Kimi K2.5

Native multimodal model with strong visual coding capability and agentic tool-calling.

Provider: Moonshot AI
Context Window: 262K
Input Price: $0.50/1M tokens
Output Price: $2.80/1M tokens

OpenAI: gpt-oss-120b

Open-weight 117B-parameter MoE model for high-reasoning and agentic use cases with configurable reasoning depth and native tool use.

Provider: OpenAI
Context Window: 131K
Input Price: $0.04/1M tokens
Output Price: $0.19/1M tokens

Qwen: Qwen3 Max

Updated Qwen3 release with improved reasoning, instruction following, multilingual support, and long-context performance.

Provider: Qwen
Context Window: 256K
Input Price: $1.20/1M tokens
Output Price: $6.00/1M tokens

xAI: Grok 4.1 Fast

Agentic tool-calling model built for real-world use cases like research and support with a 2M context window.

Provider: xAI
Context Window: 2M
Input Price: $0.20/1M tokens
Output Price: $0.50/1M tokens

Frequently Asked Questions

What are AI model providers?

Model providers offer language models with different capabilities, pricing, and data policies.

What is the Lawkitt Router?

It's our curated router for the models that power the Lawkitt Editor.

No separate accounts or API key management required.

Do I have to use the Lawkitt Router?

No. You can connect to local LLMs via providers like Ollama and LM Studio.
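As a rough sketch of what a local connection looks like: Ollama serves an OpenAI-compatible chat endpoint on localhost (port 11434 by default). The model name `llama3` and the helper function below are illustrative assumptions, not part of Lawkitt.

```python
import json

def build_chat_request(model: str, prompt: str,
                       base_url: str = "http://localhost:11434"):
    """Build an OpenAI-compatible chat request for a local Ollama server.

    Ollama exposes /v1/chat/completions on port 11434 by default; the
    model name is whatever you've pulled locally (e.g. `ollama pull llama3`).
    """
    url = f"{base_url}/v1/chat/completions"
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }
    return url, json.dumps(payload)

url, body = build_chat_request("llama3", "Summarize this clause.")
# POST `body` to `url` with any HTTP client; a local server needs no API key.
```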

How is pricing calculated?

Model pricing is based on token usage for input and output, measured per million tokens.
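As a worked example using the GPT-5 rates listed above ($1.25/1M input, $10.00/1M output), the cost of a single request can be estimated like this:

```python
def request_cost(input_tokens: int, output_tokens: int,
                 input_price_per_m: float, output_price_per_m: float) -> float:
    """Estimate the cost of one request, with prices quoted per 1M tokens."""
    return (input_tokens / 1_000_000) * input_price_per_m \
         + (output_tokens / 1_000_000) * output_price_per_m

# A request with 12,000 input tokens and 3,000 output tokens at GPT-5 rates:
cost = request_cost(input_tokens=12_000, output_tokens=3_000,
                    input_price_per_m=1.25, output_price_per_m=10.00)
print(f"${cost:.4f}")  # → $0.0450
```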

How is my data treated?

Lawkitt does not train on your data. Each model provider has its own privacy policy.

Where can I see Lawkitt pricing?