Pricing
Our pricing model is simple and transparent:
- You pay for what you use.
- We charge based on the number of tokens you use and the price of the LLM (Large Language Model) you select, then apply a 20% upcharge on top of the LLM's cost.
How It Works
- Choose your LLM: Use any supported LLM (currently only OpenAI and Anthropic models are supported).
- Track your usage: We count the number of tokens you use with each LLM.
- Calculate the cost:
  - We calculate the cost based on the LLM's published pricing for the tokens you used.
  - We then add a 20% upcharge to that amount.
Example:
If you use Claude 3.7 Sonnet, it would usually cost $3 per million input tokens and $15 per million output tokens with the Anthropic API.
When using our SDK, it would cost you $3.60 per million input tokens and $18 per million output tokens.
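To make the math concrete, here is a minimal sketch of the calculation. The function name and rate values below are illustrative only and are not part of our SDK's API.

```python
UPCHARGE = 0.20  # 20% upcharge applied on top of the LLM provider's price

def billed_cost(input_tokens: int, output_tokens: int,
                input_price_per_mtok: float, output_price_per_mtok: float) -> float:
    """Return the billed cost in dollars: provider cost plus the 20% upcharge."""
    provider_cost = (input_tokens / 1_000_000) * input_price_per_mtok \
                  + (output_tokens / 1_000_000) * output_price_per_mtok
    return provider_cost * (1 + UPCHARGE)

# Claude 3.7 Sonnet example: $3 per million input tokens, $15 per million output tokens.
# For one million input tokens and one million output tokens:
print(billed_cost(1_000_000, 1_000_000, 3.0, 15.0))  # 21.6  (= $3.60 + $18.00)
```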
Our SDK currently lets you use remote sandboxes for free, but this may change in the future.