We offer a variety of open-source models for fine-tuning and serverless inference. Additionally, you can use your own OpenAI API key to fine-tune and serve OpenAI models directly through our platform. For custom fine-tuned models, FinetuneDB handles deployment and makes them accessible via the inference API. Below is a preview of the models we offer.

Meta

  • Llama-v3.1-8b-instruct:
    • Price: $0.00030 per 1k tokens (Input/Output)
  • Llama-v3.1-70b-instruct:
    • Price: $0.00110 per 1k tokens (Input/Output)

Mistral

  • Mixtral-8x7b-instruct:
    • Price: $0.00080 per 1k tokens (Input/Output)
  • Mixtral-8x22b-instruct:
    • Price: $0.00080 per 1k tokens (Input/Output)

OpenAI Models (via your API key)

  • gpt-4o-2024-08-06:
    • Price: $0.00250 per 1k tokens (Input)
    • Price: $0.01000 per 1k tokens (Output)
  • gpt-4o-mini-2024-07-18:
    • Price: $0.00015 per 1k tokens (Input)
    • Price: $0.00060 per 1k tokens (Output)
  • gpt-3.5-turbo-0125:
    • Price: $0.00050 per 1k tokens (Input)
    • Price: $0.00150 per 1k tokens (Output)
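To estimate what a request will cost from the per-1k-token prices above, multiply each token count by its rate. A minimal sketch (the token counts are illustrative):

```python
def request_cost(input_tokens: int, output_tokens: int,
                 input_price_per_1k: float, output_price_per_1k: float) -> float:
    """Return the cost in USD of one request, given per-1k-token prices."""
    return (input_tokens / 1000) * input_price_per_1k \
         + (output_tokens / 1000) * output_price_per_1k

# Example: gpt-4o-mini-2024-07-18 at $0.00015 input / $0.00060 output per 1k tokens,
# for a request with 2,000 input tokens and 500 output tokens.
cost = request_cost(2000, 500, 0.00015, 0.00060)
print(f"${cost:.5f}")  # $0.00060
```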

Custom Model Deployment and Integration

In addition to using our pre-configured models, FinetuneDB allows you to deploy custom models and integrate self-hosted models.

Deploying Custom Models

You can deploy custom fine-tuned models to FinetuneDB, which handles the deployment and makes your model accessible via the inference API.
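As an illustration, a request to a deployed model can be assembled as a standard OpenAI-style chat payload. The model ID below is a placeholder, not a documented FinetuneDB value:

```python
import json

def build_chat_request(model: str, prompt: str, max_tokens: int = 256) -> dict:
    """Assemble an OpenAI-style chat-completion payload for the inference API."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": max_tokens,
    }

# "my-custom-llama-3.1-8b" is a hypothetical fine-tuned model ID.
payload = build_chat_request("my-custom-llama-3.1-8b", "Summarize our Q3 report.")
print(json.dumps(payload, indent=2))
```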

Integrating Self-Hosted Models

For teams managing their own infrastructure, integrate self-hosted models with FinetuneDB using VLLM. This setup allows you to leverage our platform’s capabilities while maintaining control over your hosting environment.
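A minimal sketch of this setup, using vLLM's OpenAI-compatible server (the model name and port are examples, not FinetuneDB defaults):

```shell
# Install vLLM and serve a model behind an OpenAI-compatible HTTP API.
pip install vllm
vllm serve meta-llama/Llama-3.1-8B-Instruct --port 8000

# The server then accepts OpenAI-style chat-completion requests:
curl http://localhost:8000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"model": "meta-llama/Llama-3.1-8B-Instruct",
       "messages": [{"role": "user", "content": "Hello"}]}'
```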

Please contact us for more information regarding custom model deployment and integration.


Note: Open-source inference API pricing and OpenAI pricing are subject to change and may not always be up to date.