Deploy a model in production¶
Once you have trained your model, you can deploy it in production. This section provides several guides on how to deploy your model in different environments.
Start here to get an overview of the different deployment options.
Deploy your model in the platform using the serverless option, which runs in a shared serverless environment.
Deploy your model in the platform using a dedicated deployment and a load balancer.
Deploy your model in your cloud using the provided Docker image.
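For the Docker option, deployment typically reduces to running the image on a host in your cloud. A minimal sketch, assuming a hypothetical image name, port, and model path; substitute the actual values given in the corresponding guide:

```shell
# Run the model server from a Docker image in your own cloud.
# Image name, port, and volume path below are placeholders.
docker run -d \
  --name my-model-server \
  -p 8000:8000 \
  -v /path/to/model:/models \
  example.registry.io/model-server:latest
```

The `-d` flag runs the container in the background, `-p` exposes the server's port on the host, and `-v` mounts your model weights into the container.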
Deploy models from external marketplaces, such as the BioImage Model Zoo.
Deploy your own LLM from a selection of open-source models (DeepSeek, Qwen, Llama, etc.), using vLLM and Open-WebUI.
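As a rough illustration of the vLLM-based option, an open-source LLM can be served through vLLM's OpenAI-compatible API and then connected to a chat front end such as Open-WebUI. The model id below is an example; any Hugging Face model supported by vLLM can be used:

```shell
# Install vLLM and serve an open-source LLM (example model id)
# through its OpenAI-compatible HTTP server.
pip install vllm
vllm serve deepseek-ai/deepseek-llm-7b-chat --port 8000
# Open-WebUI can then be configured to use http://localhost:8000/v1
# as an OpenAI-compatible endpoint.
```

This is a sketch of the general pattern, not the platform's exact deployment procedure; consult the linked guide for the supported model list and configuration.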
Deploy your model in the platform using the serverless option, but configure the deployment manually. This is an advanced option.