- A high-level overview of the project.
- The main source of knowledge on how to use the project. Always refer here in case of doubt.
- The authentication management for accessing the AI4OS stack.
- Where users typically search for modules developed by the community and find the relevant pointers to use them. It allows authenticated users to deploy virtual machines on specific hardware (e.g. GPUs) to train a module.
- The service that lets you store your data remotely and access it from inside your deployment.
- Where the code of all the modules and services behind the project is stored.
- Where the Docker images of the modules are stored.
- The Continuous Integration and Continuous Delivery (Jenkins) instance that keeps everything up to date with the latest code changes.
- Scalable serverless inference of AI models.
- Check whether a specific AI4OS service might be down for some reason.
- Create new modules based on our project's template.
New to the project? How about a quick dive?
More in-depth documentation, with a detailed description of the architecture and components, is provided in the following sections.
- AI4OS architecture
- User roles and workflows
- AI4OS Modules
- AI4OS Modules Template
- DEEPaaS API
- AI4OS Dashboard
- AI4OS Storage
- AI4OS Inference
Use a model (basic user)
Train a model (intermediate user)
- Train a model locally
- Train a model remotely
- 1. Choose a module from the Marketplace
- 2. Upload your files to Nextcloud
- 3. Deploy with the Training Dashboard
- 4. Go to JupyterLab and mount your dataset
- 5. Open the DEEPaaS API and train the model
- 6. Test and export the newly trained model
- 7. Create a Docker repo for your new module
- 8. Share your new module in the Marketplace
- 9. [optional] Add your new module to the original Continuous Integration pipeline
- Use rclone
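As a quick orientation for the storage steps above, here is a minimal sketch of syncing files with rclone against a Nextcloud instance over WebDAV. The server URL, remote name (`nextcloud`), and paths are placeholders, not the project's actual values; check the rclone guide linked above for the exact configuration used in AI4OS.

```shell
# Hypothetical rclone remote for a Nextcloud server (~/.config/rclone/rclone.conf):
#
# [nextcloud]
# type = webdav
# url = https://<your-nextcloud-server>/remote.php/webdav/
# vendor = nextcloud
# user = <your-username>
# pass = <obscured-app-password>   # generate with: rclone obscure <password>

# List the contents of the remote
rclone ls nextcloud:

# Upload a local dataset to the remote storage
rclone copy /local/path/to/dataset nextcloud:datasets/my-dataset

# Download trained model weights back to your machine
rclone copy nextcloud:models/my-model /local/path/to/models
```

`rclone copy` only transfers files that are missing or changed on the destination, so re-running it after adding new files is cheap.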
Develop a model (advanced user)
Use a tool (intermediate user)
We have specific guides for each of the tools.