External links and resources
This is a compendium of all the production services that are currently deployed to serve AI4OS users.
- Homepage
A high-level overview of the project.
- Documentation
The main source of knowledge on how to use the project. Always refer here when in doubt.
- Dashboard (AI4EOSC, iMagine, AI4Life, Tutorials)
View the catalog of AI modules and make deployments.
- NextCloud
The service that lets you store your data remotely and access it from inside your deployments (a minimal usage sketch follows after this list).
- GitHub (software)
The source code of the software powering the platform.
- GitHub (modules)
The source code of all the modules available on the platform.
- DockerHub
Where the Docker images of the modules are stored.
- Harbor
The custom Docker image registry we deployed to overcome DockerHub limitations.
- CI/CD pipeline
The Continuous Integration / Continuous Delivery (CI/CD) Jenkins instance that keeps everything up to date with the latest code changes.
- Status of services
Check whether a specific AI4OS service is currently down.
- Module template
Create new modules based on our project’s template.
- MLflow server (AI4EOSC, iMagine)
Log your training parameters and models with our MLflow server (see the sketch after this list).
- Inference platform (OSCAR) (AI4EOSC, iMagine)
Scalable serverless inference of AI models.
- Inference pipelines platform (Flowfuse)
Compose custom AI inference pipelines.
- LLM Chatbot (beta!)
Chat with our LLM bot and interact live with our documentation.
- YouTube channel
Find video tutorials and more.
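
As a quick illustration of the NextCloud entry above, here is a minimal sketch of uploading a file over NextCloud's standard WebDAV interface. The host, username, app password and remote path are placeholders, not the platform's actual values; use the NextCloud instance and credentials provided to you.

```python
# Minimal sketch: upload a local file to NextCloud over WebDAV.
# Host, username, app password and paths below are placeholders.
import requests

NEXTCLOUD_URL = "https://nextcloud.example.org"   # placeholder host
USERNAME = "your-username"                        # placeholder
APP_PASSWORD = "your-app-password"                # placeholder (app password)

# NextCloud exposes each user's files under remote.php/dav/files/<username>/
remote_url = f"{NEXTCLOUD_URL}/remote.php/dav/files/{USERNAME}/datasets/data.csv"

with open("data.csv", "rb") as f:
    resp = requests.put(remote_url, data=f, auth=(USERNAME, APP_PASSWORD))

# 201 Created for a new file, 204 No Content when overwriting an existing one
resp.raise_for_status()
```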
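
For the MLflow entry above, the sketch below shows the general pattern for logging a training run to a remote MLflow tracking server. The tracking URI, experiment name, credentials and file names are placeholders; check the platform documentation for the actual server address and authentication mechanism.

```python
# Minimal sketch: log parameters, metrics and an artifact to a remote MLflow server.
# URI, credentials and experiment name are placeholders.
import os
import mlflow

# Basic-auth credentials are commonly passed via environment variables
# (assumed here; the platform docs describe the exact mechanism).
os.environ["MLFLOW_TRACKING_USERNAME"] = "your-username"   # placeholder
os.environ["MLFLOW_TRACKING_PASSWORD"] = "your-password"   # placeholder

mlflow.set_tracking_uri("https://mlflow.example.org")      # placeholder URL
mlflow.set_experiment("my-experiment")                     # placeholder name

with mlflow.start_run():
    mlflow.log_param("learning_rate", 0.001)
    mlflow.log_metric("val_accuracy", 0.93)
    mlflow.log_artifact("model_weights.h5")  # any local file holding your model
```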