Welcome to AI4OS documentation!

The AI4OS stack empowers scientists by lowering the barrier to adopting the latest AI models and tools. It covers the full ML cycle: from model creation to training, deployment, and monitoring in production. Following the FAIR principles for science, our software (both platform and models) is fully open-source and easily portable to any other platform (no vendor lock-in!).

A quick note on terminology

AI4OS is the name of the generic software stack that powers the deployments of different platforms (AI4EOSC, iMagine, etc.). So, for example, the AI4OS Dashboard is the component that can be deployed as the AI4EOSC Dashboard or the iMagine Dashboard. These platform-specific Dashboards can have minor customizations, but the underlying technology remains the same.

Currently supported platforms:

  • AI4EOSC: AI for the European Open Science Cloud

  • iMagine: Imaging data and services for aquatic science

  • AI4Life: AI models and methods for the life sciences

Useful links

  • A high-level overview of the project.

  • The main source of knowledge on how to use the project. Always refer here in case of doubt.

  • The authentication management for accessing the AI4OS stack.

  • Where users typically search for modules developed by the community and find the relevant pointers to use them. It also lets authenticated users deploy virtual machines on specific hardware (e.g. GPUs) to train a module.

  • The service that lets you store your data remotely and access it from inside your deployment. (old instance)

  • The code of the software powering the platform.

  • The code of all the modules available in the platform.

  • Where the Docker images of the modules are stored.

  • A custom Docker image registry we deployed to overcome DockerHub limitations.

  • The Jenkins instance for Continuous Integration and Continuous Delivery (CI/CD), keeping everything up to date with the latest code changes.

  • Check whether a specific AI4OS service is currently down.

  • Create new modules based on our project's template.

  • Log your training parameters and models with our MLflow server.

  • Scalable serverless inference of AI models.

  • Compose custom AI inference pipelines.

User documentation

If you are a user (current or potential), you should start here.

Component documentation

Here we share the documentation of components developed within the platform that have their own documentation pages:

Technical documentation

If you are looking for technical notes on various areas, please check the following section.

Indices and tables