Machine learning model serving
Deploy your pre-trained models using TorchServe, TensorFlow Serving, or MLflow. Each deployed model receives an API endpoint for programmatic access, which you can use to build applications.
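As a sketch of what programmatic access looks like, the snippet below builds a JSON payload and POSTs it to a model's REST endpoint. The payload shape `{"instances": [...]}` follows TensorFlow Serving's REST API convention; the endpoint URL is a hypothetical placeholder, and TorchServe or MLflow deployments expect slightly different paths and payloads.

```python
import json
import urllib.request


def build_payload(features):
    """Encode a batch of inputs in TensorFlow Serving's REST format."""
    return json.dumps({"instances": features}).encode("utf-8")


def predict(endpoint_url, features):
    """POST the payload to the model endpoint and return the parsed JSON response."""
    req = urllib.request.Request(
        endpoint_url,
        data=build_payload(features),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())


# Hypothetical endpoint URL for illustration only:
# result = predict("https://my-model.example.org/v1/models/my_model:predict",
#                  [[1.0, 2.0, 3.0]])
```

The same `predict` helper works for any endpoint that accepts JSON over HTTP; only the URL and payload shape change between serving frameworks.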
Host applications such as R Shiny or Plotly Dash dashboards and custom Flask apps to let others explore your data or results, or to provide tools built on top of your deployed machine learning models.
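A hosted Flask app can be as small as the sketch below: one page for visitors and one JSON route that could forward requests to a deployed model. The route names and port are assumptions for illustration, not requirements of the platform.

```python
from flask import Flask, jsonify, request

app = Flask(__name__)


@app.route("/")
def index():
    # Landing page shown to visitors of the hosted app.
    return "Hello from a hosted Flask app"


@app.route("/predict", methods=["POST"])
def predict():
    # Hypothetical route: in a real app, forward the input to a
    # deployed model endpoint or run inference locally.
    data = request.get_json()
    return jsonify({"echo": data})


# When run in a container, the app would typically listen on a fixed port, e.g.:
# app.run(host="0.0.0.0", port=8080)
```

Shiny and Dash apps follow the same pattern: a single entry point that the hosting platform starts and exposes on a public URL.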
Interactive development environments (IDEs)
Use a browser-based JupyterLab or RStudio instance to run analyses or to collaborate with your teammates. We also accept requests for use in teaching.
SciLifeLab Serve is available free of charge to all life science researchers affiliated with a Swedish university (no SciLifeLab affiliation is required). At this time, SciLifeLab Serve is running in beta: the main functionality is ready and should work as intended, but we cannot yet guarantee uninterrupted operation. When the service transitions out of beta, all users and created resources will continue to function. If you encounter an issue or find a bug, drop us a line at firstname.lastname@example.org.
At the moment, SciLifeLab Serve does not set limits on storage, CPU, or memory allocation for the services offered. No decisions about future resource limits have been made yet. When the service launches in production, there will likely be default limits, but we endeavour to cover the needs of most research groups and communities.
The team behind SciLifeLab Serve is happy to answer your questions and receive feedback; drop us a line at email@example.com.