Description
A container registry is a prerequisite for running applications in a cloud environment, and running machine learning workloads in such environments is another common scenario. These workloads, however, require dedicated infrastructure. We can combine these two facts to help users save resources: store machine learning models in OCI registries, much as some WebAssembly modules are already distributed. This spares users the separate model repository they would otherwise need for the applications they run.
Goals
Allow PyTorch users to save and load machine learning models in OCI registries.
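A minimal sketch of what this goal could look like in practice, assuming the ORAS CLI is installed and a PyTorch environment is available. The registry reference (`registry.example.com/models/resnet:v1`) and the helper names are hypothetical, not part of the project:

```python
import subprocess


def oras_push_command(reference: str, artifact: str) -> list:
    # Build the `oras push` invocation that uploads a local file
    # as an OCI artifact, e.g. reference = "registry.example.com/models/resnet:v1".
    return ["oras", "push", reference, artifact]


def oras_pull_command(reference: str, output_dir: str = ".") -> list:
    # Build the `oras pull` invocation that downloads the artifact
    # into `output_dir`.
    return ["oras", "pull", reference, "-o", output_dir]


def push_model(model, reference: str, path: str = "model.pt") -> None:
    # Serialize the model weights locally, then push the file to the registry.
    import torch  # assumed available in the PyTorch user's environment

    torch.save(model.state_dict(), path)
    subprocess.run(oras_push_command(reference, path), check=True)
```

Loading would be the mirror image: pull the artifact with `oras pull`, then restore the weights with `torch.load` and `model.load_state_dict`.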
Resources
This project is part of:
Hack Week 24
Similar Projects
Research how LLMs could help to Linux developers and/or users by anicka
Description
Large language models like Chat...
ghostwrAIter - a local AI assisted tool for helping with support cases by paolodepa
Description
This project is meant to figh...
AI for product management by a_jaeger
Description
Learn about AI and how it can...
Make more sense of openQA test results using AI by livdywan
Description
AI has the potential to help wi...
SUSE AI Meets the Game Board by moio
Use [tabletopgames.ai](https://tabletopgames.ai...