Description

A prerequisite for running applications in a cloud environment is the presence of a container registry. Another common scenario is users running machine learning workloads in such environments, and these workloads require dedicated infrastructure to run properly. We can leverage these two facts to help users save resources by storing their machine learning models in OCI registries, similar to how we already handle some WebAssembly modules. This spares users the dedicated model repository they would otherwise need for the applications they run.

Goals

Allow PyTorch users to save and load machine learning models in OCI registries.
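
As a very rough sketch of what this could look like for users, the example below serializes a small PyTorch model and pushes/pulls it as an OCI artifact using the ORAS CLI; the registry reference and media type are placeholders, and the final packaging the project settles on may well differ.

```python
# Minimal sketch: store and retrieve a PyTorch model as an OCI artifact via the
# ORAS CLI. Assumes `oras` is installed and you are logged in to the registry;
# the registry reference and media type below are illustrative placeholders.
import subprocess

import torch
import torch.nn as nn

REPO = "registry.example.com/ml-models/tiny-net:v1"   # placeholder reference
MEDIA_TYPE = "application/vnd.pytorch.model"          # placeholder media type

model = nn.Linear(4, 2)
torch.save(model.state_dict(), "tiny-net.pt")         # serialize the weights locally

# Push the serialized model as an OCI artifact.
subprocess.run(["oras", "push", REPO, f"tiny-net.pt:{MEDIA_TYPE}"], check=True)

# Later, or on another machine: pull the artifact and load the weights back.
subprocess.run(["oras", "pull", REPO, "-o", "download"], check=True)
restored = nn.Linear(4, 2)
restored.load_state_dict(torch.load("download/tiny-net.pt"))
```

A real implementation would presumably hide the registry interaction behind a small Python save/load API instead of shelling out to a CLI.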

Resources

Looking for hackers with the skills:

ai mlops pytorch oci cloud

This project is part of:

Hack Week 24

Activity

  • 2 months ago: horon liked this project.
  • 2 months ago: jguilhermevanz started this project.
  • 2 months ago: jguilhermevanz added keyword "ai" to this project.
  • 2 months ago: jguilhermevanz added keyword "mlops" to this project.
  • 2 months ago: jguilhermevanz added keyword "pytorch" to this project.
  • 2 months ago: jguilhermevanz added keyword "oci" to this project.
  • 2 months ago: jguilhermevanz added keyword "cloud" to this project.
  • 2 months ago: jguilhermevanz originated this project.


    Similar Projects

    Run local LLMs with Ollama and explore possible integrations with Uyuni by PSuarezHernandez

    Description

    Using Ollama you can easily run different LLM models on your local computer. This project is about exploring Ollama, testing different LLMs and trying to fine-tune them, as well as exploring potential ways of integrating it with Uyuni.
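
    For reference, running a prompt against a locally served model takes only a few lines; a minimal sketch using the `ollama` Python client (the model name is just an example) could look like this:

```python
# Minimal sketch: query a locally running Ollama server via the `ollama` Python
# client. Assumes `ollama serve` is running and the model has been pulled,
# e.g. `ollama pull llama3.1:8b` (the model name is only an example).
import ollama

response = ollama.chat(
    model="llama3.1:8b",
    messages=[{"role": "user", "content": "Summarize what Uyuni does in one sentence."}],
)
print(response["message"]["content"])
```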

    Goals

    • Explore Ollama
    • Test different models
    • Fine-tune models
    • Explore possible integrations with Uyuni

    Resources

    • https://ollama.com/
    • https://huggingface.co/
    • https://apeatling.com/articles/part-2-building-your-training-data-for-fine-tuning/


    ghostwrAIter - a local AI assisted tool for helping with support cases by paolodepa

    Description

    This project is meant to fight the loneliness of the support team members, providing them with an AI assistant (hopefully) capable of scraping supportconfigs in a RAG fashion and trying to answer specific questions.
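
    As a rough sketch of the RAG retrieval step (chunk a supportconfig, embed the chunks, pick the most similar ones for a question), assuming a local Ollama server with an embedding-capable model; the model name and the naive chunking are placeholders:

```python
# Minimal RAG-retrieval sketch: embed supportconfig chunks with Ollama and select
# the chunks closest to a question by cosine similarity. The embedding model and
# the naive chunking below are placeholders.
import math

import requests

OLLAMA = "http://localhost:11434"
EMBED_MODEL = "nomic-embed-text"   # example embedding model

def embed(text: str) -> list[float]:
    r = requests.post(f"{OLLAMA}/api/embeddings",
                      json={"model": EMBED_MODEL, "prompt": text}, timeout=120)
    r.raise_for_status()
    return r.json()["embedding"]

def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b)))

chunks = [c for c in open("supportconfig.txt").read().split("\n\n") if c.strip()]
index = [(chunk, embed(chunk)) for chunk in chunks]

question = "Which kernel version is installed?"
q_vec = embed(question)
top = sorted(index, key=lambda item: cosine(q_vec, item[1]), reverse=True)[:3]
context = "\n---\n".join(chunk for chunk, _ in top)
# `context` would then be prepended to the question in the prompt sent to the chat model.
```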

    Goals

    • Setup an Ollama backend, spinning up one (or more?) code-focused LLMs selected by license, performance and quality of results from among:
    • Setup a Web UI for it, choosing an easily extensible and customizable option from among:
    • Extend the solution so that it is able to:
      • Add ZIU/Concord shared folders to its RAG context
      • Add BZ cases, split into comments, to its RAG context
        • A plus would be to log in to ghostwrAIter itself using the IDP portal and use the same credentials to query BZ
      • Add specific packages, picking them from IBS repos
        • A plus would be to log in to ghostwrAIter itself using the IDP portal and use the same credentials to query IBS
        • A plus would be to deduce the packages of interest and the right channel and version to pick from the added BZ cases


    Learn how to integrate Elixir and Phoenix Liveview with LLMs by ninopaparo

    Description

    Learn how to integrate Elixir and Phoenix Liveview with LLMs by building an application that can provide answers to user queries based on a corpus of custom-trained data.

    Goals

    Develop an Elixir application via the Phoenix framework that:

    • Employs Retrieval Augmented Generation (RAG) techniques
    • Supports the integration and utilization of various Large Language Models (LLMs).
    • Is designed with extensibility and adaptability in mind to accommodate future enhancements and modifications.

    Resources

    • https://elixir-lang.org/
    • https://www.phoenixframework.org/
    • https://github.com/elixir-nx/bumblebee
    • https://ollama.com/


    Gen-AI chatbots and test-automation of generated responses by mdati

    Description

    Start experimenting with the generative SUSE-AI chatbot: ask questions on different areas of knowledge or science and analyze the quality of the LLM responses, both individually and comparatively, by checking the answers that different LLM models provide to the same query using proper quality metrics, tools, or methodologies.

    Try to define basic guidelines and requirements for quality test automation of AI-generated responses.

    A first investigation can be based on manual testing: its methodologies, findings, and data can then be used to organize valid automated testing.
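
    As a small illustration of the comparative part, the sketch below sends the same question to several Ollama-served models (the three listed under Resources) and collects the answers so that the evaluation criteria can be applied to them afterwards; the endpoint URL and the availability of the models are assumptions:

```python
# Minimal sketch: ask the same question to several models and collect the answers
# for later (manual or automated) evaluation. Assumes the models are served by an
# Ollama-compatible endpoint; the URL below is a placeholder.
import requests

ENDPOINT = "http://localhost:11434/api/generate"   # placeholder endpoint
MODELS = ["gemma:2b", "llama3.1:8b", "qwen2.5-coder:3b"]
QUESTION = "Explain the difference between nuclear fission and fusion."

answers = {}
for model in MODELS:
    r = requests.post(ENDPOINT,
                      json={"model": model, "prompt": QUESTION, "stream": False},
                      timeout=300)
    r.raise_for_status()
    answers[model] = r.json()["response"]

for model, answer in answers.items():
    print(f"== {model} ==\n{answer}\n")
# Each answer would then be scored against the defined criteria
# (correctness, completeness, reasoning, ...).
```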

    Goals

    • Identify criteria and measuring scales for assessment of a text content.
    • Define the quality of an answer/text based on the defined criteria.
    • Identify some knowledge sectors and a proper list of problems/questions per sector.
    • Manually run query sessions and apply the evaluation criteria to the answers.
    • Draft requirements for test automation of AI answers.

    Resources

    • Announcement of SUSE-AI for Hack Week in Slack
    • Openplatform and the 3 related LLM models: gemma:2b, llama3.1:8b, qwen2.5-coder:3b.

    Notes

    • Foundation models (FMs):
      are large deep learning neural networks, trained on massive datasets, that have changed the way data scientists approach machine learning (ML). Rather than develop artificial intelligence (AI) from scratch, data scientists use a foundation model as a starting point to develop ML models that power new applications more quickly and cost-effectively.

    • Large language models (LLMs):
      are a category of foundation models pre-trained on immense amounts of data. They acquire their abilities by learning statistical relationships from vast amounts of text during a self- and semi-supervised training process, which makes them capable of understanding and generating natural language and other types of content to perform a wide range of tasks.
      LLMs can be used for generative AI (artificial intelligence) to produce content based on input prompts in human language.

    Validating an AI-generated answer is not an easy task, whether it is done manually or in an automated way.
    An LLM answer text must contain a given level of information: correctness, completeness, a description of the reasoning, etc.
    We shall rely on applicable and measurable validation criteria to obtain an assessment within a limited amount of time and resources.


    Use local/private LLM for semantic knowledge search by digitaltomm

    Description

    Use a local LLM, based on SUSE AI (ollama, openwebui) to power geeko search (public instance: https://geeko.port0.org/).

    Goals

    Build a SUSE internal instance of https://geeko.port0.org/ that can operate on internal resources, crawling confluence.suse.com, gitlab.suse.de, etc.

    Resources

    Repo: https://github.com/digitaltom/semantic-knowledge-search

    Public instance: https://geeko.port0.org/

    Results

    Internal instance:

    I have an internal test instance running which has indexed a couple of internal wiki pages from the SCC team. It's using the ollama (llama3.1:8b) backend of suse-ai.openplatform.suse.com to create embedding vectors for indexed resources and to create a chat response. The semantic search for documents is done with a vector search inside of sqlite, using sqlite-vec.
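
    For reference, a stripped-down version of the vector-search step described above might look like the sketch below, assuming the sqlite-vec Python bindings; the table name, vector dimensionality, and dummy embeddings are placeholders (real embeddings would come from the ollama backend):

```python
# Minimal sketch of the sqlite-vec part: store embedding vectors in a vec0 virtual
# table and retrieve the nearest documents for a query vector. Assumes the
# sqlite-vec Python bindings; the tiny 4-dimensional vectors are dummy values,
# real embeddings from the Ollama backend are much larger.
import sqlite3

import sqlite_vec
from sqlite_vec import serialize_float32

db = sqlite3.connect(":memory:")
db.enable_load_extension(True)
sqlite_vec.load(db)                 # load the sqlite-vec extension
db.enable_load_extension(False)

db.execute("CREATE VIRTUAL TABLE wiki_pages USING vec0(embedding float[4])")
pages = {1: [0.1, 0.2, 0.3, 0.4], 2: [0.9, 0.8, 0.7, 0.6]}
for rowid, vec in pages.items():
    db.execute("INSERT INTO wiki_pages(rowid, embedding) VALUES (?, ?)",
               (rowid, serialize_float32(vec)))

query_vec = serialize_float32([0.1, 0.2, 0.3, 0.35])
rows = db.execute(
    "SELECT rowid, distance FROM wiki_pages WHERE embedding MATCH ? "
    "ORDER BY distance LIMIT 2",
    (query_vec,),
).fetchall()
print(rows)   # nearest pages first
```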



    Mortgage Plan Analyzer by RMestre

    https://github.com/rjpmestre/mortgage-plan-analyzer

    Project Description

    Many people face challenges when trying to renegotiate their mortgages with different banks. They receive offers from multiple lenders and struggle to compare them effectively. Each proposal may have slightly different terms and data presentation, making it hard to make informed decisions. Additionally, understanding the impact of various taxes and variables can be complex. The Mortgage Plan Analyzer project aims to address these issues.

    Project Overview:

    The Mortgage Plan Analyzer is a web-based tool built using PHP, Laravel, Livewire, and AdminLTE/bootstrap. It provides a user-friendly platform for individuals to input basic specifications about their mortgage, adjust taxes and variables, and obtain short-term projections for each proposal. Users can also compare multiple mortgage offers side by side, enabling them to make informed decisions about their mortgage renegotiation.
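
    For context, the core of such a projection is the standard annuity formula for the monthly installment; a deliberately simplified sketch (ignoring taxes, insurance, spreads, and rate changes, which the tool lets users adjust separately) is shown below.

```python
# Simplified sketch of the core projection: the standard annuity formula for a
# fixed-rate mortgage installment. Taxes, insurance and rate changes are ignored.
def monthly_installment(principal: float, annual_rate: float, years: int) -> float:
    r = annual_rate / 12              # monthly interest rate
    n = years * 12                    # number of monthly payments
    if r == 0:
        return principal / n
    return principal * r / (1 - (1 + r) ** -n)

# Example: 200,000 borrowed over 30 years at 3.5% is roughly 898 per month.
print(round(monthly_installment(200_000, 0.035, 30), 2))
```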

    Why Start This Project:

    I found myself in this position, and most of the tools I found around are either for marketing/selling purposes or not flexible enough. As I was starting to get lost in a jungle of spreadsheets, I thought I could just create a tool to help me and others who may be experiencing the same struggles, providing clarity and transparency in the decision-making process.

    Hackweek 25 ideas (still to refine :) )

    • Euribor Trends in Projections
      - Use historical Euribor data to model optimistic and pessimistic scenarios for variable-rate loans.
    • Use the annual summaries (installments, amortizations, etc.) and run some analysis to highlight key differences, like short-term savings vs. long-term costs.
    • The financial plan can be hard/boring to follow. Create a simple viewing mode that summarizes monthly values and their annual sums.

    Hackweek 24 update

    • Improved summaries graphs by adding:
      - Line graph;
      - Accumulated line graph;
      - Range setting to short/mid/long term;
      - Highlighting of the best simulation and value per year;
    • Improved the general behaviour of the forms:
      - Simulation name setting;
      - Cloning simulations;
      - Adjusted update timing on input changes;
    • Show/hide big tables;
    • Multi-language support (added English);
    • Added examples;
    • Adjustments to fonts and sizes;
    • Fixed loading screen;
    • Dependency adjustments;

    Hackweek 23 initial release

    • Developed a base site that:
      - Allows adding up to 3 simulations;
      - Creates financial plans;
      - Shows a simulation comparison graph for the first 4 years;
    • Created the GitHub project @ https://github.com/rjpmestre/mortgage-plan-analyzer
    • Launched a demo instance using Oracle Cloud Free Tier, currently @ http://138.3.251.182/

    Resources

    • Banco de Portugal: Main simulator all portuguese banks have to follow ( https://clientebancario.bportugal.pt/credito-habitacao )
    • Laravel: A PHP web application framework for building robust and scalable applications. ( https://laravel.com/ )
    • Livewire: A Laravel library for building dynamic interfaces without writing JavaScript. ( https://livewire.laravel.com/ )
    • AdminLTE: A responsive admin dashboard template for creating a visually appealing interface. ( https://adminlte.io/ )