Description

Using Ollama you can easily run different LLMs on your local computer. This project is about exploring Ollama, testing different LLMs, and trying to fine-tune them, as well as exploring potential ways to integrate it with Uyuni.
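As a minimal sketch of what "running LLMs locally" looks like in practice: once the Ollama server is running on its default port (11434) and a model has been pulled (e.g. `ollama pull llama3.1`), you can query it over its REST API. The model name and prompt below are just examples.

```python
# Sketch: querying a locally running Ollama server over its REST API.
# Assumes Ollama is serving on localhost:11434 and the model has been
# pulled beforehand with `ollama pull llama3.1`.
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"

def build_request(model: str, prompt: str) -> dict:
    """Build the JSON payload for Ollama's /api/generate endpoint."""
    return {"model": model, "prompt": prompt, "stream": False}

def generate(model: str, prompt: str) -> str:
    """Send a prompt to the local Ollama server and return its reply."""
    payload = json.dumps(build_request(model, prompt)).encode("utf-8")
    req = urllib.request.Request(
        OLLAMA_URL, data=payload, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

# Usage (requires a running Ollama server):
#   answer = generate("llama3.1", "In one sentence, what is Uyuni?")
```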

Goals

  • Explore Ollama
  • Test different models
  • Fine-tuning
  • Explore possible integration in Uyuni

Resources

  • https://ollama.com/
  • https://huggingface.co/
  • https://apeatling.com/articles/part-2-building-your-training-data-for-fine-tuning/

Looking for hackers with the skills:

uyuni llm ollama python ai

This project is part of:

Hack Week 24

Activity

  • 10 months ago: juliogonzalezgil liked this project.
  • 10 months ago: frantisek.simorda liked this project.
  • 11 months ago: j_renner liked this project.
  • 11 months ago: PSuarezHernandez added keyword "uyuni" to this project.
  • 11 months ago: PSuarezHernandez added keyword "llm" to this project.
  • 11 months ago: PSuarezHernandez added keyword "ollama" to this project.
  • 11 months ago: PSuarezHernandez added keyword "python" to this project.
  • 11 months ago: PSuarezHernandez added keyword "ai" to this project.
  • 11 months ago: PSuarezHernandez liked this project.
  • 11 months ago: PSuarezHernandez started this project.
  • 11 months ago: PSuarezHernandez originated this project.

  • Comments

    • PSuarezHernandez
      10 months ago by PSuarezHernandez

      Some conclusions after Hackweek 24:

      • ollama + open-webui is a nice combo for running LLMs locally (Local AI was also tried).
      • open-webui lets you add custom knowledge bases (collections) to feed models.
      • The Uyuni and Salt documentation can be used in these collections so the models learn from them.
      • Tailoring the documentation before feeding it to the models works better than using it as-is.
      • Tried different models: llama3.1, mistral, mistral-nemo, gemma2, phi3, ...
      • Getting promising results, particularly with mistral-nemo, but also getting model hallucinations - model parameters can be adjusted to reduce them.
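On the last point, the usual knob for curbing hallucinations is the sampling configuration. A sketch of an Ollama `/api/chat` payload with conservative options follows; the option names (`temperature`, `top_k`, `top_p`, `repeat_penalty`) are real Ollama options, but the values are illustrative starting points, not tuned results from this project.

```python
# Sketch: an Ollama /api/chat payload with conservative sampling options
# intended to reduce hallucinations. Values are illustrative, not tuned.
def build_chat_request(model: str, messages: list) -> dict:
    options = {
        "temperature": 0.2,    # less randomness -> fewer invented facts
        "top_k": 20,           # sample only from the 20 most likely tokens
        "top_p": 0.5,          # ...restricted to the smallest 50%-probability set
        "repeat_penalty": 1.1, # discourage the model from looping
    }
    return {"model": model, "messages": messages,
            "options": options, "stream": False}

# Example payload for a mistral-nemo chat turn:
request = build_chat_request(
    "mistral-nemo",
    [{"role": "user", "content": "How do I register a client in Uyuni?"}],
)
```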

      Takeaways

      • Small models run fairly well on CPU only.
      • Building an expert assistant for Uyuni, with an extensive knowledge base built from the documentation, might be something to keep exploring.
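Feeding documentation into such a knowledge base typically starts with splitting it into overlapping chunks before embedding, which is what open-webui-style collections do under the hood. The chunk size and overlap below are illustrative defaults, not the values open-webui actually uses.

```python
# Sketch: splitting documentation text into overlapping chunks for a
# knowledge-base collection. Size/overlap values are illustrative only.
def chunk_text(text: str, size: int = 500, overlap: int = 100) -> list:
    """Split text into fixed-size chunks that overlap to preserve context
    across chunk boundaries."""
    chunks = []
    start = 0
    while start < len(text):
        chunks.append(text[start:start + size])
        start += size - overlap  # step forward, keeping `overlap` chars shared
    return chunks
```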

      Next steps

      • Teach the model the Uyuni API, so it can translate user requests into actual Uyuni API calls.
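One way this next step could be wired up: have the model emit a structured "intent", then map that intent to a call against Uyuni's XML-RPC API. In the sketch below, `auth.login`, `auth.logout`, `system.listSystems`, and `channel.listAllChannels` are real Uyuni API methods; the intent table and the dispatch glue are hypothetical.

```python
# Sketch: dispatching a model-produced intent to Uyuni's XML-RPC API.
# The intent->method mapping is hypothetical glue; the API methods are real.
import xmlrpc.client

INTENT_TO_METHOD = {
    "list_systems": "system.listSystems",
    "list_channels": "channel.listAllChannels",
}

def dispatch(server_url: str, user: str, password: str, intent: str):
    """Log in to the Uyuni server and run the API method for `intent`."""
    if intent not in INTENT_TO_METHOD:
        raise ValueError("unknown intent: " + intent)
    client = xmlrpc.client.ServerProxy(server_url + "/rpc/api")
    key = client.auth.login(user, password)  # session key for later calls
    try:
        namespace, name = INTENT_TO_METHOD[intent].split(".")
        return getattr(getattr(client, namespace), name)(key)
    finally:
        client.auth.logout(key)

# Usage (requires a reachable Uyuni server):
#   systems = dispatch("https://uyuni.example.com", "admin", "secret",
#                      "list_systems")
```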

    • rudrakshkarpe
      about 1 month ago by rudrakshkarpe

      Hi @PSuarezHernandez ,

      will this project be part of Hackweek 2025?

    Similar Projects

    Move Uyuni Test Framework from Selenium to Playwright + AI by oscar-barrios

    Description

    This project aims to migrate the existing Uyuni Test Framework from Selenium to Playwright. The move will improve the stability, speed, and maintainability of our end-to-end tests by leveraging Playwright's modern features. We'll be rewriting the current Selenium code in Ruby to Playwright code in TypeScript, which includes updating the test framework runner, step definitions, and configurations. This is also necessary because we're moving from Cucumber Ruby to CucumberJS.

    If you're still curious about the AI in the title, it was just a way to grab your attention. Thanks for your understanding.


    Goals

    • Migrate Core tests including Onboarding of clients
    • Improve test reliability: measure and confirm a significant reduction in flakiness.
    • Implement a robust framework: establish a well-structured and reusable Playwright test framework using CucumberJS.

    Resources