Description

Using Ollama you can easily run different LLM models on your local computer. This project is about exploring Ollama, testing different LLMs, and trying to fine-tune them, as well as exploring potential ways of integrating them with Uyuni.
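As a minimal sketch of what "running LLMs locally" looks like in practice: Ollama serves a REST API on localhost (port 11434 by default), and a prompt is sent as a JSON payload to its /api/generate endpoint. The helper below only builds the payload; the actual network call needs a running Ollama server, so it is left commented out.

```python
import json

# Default endpoint of a locally running Ollama server.
OLLAMA_URL = "http://localhost:11434/api/generate"

def build_generate_request(model: str, prompt: str) -> dict:
    """Build a non-streaming request payload for Ollama's /api/generate."""
    return {"model": model, "prompt": prompt, "stream": False}

payload = build_generate_request("llama3.1", "What is Uyuni?")
print(json.dumps(payload))

# With an Ollama server running locally, the same payload could be sent like this:
# import urllib.request
# req = urllib.request.Request(OLLAMA_URL, json.dumps(payload).encode(),
#                              {"Content-Type": "application/json"})
# print(json.load(urllib.request.urlopen(req))["response"])
```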

Goals

  • Explore Ollama
  • Test different models
  • Fine-tune models
  • Explore possible integration in Uyuni

Resources

  • https://ollama.com/
  • https://huggingface.co/
  • https://apeatling.com/articles/part-2-building-your-training-data-for-fine-tuning/

Looking for hackers with the skills:

uyuni llm ollama python ai

This project is part of:

Hack Week 24

Activity

  • 5 months ago: juliogonzalezgil liked this project.
  • 5 months ago: frantisek.simorda liked this project.
  • 5 months ago: j_renner liked this project.
  • 5 months ago: PSuarezHernandez added keyword "uyuni" to this project.
  • 5 months ago: PSuarezHernandez added keyword "llm" to this project.
  • 5 months ago: PSuarezHernandez added keyword "ollama" to this project.
  • 5 months ago: PSuarezHernandez added keyword "python" to this project.
  • 5 months ago: PSuarezHernandez added keyword "ai" to this project.
  • 5 months ago: PSuarezHernandez liked this project.
  • 5 months ago: PSuarezHernandez started this project.
  • 5 months ago: PSuarezHernandez originated this project.

  • Comments

    • 5 months ago by PSuarezHernandez

      Some conclusions after Hackweek 24:

      • ollama + open-webui is a nice combo for running LLMs locally (also tried Local AI).
      • open-webui allows you to add custom knowledge bases (collections) to feed models.
      • The Uyuni and Salt documentation can be used in these collections so the models learn from them.
      • Feeding models tailored documentation works better.
      • Tried different models: llama3.1, mistral, mistral-nemo, gemma2, phi3,..
      • Getting promising results, particularly with mistral-nemo, but also getting model hallucinations - model parameters can be adjusted to reduce them.
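The parameter adjustment mentioned above can be sketched as follows: Ollama's /api/generate accepts an `options` object, and lowering sampling options such as `temperature`, `top_p`, and `top_k` makes answers more deterministic, which tends to reduce hallucinations. The specific values here are illustrative, not the ones used during Hack Week.

```python
def conservative_options(temperature: float = 0.2,
                         top_p: float = 0.5,
                         top_k: int = 20) -> dict:
    """Low-temperature sampling options for more grounded answers."""
    return {"temperature": temperature, "top_p": top_p, "top_k": top_k}

def build_request(model: str, prompt: str) -> dict:
    """Build an Ollama /api/generate payload with tightened sampling options."""
    return {
        "model": model,
        "prompt": prompt,
        "stream": False,
        "options": conservative_options(),
    }

request = build_request("mistral-nemo", "How do I register a client in Uyuni?")
print(request["options"])
```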

      Takeaways

      • Small models run fairly well on CPU only.
      • Building an expert assistant for Uyuni, with extensive knowledge based on the documentation, might be something to keep exploring.

      Next steps

      • Make the model understand the Uyuni API, so it can translate user requests into actual calls to the Uyuni API.
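One hypothetical shape for this next step: have the LLM extract a structured intent from the user's request, then map that intent to a method of Uyuni's documented XML-RPC API (served at http://&lt;server&gt;/rpc/api). The intent names and the dispatch table below are assumptions for illustration; the API method names are real, but the live call needs a Uyuni server and session, so it is left commented out.

```python
import xmlrpc.client

# Hypothetical mapping from LLM-extracted intents to Uyuni XML-RPC methods.
INTENT_TO_METHOD = {
    "list_systems": "system.listSystems",
    "list_patches": "system.getRelevantErrata",
}

def dispatch(intent: str, server_url: str, session_key: str) -> str:
    """Resolve an LLM-produced intent to a Uyuni API method (not executed here)."""
    method = INTENT_TO_METHOD.get(intent)
    if method is None:
        raise ValueError(f"unknown intent: {intent}")
    # Against a live Uyuni server the call would look like:
    # client = xmlrpc.client.ServerProxy(f"{server_url}/rpc/api")
    # return getattr(client, method)(session_key)
    return method

print(dispatch("list_systems", "http://uyuni.example.com", "dummy-session"))
```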

    Similar Projects

    This project is one of its kind!