Run local LLMs with Ollama and explore possible integrations with Uyuni
a project by PSuarezHernandez
Updated 5 months ago.
4 hackers ♥️.
1 follower.
Description
Using Ollama, you can easily run different LLM models on your local computer. This project is about exploring Ollama, testing different LLMs, and trying to fine-tune them, as well as exploring potential ways of integrating them with Uyuni.
Goals
- Explore Ollama
- Test different models
- Fine-tune models
- Explore possible integrations with Uyuni
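The first two goals can be tried out quickly against Ollama's local REST API. Below is a minimal sketch in Python, assuming an Ollama server running on its default port (11434) and that the model has already been pulled with `ollama pull`; the model name and prompt are just illustrative.

```python
import json
import urllib.request

# Default endpoint of a locally running Ollama server.
OLLAMA_URL = "http://localhost:11434/api/generate"

def build_payload(model: str, prompt: str) -> dict:
    """Build the JSON body for Ollama's /api/generate endpoint."""
    # stream=False requests a single JSON response instead of chunks.
    return {"model": model, "prompt": prompt, "stream": False}

def generate(model: str, prompt: str) -> str:
    """Send a prompt to the local Ollama server and return its reply."""
    data = json.dumps(build_payload(model, prompt)).encode("utf-8")
    req = urllib.request.Request(
        OLLAMA_URL, data=data, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

if __name__ == "__main__":
    # Requires `ollama pull mistral` beforehand.
    print(generate("mistral", "Summarize what Uyuni is in one sentence."))
```

Swapping the `model` argument is all it takes to compare different LLMs on the same prompt.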
Resources
- https://ollama.com/
- https://huggingface.co/
- https://apeatling.com/articles/part-2-building-your-training-data-for-fine-tuning/
This project is part of:
Hack Week 24
Activity
Comments
5 months ago by PSuarezHernandez
Some conclusions after Hackweek 24:
- ollama + open-webui is a nice combo for running LLMs locally (also tried Local AI).
- open-webui allows you to add custom knowledge bases (collections) to feed models.
- The Uyuni and Salt documentation can be used in these collections so the models learn from them.
- Feeding models a tailored version of the documentation works better.
- Tried different models: llama3.1, mistral, mistral-nemo, gemma2, phi3,...
- Getting promising results, particularly with mistral-nemo, but also some model hallucinations - model parameters can be adjusted to reduce them.
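The parameter adjustments mentioned above can be captured in an Ollama Modelfile, which derives a new local model with fixed settings. A minimal sketch (the derived model name, parameter values, and system prompt are illustrative):

```
# Illustrative Modelfile: derive a more conservative variant of mistral-nemo.
FROM mistral-nemo
# Lower temperature makes answers less creative and less prone to drift.
PARAMETER temperature 0.3
PARAMETER top_p 0.9
SYSTEM "You are an assistant that answers questions about Uyuni using only the provided documentation."
```

The variant is built with `ollama create uyuni-assistant -f Modelfile` and then used like any other model, e.g. `ollama run uyuni-assistant`.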
Takeaways
- Small models run fairly well on CPU only.
- Building an expert assistant for Uyuni, with an extensive knowledge base built from the documentation, might be something to keep exploring.
Next steps
- Make the model understand the Uyuni API, so it can translate user requests into actual Uyuni API calls.
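One way to sketch that next step: ask the model to emit a small JSON description of the intended call, validate it against a whitelist, and only then execute it against Uyuni's XML-RPC endpoint. The whitelist, server URL, and JSON shape below are assumptions for illustration, not a settled design.

```python
import json
import xmlrpc.client

# Hypothetical whitelist of Uyuni API methods the assistant may invoke.
ALLOWED_METHODS = {"system.listSystems", "system.listActiveSystems"}

def parse_model_reply(reply: str) -> tuple[str, list]:
    """Parse a model reply such as
    '{"method": "system.listSystems", "args": []}'
    into a (method, args) pair, rejecting unknown methods."""
    call = json.loads(reply)
    method = call["method"]
    args = call.get("args", [])
    if method not in ALLOWED_METHODS:
        raise ValueError(f"method not allowed: {method}")
    return method, args

def run_call(server_url: str, user: str, password: str, reply: str):
    """Execute the parsed call against a Uyuni server's XML-RPC API
    (typically exposed at https://<server>/rpc/api)."""
    method, args = parse_model_reply(reply)
    client = xmlrpc.client.ServerProxy(server_url)
    key = client.auth.login(user, password)
    try:
        # Resolve dotted method names like "system.listSystems".
        func = client
        for part in method.split("."):
            func = getattr(func, part)
        return func(key, *args)
    finally:
        client.auth.logout(key)
```

Keeping the validation step separate from execution means the model never talks to the server directly, which limits the damage a hallucinated call can do.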
Similar Projects
This project is one of its kind!