Project Description
At SCC, we have a rotating task called COOTW (Commanding Officer of the Week). The COOTW responds to customer requests from Jira and the Slack help channels, monitors production systems and takes care of small chores. We usually have documentation that helps the COOTW answer questions and quickly find fixes, but most of it is spread across GitHub, Trello and the SUSE Support documentation. The aim of this project is to explore the magic of LLMs and build a conversational bot on top of that documentation.
Goal for this Hackweek
- Evaluate data gathering and cleanup, and create sensible embeddings for the LLMs to use (see the retrieval sketch below).
- Explore the performance of open-source LLaMA-based models (LLaMA, Vicuna, Mistral) in generating coherent and reasonably fast responses (see the generation sketch below).
- Look into creating evaluation data for later use (see the example format below).
- [Optional][requires beefy GPUs] Train Low-Rank Adaptations (LoRA) and compare results (see the LoRA sketch below).
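A minimal sketch of the embedding and retrieval step, assuming the cleaned-up documentation is already available as plain-text snippets and that sentence-transformers with the all-MiniLM-L6-v2 model is an acceptable starting point (any other embedding model or vector store could be swapped in):

```python
from sentence_transformers import SentenceTransformer
import numpy as np

# Hypothetical documentation snippets gathered from GitHub, Trello and the SUSE Support docs.
docs = [
    "How to restart the SCC sync service when registrations pile up ...",
    "Checklist for triaging a stuck subscription activation ...",
]

# Small, widely used embedding model; normalized vectors make cosine similarity a plain dot product.
model = SentenceTransformer("all-MiniLM-L6-v2")
doc_vectors = model.encode(docs, normalize_embeddings=True)

def retrieve(query: str, top_k: int = 3) -> list[str]:
    """Return the documentation snippets most similar to the query."""
    query_vector = model.encode([query], normalize_embeddings=True)[0]
    scores = doc_vectors @ query_vector
    best = np.argsort(scores)[::-1][:top_k]
    return [docs[i] for i in best]

print(retrieve("A customer's registration is stuck, what do I check?"))
```

The retrieved snippets would then go into the prompt of whichever model the next goal settles on.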
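To compare models on coherence and speed, a rough sketch of loading a local instruct model with Hugging Face transformers and timing a single answer is below. The model name, prompt format and generation parameters are assumptions; LLaMA, Vicuna and Mistral each expect their own chat template, and tools such as llama.cpp or Ollama would work just as well here.

```python
import time
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumed checkpoint; any locally available LLaMA/Vicuna/Mistral model can be substituted.
model_name = "mistralai/Mistral-7B-Instruct-v0.2"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, device_map="auto")  # device_map needs accelerate

# Mistral-style instruct prompt; other models use different templates.
prompt = "[INST] A customer cannot register their system against SCC. What should the COOTW check first? [/INST]"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

start = time.time()
output = model.generate(**inputs, max_new_tokens=256)
elapsed = time.time() - start

new_tokens = output[0][inputs["input_ids"].shape[1]:]
print(f"{len(new_tokens) / elapsed:.1f} tokens/s")
print(tokenizer.decode(new_tokens, skip_special_tokens=True))
```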
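For the evaluation data, one simple format would be a JSONL file of question/reference-answer pairs collected from past COOTW tickets, usable both for manual review and for automated scoring later. The fields and the ticket reference below are purely illustrative:

```python
import json

# Hypothetical held-out examples; real entries would come from closed Jira tickets and Slack threads.
eval_samples = [
    {
        "question": "A customer's registration key is rejected, where do I look first?",
        "reference_answer": "Check the subscription status in SCC and the relevant proxy logs ...",
        "source": "jira:SCC-0000",  # placeholder ticket id, not a real reference
    },
]

with open("cootw_eval.jsonl", "w") as handle:
    for sample in eval_samples:
        handle.write(json.dumps(sample) + "\n")
```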
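If the optional LoRA experiments happen, the adapters could be attached with Hugging Face peft roughly as sketched below; the rank, alpha and target modules are common starting values rather than project decisions, and the actual fine-tuning loop (e.g. a Trainer run over the COOTW Q&A data) is omitted:

```python
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

# Assumed base checkpoint, as in the generation sketch above.
base = AutoModelForCausalLM.from_pretrained("mistralai/Mistral-7B-Instruct-v0.2", device_map="auto")

# Low-rank adapters on the attention projections; only these small matrices are trained.
config = LoraConfig(
    r=8,
    lora_alpha=16,
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],
    task_type="CAUSAL_LM",
)

model = get_peft_model(base, config)
model.print_trainable_parameters()  # prints the tiny fraction of weights LoRA leaves trainable
```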
Resources
This project is part of:
- Hack Week 23
- Hack Week 24
Similar Projects
- Save pytorch models in OCI registries by jguilhermevanz: A prerequisite for running ap...
- SUSE AI Meets the Game Board by moio: Use [tabletopgames.ai](https://tabletopgames.ai...
- AI for product management by a_jaeger: Learn about AI and how it can...
- Learn how to integrate Elixir and Phoenix Liveview with LLMs by ninopaparo: Learn how to integrate Elixir...
- ghostwrAIter - a local AI assisted tool for helping with support cases by paolodepa: This project is meant to figh...
- Run local LLMs with Ollama and explore possible integrations with Uyuni by PSuarezHernandez: Using Ollama you can easily run...