Description
This project explores how to create a custom Large Language Model (LLM) trained specifically on Rancher-related data. We will explore LLM distillation techniques to produce a smaller, more efficient model suitable for deployment in resource-constrained environments. We will also explore how Retrieval-Augmented Generation (RAG) can enhance model performance by combining the strengths of fine-tuning with dynamic knowledge retrieval.
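The distillation idea mentioned above usually means training a small "student" model to match the temperature-softened output distribution of a larger "teacher". A minimal sketch of that objective (function names are illustrative, not from any specific library):

```python
import numpy as np

def softmax(logits, temperature=1.0):
    """Temperature-scaled softmax; higher T gives softer distributions."""
    z = logits / temperature
    z = z - z.max()  # for numerical stability
    e = np.exp(z)
    return e / e.sum()

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    """KL divergence between softened teacher and student distributions,
    scaled by T^2 as in the classic knowledge-distillation formulation."""
    p = softmax(teacher_logits, temperature)  # soft teacher targets
    q = softmax(student_logits, temperature)  # student predictions
    return float(temperature ** 2 * np.sum(p * (np.log(p) - np.log(q))))
```

In practice this term is combined with the ordinary cross-entropy loss on hard labels; when the student's logits match the teacher's, the loss goes to zero.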
Goals
The goal is to understand the full process of customizing an LLM so that it deeply understands Rancher concepts.
Resources
https://humanloop.com/blog/model-distillation
https://huggingface.co/blog/Kseniase/kd
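The RAG component described above can be prototyped without any embedding model: simple lexical (TF-IDF) retrieval over a Rancher document set already demonstrates the retrieve-then-prompt pattern. A self-contained sketch (all names are illustrative):

```python
import math
from collections import Counter

def tokenize(text):
    return [t.lower().strip(".,:;!?") for t in text.split()]

def tf_idf_vectors(docs):
    """Build a sparse TF-IDF vector (dict of term -> weight) per document."""
    tokenized = [tokenize(d) for d in docs]
    df = Counter()
    for toks in tokenized:
        df.update(set(toks))  # document frequency per term
    n = len(docs)
    vecs = []
    for toks in tokenized:
        tf = Counter(toks)
        vecs.append({t: tf[t] * math.log((1 + n) / (1 + df[t])) for t in tf})
    return vecs

def cosine(a, b):
    dot = sum(a[t] * b.get(t, 0.0) for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, docs, k=1):
    """Return the k documents most similar to the query."""
    vecs = tf_idf_vectors(docs + [query])
    qv = vecs[-1]
    scored = sorted(enumerate(vecs[:-1]),
                    key=lambda iv: cosine(qv, iv[1]), reverse=True)
    return [docs[i] for i, _ in scored[:k]]
```

The retrieved passages would then be prepended to the prompt sent to the (fine-tuned or distilled) model; swapping this lexical scorer for an embedding-based one is a natural later step.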
This project is part of:
Hack Week 25