Project Description
Fine-tuning of a LLaMA 2 model
Goal for this Hackweek
- Learn how to fine-tune an LLM (Large Language Model); a sketch of a typical fine-tuning setup follows the Resources section below
- Stretch goal: maybe apply this to internal documentation, so we can ask questions about internal SUSE topics that we would otherwise have to search for in Confluence
Resources
https://www.datacamp.com/tutorial/fine-tuning-llama-2
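The tutorial linked above walks through supervised fine-tuning of LLaMA 2. For orientation, here is a minimal sketch of that kind of setup, assuming the Hugging Face transformers/peft/trl stack of roughly that era (trl ~0.7; the SFTTrainer arguments have shifted in later releases). The model name, dataset, and hyperparameters are placeholders, not values used in this project.

```python
# Minimal LoRA fine-tuning sketch for LLaMA 2 (placeholder model, dataset and hyperparameters).
import torch
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer, TrainingArguments
from peft import LoraConfig
from trl import SFTTrainer

base_model = "meta-llama/Llama-2-7b-hf"        # gated repo; access must be requested
train_data = load_dataset("mlabonne/guanaco-llama2-1k", split="train")  # small example dataset with a "text" column

tokenizer = AutoTokenizer.from_pretrained(base_model)
tokenizer.pad_token = tokenizer.eos_token      # LLaMA ships without a pad token

model = AutoModelForCausalLM.from_pretrained(
    base_model,
    torch_dtype=torch.float16,
    device_map="auto",                         # requires accelerate
)

# LoRA trains small adapter matrices instead of all 7B base weights.
peft_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    bias="none",
    task_type="CAUSAL_LM",
)

training_args = TrainingArguments(
    output_dir="./llama2-hackweek",
    per_device_train_batch_size=4,
    gradient_accumulation_steps=4,
    num_train_epochs=1,
    learning_rate=2e-4,
    logging_steps=10,
)

trainer = SFTTrainer(
    model=model,
    train_dataset=train_data,
    peft_config=peft_config,
    dataset_text_field="text",                 # column that holds the training prompts
    max_seq_length=512,
    tokenizer=tokenizer,
    args=training_args,
)
trainer.train()
trainer.model.save_pretrained("./llama2-hackweek-adapter")  # saves only the LoRA adapter
```

Running this needs access to the LLaMA 2 weights on Hugging Face and a GPU that fits a 7B model in fp16; tutorials like the one above usually add 4-bit quantization (QLoRA) so it fits on smaller GPUs.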
Comments
- about 1 year ago by rtorrero
A slightly different approach was followed in the end: instead of fine-tuning, I ended up using Retrieval Augmented Generation (RAG).
See: https://gpt-index.readthedocs.io/en/latest/getting_started/concepts.html
The result can be seen in: https://github.com/rtorrero/LlamaDocQuery
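For reference, here is a minimal sketch of the RAG pattern described in the concepts page above, using LlamaIndex (formerly gpt-index). It follows the library's generic quickstart and is not necessarily how LlamaDocQuery is structured; it assumes a pre-0.10 llama-index release (matching the linked docs' import paths) and whatever LLM/embedding backend is configured (the defaults call OpenAI, so OPENAI_API_KEY would need to be set unless a local model is plugged in). The docs folder and the question are placeholders.

```python
# Minimal RAG sketch with LlamaIndex (pre-0.10 import paths); path and query are placeholders.
from llama_index import SimpleDirectoryReader, VectorStoreIndex

# 1. Ingest: load the internal documentation from a local folder.
documents = SimpleDirectoryReader("./internal-docs").load_data()

# 2. Index: chunk the documents, embed the chunks, and store them in an in-memory vector index.
index = VectorStoreIndex.from_documents(documents)

# 3. Query: retrieve the most relevant chunks and have the LLM answer using them as context.
query_engine = index.as_query_engine()
response = query_engine.query("How do I request access to the staging environment?")
print(response)
```

Compared with fine-tuning, RAG leaves the model weights untouched: the documentation lives in an external index and only the retrieved chunks are passed to the LLM at query time, so updating the knowledge base means re-indexing rather than re-training.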