Project Description

Fine-tuning a LLaMA 2 model

Goal for this Hackweek

  • Learn how to fine-tune an LLM (Large Language Model)
  • Stretch goal: maybe apply this to internal documentation, so we can ask questions about internal SUSE material that we would otherwise have to search for in Confluence

Resources

https://www.datacamp.com/tutorial/fine-tuning-llama-2
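The tutorial above fine-tunes LLaMA 2 with LoRA (low-rank adapters) rather than updating all weights. The core idea can be sketched with plain NumPy; the matrix sizes below are illustrative assumptions, not taken from the tutorial:

```python
import numpy as np

# LoRA sketch: instead of training the full weight matrix W (d_out x d_in),
# learn a low-rank update B @ A with rank r << min(d_out, d_in).
d_out, d_in, r = 4096, 4096, 8  # illustrative sizes, roughly LLaMA-scale
rng = np.random.default_rng(0)
W = rng.standard_normal((d_out, d_in)).astype(np.float32)  # frozen base weights
A = rng.standard_normal((r, d_in)).astype(np.float32) * 0.01
B = np.zeros((d_out, r), dtype=np.float32)  # B starts at zero, so W' == W at step 0

W_adapted = W + B @ A  # the adapted layer used at inference time
assert np.allclose(W_adapted, W)

full_params = d_out * d_in          # parameters a full fine-tune would touch
lora_params = r * (d_out + d_in)    # parameters LoRA actually trains
print(f"trainable: {lora_params} of {full_params} "
      f"({100 * lora_params / full_params:.2f}%)")
```

This is why LoRA fine-tuning fits on a single consumer GPU: per layer, only a fraction of a percent of the parameters receive gradients.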


This project is part of:

Hack Week 23

Activity

  • about 1 year ago: rtorrero liked this project.
  • about 1 year ago: rtorrero started this project.
  • about 1 year ago: rtorrero originated this project.

  • Comments

    • rtorrero
      about 1 year ago by rtorrero

      A slightly different approach was followed in the end: instead of fine-tuning, I ended up using Retrieval Augmented Generation (RAG).

      See: https://gpt-index.readthedocs.io/en/latest/getting_started/concepts.html

      The result can be seen in: https://github.com/rtorrero/LlamaDocQuery
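      The retrieval step behind a RAG pipeline like LlamaDocQuery can be sketched in toy form. In practice llama-index handles embeddings and the vector store; the bag-of-words "embedding" and example documents below are stand-ins for illustration only:

```python
from collections import Counter
import math

# Toy corpus standing in for indexed internal documentation.
docs = [
    "LLaMA 2 can be fine-tuned with LoRA adapters.",
    "Retrieval Augmented Generation fetches relevant documents at query time.",
    "Confluence stores internal documentation pages.",
]

def embed(text):
    # Bag-of-words counts as a stand-in for a real embedding model.
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, k=1):
    # Rank documents by similarity to the query and keep the top k.
    q = embed(query)
    return sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)[:k]

# Retrieved chunks are then pasted into the LLM prompt as context.
context = retrieve("how does retrieval augmented generation work?")[0]
prompt = f"Answer using this context:\n{context}\n\nQuestion: ..."
print(context)
```

      The key contrast with fine-tuning: the model's weights stay untouched, and new documentation is picked up simply by re-indexing.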
