Project Description
Generate personalized avatar artwork by fine-tuning Stable Diffusion on personal pictures
Goal for this Hackweek
Get a new fancy and unique avatar!
Resources
- https://huggingface.co/docs/diffusers/using-diffusers/sdxl
- https://huggingface.co/docs/diffusers/training/dreambooth
- https://github.com/huggingface/diffusers/blob/main/examples/dreambooth/README_sdxl.md
- https://civitai.com/models/133005/juggernaut-xl?modelVersionId=198530
This project is part of:
Hack Week 23
Activity
Comments
about 1 year ago by STorresi
These avatars were generated after a bespoke LoRA training run using DreamBooth on top of the JuggernautXL model, which in turn is based on SDXL 1.0.
As you can see, hands are still tricky (apparently a known issue with diffusion models), but I didn't try inpainting or img2img fine-tuning, which are supposed to be the go-to ways to fix small issues like that. I must say the overall experience was quite painful due to the hardware requirements of SDXL and the number of memory leaks in PyTorch. Even a high-end consumer-grade GPU like an NVIDIA 4080 with 16 GB of VRAM often wasn't enough and ran out of memory.
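For anyone wanting to reproduce the generation step, here is a minimal inference sketch with diffusers, assuming a LoRA produced by the DreamBooth SDXL training script linked above; the checkpoint ID, output directory, and the "sks person" instance token are placeholders, not the exact setup used here:

```python
# Minimal sketch: load an SDXL checkpoint, attach a DreamBooth-trained LoRA,
# and generate an avatar. Model ID, paths and token are assumptions.
import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",  # a JuggernautXL checkpoint also works
    torch_dtype=torch.float16,
).to("cuda")
pipe.load_lora_weights("./lora-avatar")  # hypothetical output dir of the training run

image = pipe(
    "portrait of sks person, digital painting, highly detailed",
    num_inference_steps=30,
).images[0]
image.save("avatar.png")
```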
Similar Projects
Learn how to integrate Elixir and Phoenix LiveView with LLMs by ninopaparo
Description
Learn how to integrate Elixir and Phoenix LiveView with LLMs by building an application that can provide answers to user queries based on a corpus of custom-trained data.
Goals
Develop an Elixir application via the Phoenix framework that:
- Employs Retrieval Augmented Generation (RAG) techniques (the general flow is sketched after this list)
- Supports the integration and utilization of various Large Language Models (LLMs).
- Is designed with extensibility and adaptability in mind to accommodate future enhancements and modifications.
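RAG, as named in the first goal, follows a simple retrieve-then-generate shape. The project itself targets Elixir (e.g. via Bumblebee), but as a language-agnostic illustration, here is a minimal sketch in Python, where `embed` and `llm` are hypothetical callables standing in for the embedding model and the chat model:

```python
# Minimal RAG sketch; embed() and llm() are hypothetical stand-ins for the
# embedding model and the LLM the application would actually use.
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def answer(query, documents, embed, llm):
    q_vec = embed(query)
    # Retrieve the documents most similar to the query.
    ranked = sorted(documents, key=lambda d: cosine(embed(d), q_vec), reverse=True)
    context = "\n".join(ranked[:3])
    # Generate an answer grounded in the retrieved context.
    prompt = f"Answer using only this context:\n{context}\n\nQuestion: {query}"
    return llm(prompt)
```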
Resources
- https://elixir-lang.org/
- https://www.phoenixframework.org/
- https://github.com/elixir-nx/bumblebee
- https://ollama.com/
Research how LLMs could help Linux developers and/or users by anicka
Description
Large language models like ChatGPT have demonstrated remarkable capabilities across a variety of applications. However, their potential for enhancing the Linux development and user ecosystem remains largely unexplored. This project seeks to bridge that gap by researching practical applications of LLMs to improve workflows in areas such as backporting, packaging, log analysis, system migration, and more. By identifying patterns that LLMs can leverage, we aim to uncover new efficiencies and automation strategies that can benefit developers, maintainers, and end users alike.
Goals
- Evaluate Existing LLM Capabilities: Research and document the current state of LLM usage in open-source and Linux development projects, noting successes and limitations.
- Prototype Tools and Scripts: Develop proof-of-concept scripts or tools that leverage LLMs to perform specific tasks like automated log analysis, assisting with backporting patches, or generating packaging metadata (a minimal sketch follows this list).
- Assess Performance and Reliability: Test the tools' effectiveness on real-world Linux data and analyze their accuracy, speed, and reliability.
- Identify Best Use Cases: Pinpoint which tasks are most suitable for LLM support, distinguishing between high-impact and impractical applications.
- Document Findings and Recommendations: Summarize results with clear documentation and suggest next steps for potential integration or further development.
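As a hedged illustration of the second goal, a proof-of-concept log analyzer could be as small as a prompt sent to a locally hosted model. The sketch below assumes an Ollama server on its default port with a pulled llama3 model; the prompt wording and log path are placeholders:

```python
# Hypothetical proof of concept: ask a local Ollama model for a root-cause
# guess on a log excerpt. Assumes `ollama serve` is running with llama3 pulled.
import json
import urllib.request

def analyze_log(log_text: str, model: str = "llama3") -> str:
    prompt = (
        "You are a Linux engineer. Identify the most likely root cause in this "
        "log excerpt and suggest one next debugging step:\n\n" + log_text
    )
    req = urllib.request.Request(
        "http://localhost:11434/api/generate",
        data=json.dumps({"model": model, "prompt": prompt, "stream": False}).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

if __name__ == "__main__":
    # Placeholder path; feed the tail of whatever log is under investigation.
    print(analyze_log(open("/var/log/messages").read()[-4000:]))
```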
Resources
- Local LLM Implementations: Access to locally hosted LLMs such as LLaMA, GPT-J, or similar open-source models that can be run and fine-tuned on local hardware.
- Computing Resources: Workstations or servers capable of running LLMs locally, equipped with sufficient GPU power for training and inference.
- Sample Data: Logs, source code, patches, and packaging data from openSUSE or SUSE repositories for model training and testing.
- Public LLMs for Benchmarking: Access to APIs from platforms like OpenAI or Hugging Face for comparative testing and performance assessment.
- Existing NLP Tools: Libraries such as spaCy, Hugging Face Transformers, and PyTorch for building and interacting with local LLMs.
- Technical Documentation: Tutorials and resources focused on setting up and optimizing local LLMs for tasks relevant to Linux development.
- Collaboration: Engagement with community experts and teams experienced in AI and Linux for feedback and joint exploration.
Make more sense of openQA test results using AI by livdywan
Description
AI has the potential to help with something many of us spend a lot of time on: making sense of openQA logs when a job fails.
User Story
Allison Average has a puzzled look on their face while staring at log files that seem to make little sense. Is this a known issue, something completely new or maybe related to infrastructure changes?
Goals
- Leverage a chat interface to help Allison
- Create a model from scratch based on data from openQA
- Proof of concept for automated analysis of openQA test results
Bonus
- Use AI to suggest solutions to merge conflicts
- This would need a merge conflict editor that can suggest how to resolve the conflict
- Use image recognition for needles
Timeline
Day 1
- Conversing with open-webui to teach me how to create a model based on openQA test results
- Asking for example code using TensorFlow in Python (a toy version of such a sketch follows this list)
- Discussing log files to explore what to analyze
- Drafting a new project called Testimony (based on Implementing a containerized Python action) - the project name was also suggested by the assistant
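For context, a toy version of the kind of TensorFlow example asked for on Day 1 might look like the sketch below: a tiny text classifier over openQA log lines. The two training lines and their labels are made-up placeholders, not real openQA data:

```python
# Toy sketch only: classify openQA log lines as failure-related or benign.
# The training data and labels are hypothetical placeholders.
import tensorflow as tf

train_lines = tf.constant([
    "test died: command 'zypper -n in vim' failed",
    "backend done: qemu exited normally",
])
train_labels = tf.constant([1, 0])  # 1 = failure-related, 0 = benign

vectorize = tf.keras.layers.TextVectorization(max_tokens=10_000, output_sequence_length=64)
vectorize.adapt(train_lines)

model = tf.keras.Sequential([
    vectorize,
    tf.keras.layers.Embedding(10_000, 32),
    tf.keras.layers.GlobalAveragePooling1D(),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(train_lines, train_labels, epochs=3)
```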
Day 2
- Using NotebookLM (Gemini) to produce conversational versions of blog posts
- Researching the possibility of creating a project logo with AI
- Asking open-webui and people with prior experience, as well as conducting a web search, for advice
Highlights
- I briefly compared models to see if they would make me more productive. Between llama, gemma and mistral there was no significant difference in the results for my case.
- Convincing the chat interface to produce code specific to my use case required very explicit instructions.
- Asking for advice on how to better use open-webui itself was frustratingly unfruitful, for both trivial and more advanced questions.
- Documentation on the source materials used by LLMs, and on tools for checking this, seems virtually non-existent - specifically whether a logo can be generated based on materials under particular licenses.
Outcomes
- Chat-interface-supported development provides good starting points, and open-webui, being open source, is more flexible than Gemini, although it currently lacks some fancy features such as grounding and generated podcasts.
- Allison still has to be very experienced with openQA to use a chat interface for test review. Publicly available system prompts would make that easier, though.
ghostwrAIter - a local AI-assisted tool for helping with support cases by paolodepa
Description
This project is meant to fight the loneliness of support team members by providing them with an AI assistant (hopefully) capable of ingesting supportconfigs in a RAG fashion and trying to answer specific questions.
Goals
- Set up an Ollama backend, spinning up one (or more??) code-focused LLMs, selected by license, performance and quality of results from among:
- deepseek-coder-v2
- dolphin-mistral
- starcoder2
- (...others??)
- Set up a Web UI for it, choosing an easily extensible and customizable option from among:
- Extend the solution in order to be able to:
- Add ZIU/Concord shared folders to its RAG context (the ingestion step is sketched after this list)
- Add BZ cases, split into comments, to its RAG context
- A plus would be to login using the IDP portal to ghostwrAIter itself and use the same credentials to query BZ
- Add specific packages picking them from IBS repos
- A plus would be to login using the IDP portal to ghostwrAIter itself and use the same credentials to query IBS
- A plus would be to infer the packages of interest, and the right channel and version to pick, from the added BZ cases
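The RAG ingestion step mentioned above could start out as simply as chunking a supportconfig dump and embedding each chunk through the Ollama backend. A minimal sketch, assuming a local Ollama server with an embedding model pulled (model name and file path are placeholders):

```python
# Hypothetical ingestion sketch: chunk a supportconfig text dump and embed
# each chunk via Ollama's embeddings endpoint, keeping vectors in memory.
import json
import urllib.request

def embed(text: str, model: str = "nomic-embed-text") -> list[float]:
    req = urllib.request.Request(
        "http://localhost:11434/api/embeddings",
        data=json.dumps({"model": model, "prompt": text}).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["embedding"]

# Naive chunking on blank lines; a real tool would chunk per section/file.
chunks = open("supportconfig.txt").read().split("\n\n")
index = [(chunk, embed(chunk)) for chunk in chunks if chunk.strip()]
```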
Use local/private LLM for semantic knowledge search by digitaltomm
Description
Use a local LLM based on SUSE AI (ollama, openwebui) to power geeko search (public instance: https://geeko.port0.org/).
Goals
Build a SUSE internal instance of https://geeko.port0.org/ that can operate on internal resources, crawling confluence.suse.com, gitlab.suse.de, etc.
Resources
Repo: https://github.com/digitaltom/semantic-knowledge-search
Public instance: https://geeko.port0.org/
Results
Internal instance:
I have an internal test instance running which has indexed a couple of internal wiki pages from the SCC team. It's using the ollama (llama3.1:8b) backend of suse-ai.openplatform.suse.com to create embedding vectors for indexed resources and to create chat responses. The semantic search for documents is done with a vector search inside SQLite, using sqlite-vec.
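For reference, the sqlite-vec pattern described above looks roughly like the sketch below; the embedding dimension and vectors are toy placeholders, and real vectors would come from the ollama embedding backend:

```python
# Minimal sqlite-vec sketch: store vectors in a vec0 virtual table and run a
# KNN query. Dimension (4) and the vectors themselves are placeholders.
import sqlite3
import sqlite_vec
from sqlite_vec import serialize_float32

db = sqlite3.connect(":memory:")
db.enable_load_extension(True)
sqlite_vec.load(db)  # load the sqlite-vec extension
db.enable_load_extension(False)

db.execute("CREATE VIRTUAL TABLE docs USING vec0(embedding float[4])")
db.execute(
    "INSERT INTO docs(rowid, embedding) VALUES (?, ?)",
    (1, serialize_float32([0.1, 0.2, 0.3, 0.4])),
)
rows = db.execute(
    "SELECT rowid, distance FROM docs WHERE embedding MATCH ? ORDER BY distance LIMIT 3",
    (serialize_float32([0.1, 0.2, 0.3, 0.4]),),
).fetchall()
print(rows)  # nearest rowids with their distances
```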