Goals
- get used to some of these trendy buzzword tools, since they are used by a broad audience
- read Bugzilla bug descriptions and try to find out whether the initial description (first comment) carries any deeper information about the bug
Bugzilla
The script 'py-bug.py' reads the public bugs of bugzilla.opensuse.org one by one and writes the following fields to a JSON file:
- first comment (bug description)
- summary
- number of comments
- creation time
- time of last comment

Unfortunately I could not retrieve the area/type of the bug, i.e. something like 'kernel', 'yast', ...
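For illustration, here is a minimal sketch of the kind of retrieval 'py-bug.py' performs, using the standard Bugzilla REST API with plain requests; the bug ID range, output file name and error handling are assumptions for this example, not taken from the actual script.

```python
# Hypothetical sketch of fetching public bugs from bugzilla.opensuse.org via
# the Bugzilla REST API; not the actual 'py-bug.py'.
import json
import requests

BASE = "https://bugzilla.opensuse.org/rest"

def fetch_bug(bug_id):
    """Fetch summary, creation time and all comments for one public bug."""
    bug = requests.get(f"{BASE}/bug/{bug_id}", timeout=30).json()["bugs"][0]
    data = requests.get(f"{BASE}/bug/{bug_id}/comment", timeout=30).json()
    comments = data["bugs"][str(bug_id)]["comments"]
    return {
        "id": bug_id,
        "summary": bug["summary"],
        "creation_time": bug["creation_time"],
        "description": comments[0]["text"] if comments else "",
        "comment_count": len(comments),
        "last_comment_time": comments[-1]["creation_time"] if comments else None,
    }

if __name__ == "__main__":
    # Walk a small, arbitrary ID range for demonstration; private or missing
    # bugs raise an error and are simply skipped here.
    records = []
    for bug_id in range(1200000, 1200020):
        try:
            records.append(fetch_bug(bug_id))
        except (KeyError, requests.RequestException):
            continue
    with open("bugs.json", "w") as fh:
        json.dump(records, fh, indent=2)
```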
TensorFlow
The script 'json-reader.py' reads in the JSON file of bugs and tries to learn whether the initial bug description can be linked to the 'duration' (time of last comment minus creation time) or to the number of comments. For this, the neural net can be modified via command-line parameters.
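A minimal sketch of the kind of model such a script could build, assuming the JSON layout shown above: the first comment is TF-IDF-vectorized and a small dense network regresses the duration. The flag names, field names and layer setup are illustrative assumptions, not the actual 'json-reader.py'.

```python
# Illustrative sketch: regress bug "duration" from the first comment with a
# small Keras network whose size is set on the command line.
import argparse
import json

import numpy as np
import tensorflow as tf

parser = argparse.ArgumentParser()
parser.add_argument("--layers", type=int, default=2, help="number of hidden layers")
parser.add_argument("--units", type=int, default=64, help="width of each hidden layer")
parser.add_argument("--epochs", type=int, default=10)
args = parser.parse_args()

with open("bugs.json") as fh:
    bugs = [b for b in json.load(fh) if b.get("last_comment_time")]

def to_dt(ts):
    # Bugzilla timestamps look like "2019-05-21T14:31:05Z"; drop the zone marker.
    return np.datetime64(ts.replace("Z", ""))

texts = [b["description"] for b in bugs]
# Target: days between bug creation and the last comment.
durations = np.array(
    [(to_dt(b["last_comment_time"]) - to_dt(b["creation_time"])) / np.timedelta64(1, "D")
     for b in bugs],
    dtype="float32",
)

# Turn the free-text descriptions into fixed-size TF-IDF vectors.
vectorizer = tf.keras.layers.TextVectorization(max_tokens=20000, output_mode="tf_idf")
vectorizer.adapt(texts)
features = vectorizer(tf.constant(texts)).numpy()

# Hidden-layer count and width come from the command line, mirroring the idea
# of tuning the network via parameters.
model = tf.keras.Sequential()
for _ in range(args.layers):
    model.add(tf.keras.layers.Dense(args.units, activation="relu"))
model.add(tf.keras.layers.Dense(1))  # single regression output

model.compile(optimizer="adam", loss="mse", metrics=["mae"])
model.fit(features, durations, epochs=args.epochs, validation_split=0.2)
```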
Lessons learned
- GPU-accelerated Docker containers are not easy to install; I had to use the pip package instead
- my GPU (1050Ti) is not much faster than my CPU (Xeon E3-1231v3)
- I could not train the model to extract any useful information, so no automatic bug resolution
GitHub Repo
https://github.com/mslacken/ml-bugs
This project is part of Hack Week 18.
Similar Projects
Make more sense of openQA test results using AI by livdywan
Description
AI has the potential to help with something many of us spend a lot of time on: making sense of openQA logs when a job fails.
User Story
Allison Average has a puzzled look on their face while staring at log files that seem to make little sense. Is this a known issue, something completely new or maybe related to infrastructure changes?
Goals
- Leverage a chat interface to help Allison
- Create a model from scratch based on data from openQA
- Proof of concept for automated analysis of openQA test results (see the sketch after this list)
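As a hedged illustration of the proof-of-concept goal, the sketch below sends the tail of a failed job's autoinst-log.txt to a locally running Ollama instance (which open-webui can sit in front of) and asks for a first-pass analysis; the endpoint, model name, prompt and log path are placeholders, not anything taken from the project.

```python
# Proof-of-concept sketch: ask a locally hosted model to triage an openQA log
# excerpt via Ollama's REST API.
import requests

OLLAMA_URL = "http://localhost:11434/api/generate"  # default Ollama endpoint
MODEL = "llama3"  # placeholder; any locally pulled model works

SYSTEM_PROMPT = (
    "You are an openQA test reviewer. Given an excerpt of autoinst-log.txt, "
    "say whether the failure looks like a product bug, a test issue, or an "
    "infrastructure problem, and point to the relevant lines."
)

def analyze_log(log_excerpt: str) -> str:
    response = requests.post(
        OLLAMA_URL,
        json={
            "model": MODEL,
            "system": SYSTEM_PROMPT,
            "prompt": log_excerpt,
            "stream": False,
        },
        timeout=300,
    )
    response.raise_for_status()
    return response.json()["response"]

if __name__ == "__main__":
    with open("autoinst-log.txt") as fh:
        # Only the tail of the log usually matters for the failure reason.
        print(analyze_log(fh.read()[-8000:]))
```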
Bonus
- Use AI to suggest solutions to merge conflicts
- This would need a merge conflict editor that can suggest how to resolve the conflict
- Use image recognition for needles
Resources
Timeline
Day 1
- Conversing with open-webui to teach me how to create a model based on openQA test results
- Asking for example code using TensorFlow in Python
- Discussing log files to explore what to analyze
- Drafting a new project called Testimony (based on Implementing a containerized Python action) - the project name was also suggested by the assistant
Day 2
- Using NotebookLM (Gemini) to produce conversational versions of blog posts
- Researching the possibility of creating a project logo with AI
- Asking open-webui and people with prior experience for advice, and conducting a web search
Highlights
- I briefly compared models to see if they would make me more productive. Between llama, gemma and mistral there was no striking difference in the results for my case.
- Convincing the chat interface to produce code specific to my use case required very explicit instructions.
- Asking for advice on how to use open-webui itself better was frustratingly unfruitful, for both trivial and more advanced questions.
- Documentation on the source materials used by LLMs, and on tools for checking this, seems virtually non-existent - specifically whether a logo can be generated based on material under particular licenses
Outcomes
- Chat-interface-supported development provides good starting points, and open-webui, being open source, is more flexible than Gemini, although it currently lacks some fancy features such as grounding and generated podcasts.
- Allison still has to be very experienced with openQA to use a chat interface for test review. Publicly available system prompts would make that easier, though.