Project Description
Generative AI is everywhere around us, and we want to research its possibilities and how it can help SUSE, its employees, and its customers. The initial idea is to build a solution based on Amazon Bedrock that integrates our asset management tools and lets us query the data and get answers in human-like text.
Goal for this Hackweek
Populate all available data from SUSE asset management tools (integrating with Jira Insight, Racktables, CloudAccountMetadata, Cloudquery, ...) into a foundation model (e.g. Amazon Titan). Then make the foundation model able to answer simple queries such as "How many VMs are running in PRG2?" or "Who are the owners of the EC2 instances of the t2 family?"
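To make the goal concrete, here is a minimal sketch of the query flow, assuming boto3 access to the Bedrock runtime in us-west-2; the inlined asset context is a hypothetical stand-in for data exported from the tools above:

```python
# Minimal sketch: ask a Bedrock foundation model (Amazon Titan Text) a
# question grounded in asset data. The asset context below is a hypothetical
# placeholder for records exported from Jira Insight, Racktables, etc.
import json
import boto3

bedrock = boto3.client("bedrock-runtime", region_name="us-west-2")

asset_context = "PRG2 inventory: 42 running VMs; t2 EC2 instances owned by team-infra."  # hypothetical

response = bedrock.invoke_model(
    modelId="amazon.titan-text-express-v1",
    body=json.dumps({
        "inputText": f"{asset_context}\n\nQuestion: How many VMs are running in PRG2?",
        "textGenerationConfig": {"maxTokenCount": 256, "temperature": 0.2},
    }),
)
print(json.loads(response["body"].read())["results"][0]["outputText"])
```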
This is only one of our ideas for GenAI. Most probably we will also try to cover other scenarios. If you are interested, or you have any other idea for how to utilize foundation models, let us know.
Resources
- https://aws.amazon.com/bedrock/
- https://github.com/aws-samples/amazon-bedrock-workshop
- An AWS account, ITPE Gen IA Dev (047178302800), was created specifically for this Hackweek. It is accessible via Okta; whoever is interested, please contact me (or raise a JiraSD ticket to be added to CLZ: ITPE Gen IA Dev) and use region us-west-2 (don't mind the typo, heh).
- We have booked an AWS engineer, an expert on Bedrock, for 2023-11-06 (1-5pm CET, meeting link); anyone interested can join.
Keywords
AI, GenAI, GenerativeAI, AWS, Amazon Bedrock, Amazon Titan, Asset Management
Looking for hackers with the skills:
ai genai generativeai aws amazontitan assetmanagement bedrock
This project is part of:
Hack Week 23
Activity
Comments
about 1 year ago by mpiala
And the recording of the workshop: Gen AI with AWS-20231106_130308-Meeting Recording.mp4
about 1 year ago by vadim
@mpiala now that we all have some hands-on experience with Bedrock, I suggest that you create a few workgroup sessions for tomorrow / Thursday and invite contributors. I'd split the work into three workstreams:
- Create infrastructure / pipelines that deploy the project in a reproducible way
- Create a crawler that parses the source data and populates a vector database (probably a Lambda triggered by CloudWatch)
- Create a backend that queries the vector DB, runs inference, and integrates with Slack
For the vector DB we can use something off the shelf, like pinecone.io; later we can move it to Athena or something else.
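As a sketch of how the crawler output and the backend could meet, assuming Titan embeddings on Bedrock and the Pinecone v3 Python client (the index name "assets" and metadata field "text" are hypothetical):

```python
# Sketch of the retrieval-augmented backend: embed the question, fetch the
# closest asset records from Pinecone, then let a Titan text model answer
# grounded in that context. Slack integration is omitted here.
import json
import boto3
from pinecone import Pinecone

bedrock = boto3.client("bedrock-runtime", region_name="us-west-2")
index = Pinecone(api_key="...").Index("assets")  # hypothetical index

def answer(question: str) -> str:
    # 1. Embed the question with Titan Embeddings.
    emb = json.loads(bedrock.invoke_model(
        modelId="amazon.titan-embed-text-v1",
        body=json.dumps({"inputText": question}),
    )["body"].read())["embedding"]
    # 2. Retrieve the most similar asset records.
    matches = index.query(vector=emb, top_k=5, include_metadata=True).matches
    context = "\n".join(m.metadata["text"] for m in matches)
    # 3. Generate an answer grounded in the retrieved context.
    out = json.loads(bedrock.invoke_model(
        modelId="amazon.titan-text-express-v1",
        body=json.dumps({"inputText": f"{context}\n\nQuestion: {question}"}),
    )["body"].read())
    return out["results"][0]["outputText"]
```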
Similar Projects
ghostwrAIter - a local AI assisted tool for helping with support cases by paolodepa
Description
This project is meant to fight the loneliness of support team members by providing them with an AI assistant (hopefully) capable of scraping supportconfigs in a RAG fashion and trying to answer specific questions.
Goals
- Set up an Ollama backend, spinning up one (or more??) code-focused LLMs, selected by license, performance, and quality of results, from among the following (see the sketch after this list):
- deepseek-coder-v2
- dolphin-mistral
- starcoder2
- (...others??)
- Set up a Web UI for it, choosing an easily extensible and customizable option between:
- Extend the solution in order to be able to:
- Add ZIU/Concord shared folders to its RAG context
- Add BZ cases, split into comments, to its RAG context
- A plus would be to log in to ghostwrAIter itself using the IDP portal and use the same credentials to query BZ
- Add specific packages picking them from IBS repos
- A plus would be to log in to ghostwrAIter itself using the IDP portal and use the same credentials to query IBS
- A plus would be to infer the packages of interest, and the right channel and version to pick, from the added BZ cases
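A minimal sketch of the first goal, querying a locally served code-focused model through the Ollama Python client; the supportconfig excerpt is a hypothetical placeholder, and the model must already be pulled (`ollama pull deepseek-coder-v2`):

```python
# Sketch: ask a local code-focused LLM about a supportconfig snippet via the
# Ollama Python client. The excerpt is a hypothetical placeholder.
import ollama

excerpt = "kernel: BUG: soft lockup - CPU#3 stuck for 23s!"  # hypothetical snippet

reply = ollama.chat(
    model="deepseek-coder-v2",
    messages=[
        {"role": "system", "content": "You answer questions about supportconfig data."},
        {"role": "user", "content": f"Supportconfig excerpt:\n{excerpt}\n\nWhat could cause this?"},
    ],
)
print(reply["message"]["content"])
```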
Make more sense of openQA test results using AI by livdywan
Description
AI has the potential to help with something many of us spend a lot of time on: making sense of openQA logs when a job fails.
User Story
Allison Average has a puzzled look on their face while staring at log files that seem to make little sense. Is this a known issue, something completely new or maybe related to infrastructure changes?
Goals
- Leverage a chat interface to help Allison
- Create a model from scratch based on data from openQA
- Proof of concept for automated analysis of openQA test results
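A minimal sketch of the proof-of-concept goal, pulling recent failed jobs from the public openQA API and handing a log excerpt to a local model; the choice of a model served via Ollama is an assumption:

```python
# Sketch: fetch recent failed openQA jobs and ask a local model for a first
# guess at the failure cause. Endpoints follow the public openQA API.
import requests
import ollama

jobs = requests.get(
    "https://openqa.opensuse.org/api/v1/jobs",
    params={"result": "failed", "limit": 3},
).json()["jobs"]

for job in jobs:
    log = requests.get(
        f"https://openqa.opensuse.org/tests/{job['id']}/file/autoinst-log.txt"
    ).text[-4000:]  # the tail usually contains the failure
    reply = ollama.chat(
        model="mistral",
        messages=[{"role": "user", "content": f"Why did this openQA job fail?\n{log}"}],
    )
    print(job["id"], reply["message"]["content"])
```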
Bonus
- Use AI to suggest solutions to merge conflicts
- This would need a merge conflict editor that can suggest how to resolve the conflict (a sketch follows this list)
- Use image recognition for needles
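A rough sketch of the merge-conflict idea, again assuming a local model via Ollama; the file path is hypothetical and the conflict-marker parsing is deliberately simplified:

```python
# Sketch: extract git conflict hunks from a file and ask a local model for a
# suggested resolution.
import re
import ollama

conflicted = open("src/example.py").read()  # hypothetical conflicted file
hunks = re.findall(r"<<<<<<<.*?=======.*?>>>>>>>[^\n]*", conflicted, re.S)

for hunk in hunks:
    reply = ollama.chat(
        model="mistral",
        messages=[{"role": "user",
                   "content": f"Suggest a resolution for this git merge conflict:\n{hunk}"}],
    )
    print(reply["message"]["content"])
```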
Resources
Timeline
Day 1
- Conversing with open-webui to teach me how to create a model based on openQA test results
- Asking for example code using TensorFlow in Python
- Discussing log files to explore what to analyze
- Drafting a new project called Testimony (based on Implementing a containerized Python action) - the project name was also suggested by the assistant
Day 2
- Using NotebookLM (Gemini) to produce conversational versions of blog posts
- Researching the possibility of creating a project logo with AI
- Asking open-webui and people with prior experience for advice, and conducting a web search
Highlights
- I briefly compared models to see if they would make me more productive. Between llama, gemma, and mistral there was no notable difference in the results for my case.
- Convincing the chat interface to produce code specific to my use case required very explicit instructions.
- Asking for advice on how to better use open-webui itself was frustratingly unfruitful, for both trivial and more advanced questions.
- Documentation on the source materials used by LLMs, and tools to check this, seems virtually non-existent - specifically whether a logo can be generated based on particular licenses
Outcomes
- Chat-interface-supported development provides good starting points, and open-webui, being open source, is more flexible than Gemini, although it currently lacks some fancy features such as grounding and generated podcasts.
- Allison still has to be very experienced with openQA to use a chat interface for test review. Publicly available system prompts would make that easier, though.
Save pytorch models in OCI registries by jguilhermevanz
Description
A prerequisite for running applications in a cloud environment is the presence of a container registry. Another common scenario is users performing machine learning workloads in such environments. However, these workloads require dedicated infrastructure to run properly. We can leverage these two facts to help users save resources by storing their machine learning models in OCI registries, similar to how we handle some WebAssembly modules. This approach spares users the infrastructure typically required for a dedicated machine learning model repository alongside the applications they need to run.
Goals
Allow PyTorch users to save and load machine learning models in OCI registries.
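A minimal sketch of what this could look like with the oras-py client; the registry reference is hypothetical, and the eventual project may wrap this more tightly around torch.save/torch.load:

```python
# Sketch: store and retrieve a PyTorch model as an OCI artifact with oras-py.
# The registry URL and tag are hypothetical.
import torch
import oras.client

model = torch.nn.Linear(4, 2)  # stand-in model
torch.save(model.state_dict(), "model.pt")

client = oras.client.OrasClient()
client.push(files=["model.pt"], target="registry.example.com/ml/model:v1")

# Later, elsewhere: pull the artifact and restore the weights.
paths = client.pull(target="registry.example.com/ml/model:v1")
model.load_state_dict(torch.load(paths[0]))
```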
Resources
Research how LLMs could help to Linux developers and/or users by anicka
Description
Large language models like ChatGPT have demonstrated remarkable capabilities across a variety of applications. However, their potential for enhancing the Linux development and user ecosystem remains largely unexplored. This project seeks to bridge that gap by researching practical applications of LLMs to improve workflows in areas such as backporting, packaging, log analysis, system migration, and more. By identifying patterns that LLMs can leverage, we aim to uncover new efficiencies and automation strategies that can benefit developers, maintainers, and end users alike.
Goals
- Evaluate Existing LLM Capabilities: Research and document the current state of LLM usage in open-source and Linux development projects, noting successes and limitations.
- Prototype Tools and Scripts: Develop proof-of-concept scripts or tools that leverage LLMs to perform specific tasks like automated log analysis, assisting with backporting patches, or generating packaging metadata (see the sketch after this list).
- Assess Performance and Reliability: Test the tools' effectiveness on real-world Linux data and analyze their accuracy, speed, and reliability.
- Identify Best Use Cases: Pinpoint which tasks are most suitable for LLM support, distinguishing between high-impact and impractical applications.
- Document Findings and Recommendations: Summarize results with clear documentation and suggest next steps for potential integration or further development.
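As a starting point for the prototyping goal, here is a minimal log-analysis sketch built on Hugging Face Transformers (named in the Resources below); the model choice and the journalctl source are assumptions:

```python
# Sketch: summarize recent error-level journal entries with an off-the-shelf
# summarization model. Model choice and log source are assumptions.
import subprocess
from transformers import pipeline

logs = subprocess.run(
    ["journalctl", "-b", "-p", "err", "--no-pager", "-n", "50"],
    capture_output=True, text=True,
).stdout

summarizer = pipeline("summarization", model="facebook/bart-large-cnn")
text = logs[:2000] or "No error-level messages in this boot."
print(summarizer(text, max_length=120, min_length=10)[0]["summary_text"])
```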
Resources
- Local LLM Implementations: Access to locally hosted LLMs such as LLaMA, GPT-J, or similar open-source models that can be run and fine-tuned on local hardware.
- Computing Resources: Workstations or servers capable of running LLMs locally, equipped with sufficient GPU power for training and inference.
- Sample Data: Logs, source code, patches, and packaging data from openSUSE or SUSE repositories for model training and testing.
- Public LLMs for Benchmarking: Access to APIs from platforms like OpenAI or Hugging Face for comparative testing and performance assessment.
- Existing NLP Tools: Libraries such as spaCy, Hugging Face Transformers, and PyTorch for building and interacting with local LLMs.
- Technical Documentation: Tutorials and resources focused on setting up and optimizing local LLMs for tasks relevant to Linux development.
- Collaboration: Engagement with community experts and teams experienced in AI and Linux for feedback and joint exploration.
AI for product management by a_jaeger
Description
Learn about AI and how it can help me.
What are the jobs that a PM does where AI can help - and how?
Goals
- Investigate how AI can help with different tasks
- Check out different AI tools, which one is best for which job
- Summarize learning
Resources
- Reading some blog posts by PMs that looked into it
- Popular and less popular AI tools
SUSE AI Meets the Game Board by moio
Use tabletopgames.ai’s open source TAG and PyTAG frameworks to apply Statistical Forward Planning and Deep Reinforcement Learning to two board games of our own design. On an all-green, all-open source, all-AWS stack!
AI + Board Games
Board games have long been fertile ground for AI innovation, pushing the boundaries of capabilities such as strategy, adaptability, and real-time decision-making - from Deep Blue's chess mastery to AlphaZero’s domination of Go. Games aren’t just fun: they’re complex, dynamic problems that often mirror real-world challenges, making them interesting from an engineering perspective.
As avid board gamers, aspiring board game designers, and engineers with careers in open source infrastructure, we’re excited to dive into the latest AI techniques first-hand.
Our goal is to develop an all-open-source, all-green AWS-based stack powered by some serious hardware to drive our board game experiments forward!
Project Goals
Set Up the Stack:
- Install and configure the TAG and PyTAG frameworks on SUSE Linux Enterprise Base Container Images.
- Integrate with the SUSE AI stack for GPU-accelerated training on AWS.
- Validate a sample GPU-accelerated PyTAG workload on SUSE AI.
- Ensure the setup is entirely repeatable with Terraform and configuration scripts, documenting results along the way.
Design and Implement AI Agents:
- Develop AI agents for the two board games, incorporating Statistical Forward Planning and Deep Reinforcement Learning techniques.
- Fine-tune model parameters to optimize game-playing performance.
- Document the advantages and limitations of each technique.
Test, Analyze, and Refine:
- Conduct AI vs. AI and AI vs. human matches to evaluate agent strategies and performance.
- Record insights, document learning outcomes, and refine models based on real-world gameplay.
Technical Stack
- Frameworks: TAG and PyTAG for AI agent development
- Platform: SUSE AI
- Tools: AWS for high-performance GPU acceleration
Why This Project Matters
This project not only deepens our understanding of AI techniques by doing but also showcases the power and flexibility of SUSE’s open-source infrastructure for supporting high-level AI projects. By building on an all-open-source stack, we aim to create a pathway for other developers and AI enthusiasts to explore, experiment, and deploy their own innovative projects within the open-source space.
Our Motivation
We believe hands-on experimentation is the best teacher.
Combining our engineering backgrounds with our passion for board games, we’ll explore AI in a way that’s both challenging and creatively rewarding. Our ultimate goal? To hack an AI agent that’s as strategic and adaptable as a real human opponent (if not better!) — and to leverage it to design even better games... for humans to play!