While the Python-based AppArmor utils (aa-logprof etc.) are much easier to understand and maintain than the old Perl code, there are still some terribly long functions, like parse_profile_data() in aa.py, that are not easy to follow. Also, using hasher() (a recursively nesting dict) as storage can have some strange side effects. Another problem is that test coverage isn't too good, especially for the bigger functions.
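
For context: hasher() is essentially a self-nesting defaultdict (a minimal sketch of the idea - the real helper lives in apparmor/common.py). The "strange side effects" mostly come from auto-vivification: merely looking up a missing key silently creates it.

    import collections

    def hasher():
        """Arbitrarily nested dict: missing keys spring into existence on access."""
        return collections.defaultdict(hasher)

    profile = hasher()
    profile['foo']['network']['inet'] = True   # intermediate levels are created implicitly

    # The side effect: merely *looking up* a missing key also creates it.
    if profile['bar']['capability']:
        pass
    print(sorted(profile.keys()))   # ['bar', 'foo'] - 'bar' appeared just by being read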

I already wrote the CapabilityRule and CapabilityRuleset classes (and also the BaseRule and BaseRuleset classes) some months ago, and changed the code to use those classes. This code is already in upstream bzr.

My plan for hackweek is to convert more rule types into classes, and to add full test coverage for them. Besides much more readable code, this will also result in "accidentally" fixing some bugs that haven't been noticed yet.

A side goal is to keep the upstream devs busy with patch reviews by continuing the patch flood I started some weeks ago *g*

I'll start with network rules / the NetworkRule and NetworkRuleset classes (a simplified sketch of the pattern follows below), and then maybe roll a die to decide what I'll convert next ;-)
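
A simplified, hypothetical sketch of the pattern (the real BaseRule/NetworkRule classes in the AppArmor tree also handle parsing from profile text and log events, audit/deny flags and more, but the core idea is the same): each rule type becomes a small class that knows how to serialize itself and how to compare itself against other rules, which makes it trivially unit-testable.

    import unittest

    class NetworkRule:
        # hypothetical, heavily simplified - for illustration only
        def __init__(self, domain=None, type_or_protocol=None):
            self.domain = domain                       # e.g. 'inet' (None means 'all')
            self.type_or_protocol = type_or_protocol   # e.g. 'stream'

        def get_clean(self):
            '''serialize the rule back into profile syntax'''
            parts = ['network']
            if self.domain:
                parts.append(self.domain)
            if self.type_or_protocol:
                parts.append(self.type_or_protocol)
            return ' '.join(parts) + ','

        def is_covered_by(self, other):
            '''True if "other" is equal to or a superset of this rule'''
            if other.domain not in (None, self.domain):
                return False
            if other.type_or_protocol not in (None, self.type_or_protocol):
                return False
            return True

    class NetworkRuleTest(unittest.TestCase):
        def test_clean(self):
            self.assertEqual(NetworkRule('inet', 'stream').get_clean(),
                             'network inet stream,')

        def test_covered(self):
            self.assertTrue(NetworkRule('inet', 'stream').is_covered_by(NetworkRule('inet')))
            self.assertFalse(NetworkRule('inet').is_covered_by(NetworkRule('inet', 'stream')))

    if __name__ == '__main__':
        unittest.main()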

Looking for hackers with the skills:

apparmor, python, tests

This project is part of:

Hack Week 12

Activity

  • almost 10 years ago: cboltz added keyword "apparmor" to this project.
  • almost 10 years ago: cboltz added keyword "python" to this project.
  • almost 10 years ago: cboltz added keyword "tests" to this project.
  • almost 10 years ago: cboltz liked this project.
  • almost 10 years ago: cboltz started this project.
  • almost 10 years ago: cboltz originated this project.

Comments

    • cboltz, over 9 years ago:

      Some minutes ago, I finally committed the NetworkRule and NetworkRuleset classes (and the patch that actually uses them) to AppArmor bzr - they were delayed by some previous patches that got a slower-than-usual review (dependencies don't only happen for packages ;-)

      I'll continue to rewrite more rule types into classes, but that will probably have to wait until after oSC15 and can also happen without formal hackweek tracking ;-)

    Similar Projects

    Team Hedgehogs' Data Observability Dashboard by gsamardzhiev

    Description

    This project aims to develop a comprehensive Data Observability Dashboard that provides insights into key aspects of data quality and reliability. The dashboard will track:

    Data Freshness: Monitor when data was last updated and flag potential delays.

    Data Volume: Track table row counts to detect unexpected surges or drops in data.

    Data Distribution: Analyze data for null values, outliers, and anomalies to ensure accuracy.

    Data Schema: Track schema changes over time to prevent breaking changes.

    The dashboard will also keep historical data, supporting proactive data management and enhancing data trust across the data function (a rough sketch of the underlying checks follows below).

    Goals

    Although the final goal is to create a Power BI dashboard that we are able to monitor, our goals are to:

    1. Create the necessary tables that track the relevant metadata about our current data
    2. Automate the process so it runs in a timely manner

    Resources

    AWS Redshift, AWS Glue, Airflow, Python, SQL
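
    A rough sketch of what the freshness/volume/distribution checks could look like. This is illustrative only: the table and column names (sales.orders, updated_at, customer_id) and connection details are made up; Redshift speaks the PostgreSQL wire protocol, so psycopg2 works as a client.

        import psycopg2

        CHECKS = {
            # metric name -> SQL returning a single value per monitored table
            'freshness_hours': "SELECT DATEDIFF(hour, MAX(updated_at), GETDATE()) FROM {table}",
            'row_count': "SELECT COUNT(*) FROM {table}",
            'null_customer_ids': "SELECT COUNT(*) FROM {table} WHERE customer_id IS NULL",
        }

        def collect_metrics(conn, tables):
            """Run each check against each table; returns rows for the metadata/history table."""
            results = []
            with conn.cursor() as cur:
                for table in tables:
                    for metric, sql in CHECKS.items():
                        cur.execute(sql.format(table=table))
                        results.append((table, metric, cur.fetchone()[0]))
            return results

        conn = psycopg2.connect(host='example-cluster.redshift.amazonaws.com',
                                port=5439, dbname='analytics', user='observer',
                                password='...')  # placeholder credentials
        for row in collect_metrics(conn, ['sales.orders']):
            print(row)  # feed these into the dashboard's history table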

    Why Hedgehogs?

    Because we like them.


    Run local LLMs with Ollama and explore possible integrations with Uyuni by PSuarezHernandez

    Description

    Using Ollama you can easily run different LLM models on your local computer. This project is about exploring Ollama, testing different LLMs and trying to fine-tune them, as well as exploring potential ways of integration with Uyuni (a minimal sketch of talking to a local model follows below).

    Goals

    • Explore Ollama
    • Test different models
    • Try fine-tuning a model
    • Explore possible integration with Uyuni

    Resources

    • https://ollama.com/
    • https://huggingface.co/
    • https://apeatling.com/articles/part-2-building-your-training-data-for-fine-tuning/
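
    A minimal sketch of querying a locally running model: Ollama serves a small HTTP API on localhost:11434. The model name is an example and must have been pulled beforehand.

        # Query a locally running Ollama server over its HTTP API.
        # Assumes "ollama serve" is running and the model was pulled beforehand
        # (e.g. "ollama pull llama3").
        import json
        import urllib.request

        payload = {
            'model': 'llama3',
            'prompt': 'Explain in one sentence what Uyuni is.',
            'stream': False,  # return a single JSON object instead of a stream
        }
        req = urllib.request.Request(
            'http://localhost:11434/api/generate',
            data=json.dumps(payload).encode(),
            headers={'Content-Type': 'application/json'},
        )
        with urllib.request.urlopen(req) as resp:
            print(json.load(resp)['response'])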


    Symbol Relations by hli

    Description

    There are tools to build function call graphs based on parsing source code, for example, cscope.

    This project aims to achieve a similar goal by directly parsing the disassembly (i.e. objdump output) of a compiled binary. The assembly code is what the CPU sees, and is therefore more "direct". This may be useful in certain scenarios, such as gdb/crash debugging (a toy illustration of the parsing idea follows below).

    A detailed description and demos can be found in the README file (see Resources below).

    Supports x86 for now (because my customers only use x86 machines), but support for other architectures can be added easily.

    Tested with Python 3.6.
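
    A toy illustration of the idea (not the project's actual code): build a caller-to-callees map from objdump -d output on x86-64 by tracking the current function label and collecting direct call targets.

        import re
        import subprocess
        import sys
        from collections import defaultdict

        FUNC_RE = re.compile(r'^[0-9a-f]+ <(?P<name>[^>]+)>:$')                # "0000116a <main>:"
        CALL_RE = re.compile(r'\bcall[a-z]*\s+[0-9a-f]+ <(?P<target>[^>+]+)')  # "call 1030 <puts@plt>"

        def call_graph(binary):
            out = subprocess.check_output(['objdump', '-d', binary], text=True)
            graph = defaultdict(set)
            current = None
            for line in out.splitlines():
                m = FUNC_RE.match(line)
                if m:
                    current = m.group('name')   # entered a new function
                elif current:
                    m = CALL_RE.search(line)
                    if m:                       # direct call; indirect calls have no <target>
                        graph[current].add(m.group('target'))
            return graph

        for caller, callees in call_graph(sys.argv[1]).items():
            print(caller, '->', ', '.join(sorted(callees)))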

    Goals

    Any comments are welcome.

    Resources

    https://github.com/lhb-cafe/SymbolRelations

    symrellib.py: implements the symbol relation graph and the disassembly parser

    symrel_tracer*.py: implements tracing (-t option)

    symrel.py: implements the CLI parser


    Ansible for add-on management by lmanfredi

    Description

    Machines can contain various combinations of add-ons and are often modified over time.

    The list of repos can change, so I would like to create an automation able to reset the status to a given state, based on metadata available for these machines.

    Goals

    Create an Ansible automation able to take care of add-on (repo list) configuration, using metadata as a reference.

    Resources

    Results

    Created WIP project Ansible-add-on-openSUSE


    Make more sense of openQA test results using AI by livdywan

    Description

    AI has the potential to help with something many of us spend a lot of time doing: making sense of openQA logs when a job fails.

    User Story

    Allison Average has a puzzled look on their face while staring at log files that seem to make little sense. Is this a known issue, something completely new, or maybe related to infrastructure changes?

    Goals

    • Leverage a chat interface to help Allison
    • Create a model from scratch based on data from openQA
    • Proof of concept for automated analysis of openQA test results (sketched below)
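
    One conceivable shape for the proof of concept (the openQA jobs endpoint is real; the job number, model name and prompt are illustrative): fetch a job's metadata from the public openQA API and hand it to a local model for an assessment.

        import json
        import urllib.request

        JOB_ID = 123456  # hypothetical job number

        # Fetch job metadata from the public openQA API.
        with urllib.request.urlopen(
                f'https://openqa.opensuse.org/api/v1/jobs/{JOB_ID}') as resp:
            job = json.load(resp)['job']

        prompt = ('You are an openQA test reviewer. Based on this job metadata, does '
                  'the failure look like a product bug, a test issue, or an '
                  'infrastructure problem?\n' + json.dumps(job)[:4000])

        # Ask a locally running Ollama model for an assessment.
        req = urllib.request.Request(
            'http://localhost:11434/api/generate',
            data=json.dumps({'model': 'llama3', 'prompt': prompt,
                             'stream': False}).encode(),
            headers={'Content-Type': 'application/json'},
        )
        with urllib.request.urlopen(req) as resp:
            print(json.load(resp)['response'])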

    Bonus

    • Use AI to suggest solutions to merge conflicts
      • This would need a merge conflict editor that can suggest solving the conflict
    • Use image recognition for needles

    Resources

    Timeline

    Day 1

    • Conversing with open-webui to teach me how to create a model based on openQA test results

    Day 2

    Highlights

    • I briefly tested and compared models to see if they would make me more productive. Between llama, gemma and mistral there was no amazing difference in the results for my use case.
    • Convincing the chat interface to produce code specific to my use case required very explicit instructions.
    • Asking for advice on how to use open-webui itself better was frustratingly unfruitful both in trivial and more advanced regards.
    • Documentation on the source materials used by LLMs, and on tools for checking this, seems virtually non-existent - specifically whether a logo can be generated based on particular licenses.

    Outcomes

    • Chat-interface-supported development provides good starting points, and open-webui, being open source, is more flexible than Gemini - although it is currently missing some fancy features such as grounding and generated podcasts.
    • Allison still has to be very experienced with openQA to use a chat interface for test review. Publicly available system prompts would make that easier, though.