Problem statement

Right now, we have several different places where videos are pooled. The goal is to consolidate all video resources in one central place to make them easily searchable and to enable a YouTube-like experience rather than a simple file list.

Approach

Evaluate both VoctoWeb and MediaGoblin (packaging efforts for these are also Hack Week projects). The goal is to have all videos pooled in one place, searchable, and easily accessible from a browser.

Evaluation criteria:

  • Allows simple uploads
  • Allows watching in a browser
  • Allows easy download for offline consumption
  • Provides all vital metadata
  • Provides good search functionality
  • Provides means to easily embed videos into other sites

The plan is to write Salt recipes for the deployment to make it reproducible, and to publish those recipes. Once a production instance is up, the existing videos will be imported. A VM with the required resources has already been requested.
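
A minimal sketch of what such a Salt recipe might start out as, assuming VoctoWeb is chosen (package names, paths, and the service unit below are placeholders, not the final recipe):

```yaml
# Hypothetical Salt state for a video portal host; names are placeholders.
voctoweb_dependencies:
  pkg.installed:
    - pkgs:
      - ffmpeg
      - nginx
      - postgresql-server

voctoweb_checkout:
  git.latest:
    - name: https://github.com/voc/voctoweb.git   # upstream VoctoWeb sources
    - target: /srv/voctoweb

voctoweb_service:
  service.running:
    - name: voctoweb              # placeholder systemd unit
    - enable: True
    - require:
      - git: voctoweb_checkout
```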

Stretch goals

Improve availability, reduce latency

The video server will initially be located at the Nuremberg site. I'll investigate how bad the latency to other sites is under broad use, document my findings on what is used elsewhere to get the job done, and lay out a path to make the videos available to all sites as efficiently as possible. This could be a MirrorBrain-based CDN or caching nodes.

Hook up with the existing voctomix recording toolchain

A goal for this setup is to be able to feed it from projects like the openSUSE video box, which is currently Debian-based; openSUSE should be able to handle the same task. Voctomix is already packaged, but we need to bring it to the latest version and package more parts of the toolchain, such as the Conference Recording System (CRS) and ultimately the tracker GUI, whose packaging is pending a proper license from upstream.

Looking for hackers with the skills:

video voctoweb mediagoblin voctomix rubyonrails python ffmpeg gstreamer

This project is part of:

Hack Week 16

Activity

  • over 6 years ago: mcaj joined this project.
  • about 7 years ago: bruclik liked this project.
  • about 7 years ago: mstrigl liked this project.
  • about 7 years ago: mstrigl joined this project.
  • about 7 years ago: dmolkentin added keyword "gstreamer" to this project.
  • about 7 years ago: dmolkentin added keyword "ffmpeg" to this project.
  • about 7 years ago: dmolkentin added keyword "rubyonrails" to this project.
  • about 7 years ago: dmolkentin added keyword "python" to this project.
  • about 7 years ago: dmolkentin added keyword "voctoweb" to this project.
  • about 7 years ago: dmolkentin added keyword "mediagoblin" to this project.
  • about 7 years ago: dmolkentin added keyword "voctomix" to this project.
  • about 7 years ago: dmolkentin added keyword "video" to this project.
  • about 7 years ago: TBro started this project.
  • about 7 years ago: dmolkentin originated this project.

  • Comments

    • mstrigl
      about 7 years ago by mstrigl

      I think my project "Packaging mediagoblin" (https://hackweek.suse.com/16/projects/package-mediagoblin) fits here well.

    • dmolkentin
      about 7 years ago by dmolkentin

      CRS has been licensed under Apache-2 terms and is now available at https://build.opensuse.org/package/show/home:dmolkentin:video/crs-tracker, with the scripts to follow once I figure out a good way to package them.

    Similar Projects

    Update my own python audio and video time-lapse and motion capture apps and publish by dmair

    Project Description

    Many years ago, in my own time, I wrote a Qt Python application that periodically captures frames from a V4L2 video device (e.g. a webcam), and I used it to create daily weather time-lapse videos from the windows at my home. I have maintained it at home in my own time, and this year I added motion detection, making it a functional video security tool, though with no guarantees. I also wrote a Linux audio monitoring app in Python using Qt, again in my own time, which captures live signal strength along with a 24-hour history of audio signal level/range and audio spectrum. I recently added background-noise filtering to the app, and in due course I aim to include voice detection, currently assumed to be via Google's public audio interface. Neither of these is a professional home security app, but between them they let a user freely monitor video and audio data from a home in a manageable way. Both projects are on GitHub but are out of date with respect to my personal work; I would like to organize and update the GitHub versions of these projects.

    Goal for this Hackweek

    It would probably help to migrate all of the v4l2py-based video code to linuxpy.video-based code, which looks like a rewrite of large areas of the video code. It would also be good to remove a lot of Python lint that is several years old, the main goal being to push the recent changes, with better-organized code, to GitHub. If there is enough time, I'd like to take the inline Qt QSettings persistent-state code used per app and write a Python class that encapsulates the Qt QSettings class in a value_of(name)/name=value manner for shared use in projects, so that persistent state can be read or written anywhere within the apps through a simple interface.
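
    A minimal sketch of the kind of wrapper described above might look like the following (class, method, and settings names are hypothetical, not the project's actual code):

```python
# Hypothetical sketch of a QSettings wrapper exposing persistent state
# as value_of(name) for reads and plain `name = value` for writes.
from PyQt5.QtCore import QSettings

class AppSettings:
    def __init__(self, org="example-org", app="example-app"):
        self._settings = QSettings(org, app)

    def value_of(self, name, default=None):
        # Read a persisted value, falling back to a default
        return self._settings.value(name, default)

    def __setattr__(self, name, value):
        if name.startswith("_"):
            super().__setattr__(name, value)
        else:
            # Persist any public attribute assignment immediately
            self._settings.setValue(name, value)
            self._settings.sync()

# Usage:
#   settings = AppSettings()
#   settings.capture_interval = 60                         # write
#   interval = settings.value_of("capture_interval", 30)   # read
```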

    Resources

    I'm not specifically looking for help but welcome other input.


    Fix RSpec tests in order to replace the ruby-ldap rubygem in OBS by enavarro_suse

    Description

    "LDAP mode is not official supported by OBS!". See: config/options.yml.example#L100-L102

    However, there is an RSpec file which tests LDAP mode in OBS. These tests use the ruby-ldap rubygem, mocking the results returned by an LDAP server.

    The ruby-ldap rubygem seems to be no longer maintained, and it also prevents updating to a more recent Ruby version. A good alternative is to replace it with the net-ldap rubygem.

    Before replacing the ruby-ldap rubygem, we should modify the tests so they don't mock the responses of an LDAP server and instead run against a real LDAP server.

    Goals

    Goals of this project:

    • Modify the RSpec tests and run them against a real LDAP server
    • Replace the ruby-ldap rubygem with the net-ldap rubygem

    Achieving the above mentioned goals will:

    • Permit upgrading OBS from Ruby 3.1 to Ruby 3.2
    • Take a step towards officially supporting LDAP in OBS.

    Resources


    Symbol Relations by hli

    Description

    There are tools to build function call graphs based on parsing source code, for example, cscope.

    This project aims to achieve a similar goal by directly parsing the disassembly (i.e. objdump output) of a compiled binary. The assembly code is what the CPU sees, and is therefore more "direct". This may be useful in certain scenarios, such as gdb/crash debugging.

    A detailed description and demos can be found in the README file:

    Supports x86 for now (because my customers only use x86 machines), but support for other architectures can be added easily.

    Tested with Python 3.6
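
    As a rough illustration of the approach (this is not the project's actual parser), a few lines of Python are enough to pull direct call edges out of `objdump -d` output on x86:

```python
# Illustrative only: build a caller -> callee map from "objdump -d" output.
import re
import subprocess
import sys
from collections import defaultdict

FUNC_RE = re.compile(r"^[0-9a-f]+ <(?P<name>[^>]+)>:$")           # e.g. "0000000000001149 <main>:"
CALL_RE = re.compile(r"\bcallq?\s+[0-9a-f]+ <(?P<callee>[^>+]+)")  # e.g. "callq  1149 <foo>"

def call_graph(binary_path):
    """Return {caller: set(callees)} parsed from the disassembly."""
    disasm = subprocess.check_output(
        ["objdump", "-d", binary_path], universal_newlines=True
    )
    graph = defaultdict(set)
    current = None
    for line in disasm.splitlines():
        header = FUNC_RE.match(line)
        if header:
            current = header.group("name")   # entered a new function body
            continue
        call = CALL_RE.search(line)
        if call and current:
            graph[current].add(call.group("callee"))
    return graph

if __name__ == "__main__":
    for caller, callees in sorted(call_graph(sys.argv[1]).items()):
        print(caller, "->", ", ".join(sorted(callees)))
```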

    Goals

    Any comments are welcome.

    Resources

    https://github.com/lhb-cafe/SymbolRelations

    symrellib.py: implements the symbol relation graph and the disassembly parser

    symrel_tracer*.py: implements tracing (-t option)

    symrel.py: "cli parser"


    Run local LLMs with Ollama and explore possible integrations with Uyuni by PSuarezHernandez

    Description

    Using Ollama you can easily run different LLM models on your local computer. This project is about exploring Ollama, testing different LLMs, and trying to fine-tune them, as well as exploring potential ways to integrate it with Uyuni.
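
    For example, a locally running Ollama instance can be queried from Python over its REST API (this assumes the default endpoint on localhost:11434 and that a model has already been pulled, e.g. with `ollama pull llama3`):

```python
# Minimal sketch: send one prompt to a local Ollama server and print the answer.
import json
import urllib.request

def ask_ollama(prompt, model="llama3", host="http://localhost:11434"):
    payload = json.dumps({
        "model": model,
        "prompt": prompt,
        "stream": False,   # request a single JSON response instead of a stream
    }).encode("utf-8")
    req = urllib.request.Request(
        f"{host}/api/generate",
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

if __name__ == "__main__":
    print(ask_ollama("Summarize what Uyuni does in one sentence."))
```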

    Goals

    • Explore Ollama
    • Test different models
    • Fine-tune models
    • Explore possible integrations with Uyuni

    Resources

    • https://ollama.com/
    • https://huggingface.co/
    • https://apeatling.com/articles/part-2-building-your-training-data-for-fine-tuning/


    Saline (state deployment control and monitoring tool for SUSE Manager/Uyuni) by vizhestkov

    Project Description

    Saline is an addition to Salt, used in SUSE Manager/Uyuni, aimed at providing better control and visibility of state deployment in large-scale environments.

    In its current state, the published version can be used only as a Prometheus exporter, and it is missing some of the key features implemented in the (unpublished) PoC. It can already provide metrics related to Salt events and the state apply process on the minions, but no control over this process is implemented yet.
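
    Not Saline itself, but a minimal sketch of that exporter idea: watch the Salt event bus (here via the `salt-run state.event` runner) and expose a simple counter with the prometheus_client library (the metric name and port are made up):

```python
# Sketch only: count Salt events by tag prefix and expose them to Prometheus.
import subprocess
from prometheus_client import Counter, start_http_server

SALT_EVENTS = Counter("salt_events_total", "Salt events seen, by tag prefix", ["prefix"])

def watch_event_bus():
    # `salt-run state.event pretty=False` prints one "tag<TAB>json" line per event
    proc = subprocess.Popen(
        ["salt-run", "state.event", "pretty=False"],
        stdout=subprocess.PIPE, universal_newlines=True,
    )
    for line in proc.stdout:
        tag, _, _data = line.partition("\t")
        prefix = "/".join(tag.split("/")[:2]) or "unknown"
        SALT_EVENTS.labels(prefix=prefix).inc()

if __name__ == "__main__":
    start_http_server(9110)   # made-up port; metrics served on /metrics
    watch_event_bus()
```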

    Continue with implementation of the missing features and improve the existing implementation:

    • authentication (need to decide how it should be/or not related to salt auth)

    • web service providing the control of states deployment

    Goal for this Hackweek

    • Implement missing key features

    • Implement the state deployment control tool with a CLI

    Resources

    https://github.com/openSUSE/saline


    Team Hedgehogs' Data Observability Dashboard by gsamardzhiev

    Description

    This project aims to develop a comprehensive Data Observability Dashboard that provides insights into key aspects of data quality and reliability. The dashboard will track:

    Data Freshness: Monitor when data was last updated and flag potential delays.

    Data Volume: Track table row counts to detect unexpected surges or drops in data.

    Data Distribution: Analyze data for null values, outliers, and anomalies to ensure accuracy.

    Data Schema: Track schema changes over time to prevent breaking changes.

    The dashboard will also support historical tracking, enabling proactive data management and enhancing data trust across the data function.
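
    As a toy illustration of the freshness and volume checks (all table and column names below are made up, and a psycopg2-style Redshift connection is assumed), a collection job could look roughly like this:

```python
# Sketch: record row counts and last-update timestamps for monitored tables.
from datetime import datetime, timezone

MONITORED_TABLES = {
    # table name -> column holding the last-update timestamp (hypothetical)
    "sales.orders": "updated_at",
    "sales.customers": "updated_at",
}

def collect_metrics(conn):
    """Insert one freshness/volume row per monitored table into a metadata table."""
    now = datetime.now(timezone.utc)
    with conn.cursor() as cur:
        for table, ts_column in MONITORED_TABLES.items():
            cur.execute(f"SELECT COUNT(*), MAX({ts_column}) FROM {table}")
            row_count, last_update = cur.fetchone()
            cur.execute(
                "INSERT INTO observability.table_metrics "
                "(table_name, checked_at, row_count, last_update) "
                "VALUES (%s, %s, %s, %s)",
                (table, now, row_count, last_update),
            )
    conn.commit()
```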

    Goals

    Although the final goal is to create a Power BI dashboard that we are able to monitor, our goals are to:

    1. Create the necessary tables that track the relevant metadata about our current data
    2. Automate the process so it runs in a timely manner

    Resources

    AWS Redshift, AWS Glue, Airflow, Python, SQL

    Why Hedgehogs?

    Because we like them.


    SUSE AI Meets the Game Board by moio

    Use tabletopgames.ai’s open source TAG and PyTAG frameworks to apply Statistical Forward Planning and Deep Reinforcement Learning to two board games of our own design. On an all-green, all-open source, all-AWS stack!
    A chameleon playing chess in a train car, as a metaphor of SUSE AI applied to games


    Results: Infrastructure Achievements

    We successfully built and automated a containerized stack to support our AI experiments. This included:

    Screenshot: ./deploy.sh and voilà - Kubernetes running PyTAG (k9s, above) with GPU acceleration (nvtop, below)

    Results: Game Design Insights

    Our project focused on modeling and analyzing two card games of our own design within the TAG framework:

    • Game Modeling: We implemented models for Dario's "Bamboo" and Silvio's "Totoro" and "R3" games, enabling AI agents to play thousands of games ...in minutes!
    • AI-driven optimization: By analyzing statistical data on moves, strategies, and outcomes, we iteratively tweaked the game mechanics and rules to achieve better balance and player engagement.
    • Advanced analytics: Leveraging AI agents with Monte Carlo Tree Search (MCTS) and random action selection, we compared performance metrics to identify optimal strategies and uncover opportunities for game refinement.

    A family picture of our card games in progress. From the top: Bamboo, Totoro, R3

    Results: Learning, Collaboration, and Innovation

    Beyond technical accomplishments, the project showcased innovative approaches to coding, learning, and teamwork:

    • "Trio programming" with AI assistance: Our "trio programming" approach (two developers and GitHub Copilot) was a standout success, especially in handling slightly-repetitive but not-quite-exactly-copy-paste tasks. Java tends to be a verbose language, and we found it to fit this approach particularly well.
    • AI tools for reporting and documentation: We extensively used AI chatbots to streamline writing and reporting. (Including writing this report! ...but this note was added manually during edit!)
    • GPU compute expertise: Overcoming challenges with CUDA drivers and cloud infrastructure deepened our understanding of GPU-accelerated workloads in the open-source ecosystem.
    • Game design as a learning platform: By blending AI techniques with creative game design, we learned not only about AI strategies but also about making games fun, engaging, and balanced.

    Last but not least we had a lot of fun! ...and this was definitely not a chatbot generated line!

    The Context: AI + Board Games