Project Description

People need to test operating systems and applications on the s390 platform.

Installation from scratch solutions include:

  • just deploy and provision manually (with the help of the ftpboot script, if you are at SUSE)
  • use s3270 terminal emulation (used by openQA people?)
  • use LXC from IBM to start CP commands and analyze the results
  • use zPXE to do some PXE-like booting (used by the orthos team?)
  • use tessia to install from scratch using AutoYaST
  • use libvirt for s390 to do some nested virtualization on an already deployed z/VM system
  • directly install a Linux kernel on an LPAR and use kvm + libvirt from there

Deployment from image solutions include:

  • use the ICIC web interface (OpenStack in disguise, contributed by IBM)
  • use ICIC from the OpenStack Terraform provider (used by Rancher QA)
  • use zvm_ansible to control SMAPI
  • connect directly to SMAPI low-level socket interface

IBM Cloud Infrastructure Center (ICIC) harnesses the Feilong API, but you can use Feilong without installing ICIC, provided you set up a "z/VM cloud connector" in one of your VMs following this schema.

What about writing a Terraform Feilong provider, just like we have the Terraform libvirt provider? That would allow transparently calling Feilong from your main.tf files to deploy and destroy resources on your System z.
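To make this concrete, here is a minimal Go sketch of such a provider's entry point, using HashiCorp's terraform-plugin-framework (plugin protocol 6). The registry address, the provider type name, and the connector attribute are illustrative assumptions, not the published provider:

```go
// Minimal sketch of a Terraform provider skeleton for Feilong, built on
// terraform-plugin-framework. All names below are illustrative.
package main

import (
	"context"

	"github.com/hashicorp/terraform-plugin-framework/datasource"
	"github.com/hashicorp/terraform-plugin-framework/provider"
	"github.com/hashicorp/terraform-plugin-framework/provider/schema"
	"github.com/hashicorp/terraform-plugin-framework/providerserver"
	"github.com/hashicorp/terraform-plugin-framework/resource"
)

type feilongProvider struct{}

func (p *feilongProvider) Metadata(_ context.Context, _ provider.MetadataRequest, resp *provider.MetadataResponse) {
	resp.TypeName = "feilong"
}

func (p *feilongProvider) Schema(_ context.Context, _ provider.SchemaRequest, resp *provider.SchemaResponse) {
	resp.Schema = schema.Schema{
		Attributes: map[string]schema.Attribute{
			// hypothetical attribute: where the z/VM cloud connector listens
			"connector": schema.StringAttribute{Required: true},
		},
	}
}

func (p *feilongProvider) Configure(_ context.Context, _ provider.ConfigureRequest, _ *provider.ConfigureResponse) {
	// a real provider would build a Feilong API client from the configuration here
}

func (p *feilongProvider) DataSources(_ context.Context) []func() datasource.DataSource {
	return nil
}

func (p *feilongProvider) Resources(_ context.Context) []func() resource.Resource {
	return nil // a real provider would register e.g. a "feilong_guest" resource here
}

func main() {
	providerserver.Serve(context.Background(), func() provider.Provider {
		return &feilongProvider{}
	}, providerserver.ServeOpts{
		Address: "registry.terraform.io/example/feilong", // hypothetical address
	})
}
```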

Other Feilong-based solutions include:

  • make libvirt Feilong-aware
  • simply call Feilong from shell scripts with curl (see the sketch after this list)
  • use the zvmconnector client Python library from Feilong
  • use the zthin part of Feilong to command SMAPI directly
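To illustrate the curl option, here is a minimal Go sketch of the same REST call a shell script would make. The GET /guests endpoint follows the Feilong REST API documentation, but the connector address is a placeholder and any authentication your deployment requires is omitted:

```go
// Minimal sketch: list z/VM guests through the Feilong REST API, the same
// call a "curl http://connector:8080/guests" one-liner would make.
// Verify the endpoint and authentication against your Feilong deployment.
package main

import (
	"fmt"
	"io"
	"net/http"
)

func main() {
	connector := "http://zvm-connector.example.com:8080" // placeholder address

	resp, err := http.Get(connector + "/guests") // list defined guests
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	body, err := io.ReadAll(resp.Body)
	if err != nil {
		panic(err)
	}
	fmt.Println(string(body)) // JSON document describing the guests
}
```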

Goal for Hackweek 23

My final goal is to be able to easily deploy and provision VMs automatically on a z/VM system, in a way that people might enjoy even outside of SUSE.

My technical preference is to write a Terraform provider plugin, as it is the approach that involves the fewest software components for our deployments, while remaining clean and compatible with our existing development infrastructure.

Goals for Hackweek 24

The Feilong provider works and is used internally by the SUSE Manager team. Let's push it forward!

Let's add support for Fibre Channel disks and multipath.

Goals for Hackweek 25

  • Finish support for Fibre Channel disks and multipath
  • Fix problems with registration on the HashiCorp provider registry
  • Finish the U part of CRUD
  • Move from private repos to Open Mainframe project

Resources

Outcome

Looking for hackers with the skills:

s390 mainframe zvm golang terraform deployment

This project is part of:

Hack Week 23 Hack Week 24

Activity

  • about 1 month ago: pinvernizzi liked this project.
  • about 1 year ago: juliogonzalezgil liked this project.
  • about 1 year ago: e_bischoff liked this project.
  • about 1 year ago: mfriesenegger liked this project.
  • about 1 year ago: dgedon liked this project.
  • about 1 year ago: mfranc liked this project.
  • about 1 year ago: e_bischoff started this project.
  • about 1 year ago: e_bischoff added keyword "deployment" to this project.
  • about 1 year ago: e_bischoff added keyword "terraform" to this project.
  • about 1 year ago: e_bischoff added keyword "golang" to this project.
  • about 1 year ago: e_bischoff added keyword "zvm" to this project.
  • about 1 year ago: e_bischoff added keyword "s390" to this project.
  • about 1 year ago: e_bischoff added keyword "mainframe" to this project.
  • about 1 year ago: e_bischoff originated this project.

  • Comments

    • mfriesenegger
      about 1 year ago by mfriesenegger

      As the Feilong project chair, I like the terraform-feilong-provider project and making libvirt Feilong-aware. I will support your effort!

    • e_bischoff
      about 1 year ago by e_bischoff

      Thanks for your support, Mike. For the moment, I am still not completely sure whether I will take the Terraform approach or the libvirt approach. The first seems better to me, as for practical purposes it is one less software layer for us. I could even pick something completely different. But so far, the Terraform provider approach seems the most promising for the least effort.

    • e_bischoff
      about 1 year ago by e_bischoff

      OK, decision taken, I will stick to the terraform approach.

      The bad part is that I have to code for two versions of Terraform, as we need to support both plugin protocols 5 and 6.

      The good part is that we get a golang library for Feilong for free (there is an existing z/VM connector golang project, but it does not provide marshalling and demarshalling).
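      To illustrate what "marshalling" means here: the Go library has to map Feilong's JSON request and response payloads onto Go structs and back. A tiny sketch of that idea, with field names that are purely illustrative rather than Feilong's actual wire format:

      ```go
      // Hypothetical sketch of the marshalling layer: Go structs <-> Feilong JSON.
      // Field names are illustrative, not Feilong's actual wire format.
      package main

      import (
          "encoding/json"
          "fmt"
      )

      // CreateGuestParams mirrors (hypothetically) the body of a guest creation call.
      type CreateGuestParams struct {
          UserID   string `json:"userid"`
          VCPUs    int    `json:"vcpus"`
          MemoryMB int    `json:"memory"`
      }

      func main() {
          params := CreateGuestParams{UserID: "LINUX001", VCPUs: 2, MemoryMB: 2048}

          body, err := json.Marshal(params) // marshalling: struct -> JSON
          if err != nil {
              panic(err)
          }
          fmt.Println(string(body))

          var back CreateGuestParams
          if err := json.Unmarshal(body, &back); err != nil { // demarshalling: JSON -> struct
              panic(err)
          }
          fmt.Printf("%+v\n", back)
      }
      ```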

    • e_bischoff
      about 1 year ago by e_bischoff

      We have a working provider and a partial Go library. Mission accomplished, although it would be nice to attract other contributors and fill in the holes.

    • e_bischoff
      11 months ago by e_bischoff

      The Go library now has 100% coverage.

      I'm not sure anymore that the protocol 5 provider was useful, but I'll keep maintaining it because it's handy for my tests.

      The provider still lacks the R and U parts of CRUD.

    • e_bischoff
      17 days ago by e_bischoff

      In the Feilong source code, some functions are undocumented, mainly around Fibre Channel. This means that the Go library is not as complete as I thought. I will try to add the missing Go methods, as well as the documentation in Feilong itself.

    • e_bischoff
      12 days ago by e_bischoff

      I have added all the missing functions to the Go library. I also wrote the upstream API documentation for all the missing functions, and tried to fix as much of the rest of the upstream API documentation as I could. That was around one thousand modified documentation lines.

      Using these new functions, and with Mike's help, I was able to make both Fibre Channel and multipath work. We still need ad hoc images though, with the multipath-tools package installed and the multipathd service enabled. I'll try to get them either from Mike or from the public cloud team.

    • e_bischoff
      8 days ago by e_bischoff

      Hack Week 24 is coming to an end. I prepared a TODO list for Hack Week 25 with the remaining issues.

    Similar Projects

    Contribute to terraform-provider-libvirt by pinvernizzi

    Description

    The SUSE Manager (SUMA) team's main tool for infrastructure automation, Sumaform, largely relies on terraform-provider-libvirt. That provider is also widely used by other teams, both inside and outside SUSE.

    It would be good to help the maintainers of this project and give back to the community around it, after all the amazing work that has been already done.

    If you're interested in any of infrastructure automation, Terraform, virtualization, tooling development, or Go, it is also a good chance to learn a bit about them all by putting your hands on an interesting, real-use-case, complex project.

    Goals

    • Get more familiar with Terraform provider development and libvirt bindings in Go
    • Solve some issues and/or implement some features
    • Get in touch with the community around the project

    Resources


    file-organizer: A CLI Tool for Efficient File Management by okhatavkar

    Description

    Create a Go-based CLI tool that helps organize files in a specified folder by sorting them into subdirectories based on defined criteria, such as file type or creation date. Users will pass a folder path as an argument, and the tool will process and organize the files within it.

    Goals

    • Develop Go skills by building a practical command-line application.
    • Learn to manage and manipulate files and directories in Go using standard libraries.
    • Create a tool that simplifies file management, making it easier to organize and maintain directories.

    Resources

    • Go Standard Libraries: Utilize os, filepath, and time for file operations.
    • CLI Development: Use flag for basic argument parsing or consider cobra for enhanced functionality.
    • Go Learning Material: Go by Example and The Go Programming Language Documentation.

    Features

    • File Type Sorting: Automatically move files into subdirectories based on their extensions (e.g., documents, images, videos); see the sketch after this list.
    • Date-Based Organization: Add an option to organize files by creation date into year/month folders.
    • User-Friendly CLI: Build intuitive commands and clear outputs for ease of use.
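    A minimal Go sketch of the file-type sorting idea, using only the standard os, path/filepath, and strings libraries; the extension-to-folder mapping is deliberately naive and purely illustrative:

    ```go
    // Minimal sketch of file-type sorting: move every file in the given folder
    // into a subdirectory named after its extension. Purely illustrative.
    package main

    import (
        "fmt"
        "os"
        "path/filepath"
        "strings"
    )

    func main() {
        if len(os.Args) < 2 {
            fmt.Fprintln(os.Stderr, "usage: file-organizer <folder>")
            os.Exit(1)
        }
        dir := os.Args[1]

        entries, err := os.ReadDir(dir)
        if err != nil {
            panic(err)
        }
        for _, e := range entries {
            if e.IsDir() {
                continue // leave existing subdirectories alone
            }
            ext := strings.TrimPrefix(strings.ToLower(filepath.Ext(e.Name())), ".")
            if ext == "" {
                ext = "other"
            }
            target := filepath.Join(dir, ext)
            if err := os.MkdirAll(target, 0o755); err != nil {
                panic(err)
            }
            if err := os.Rename(filepath.Join(dir, e.Name()), filepath.Join(target, e.Name())); err != nil {
                panic(err)
            }
            fmt.Printf("moved %s -> %s/\n", e.Name(), ext)
        }
    }
    ```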


    kubectl clone: Seamlessly Clone Kubernetes Resources Across Multiple Rancher Clusters and Projects by dpunia

    Description

    kubectl clone is a kubectl plugin that empowers users to clone Kubernetes resources across multiple clusters and projects managed by Rancher. It simplifies the process of duplicating resources from one cluster to another or within different namespaces and projects, with optional on-the-fly modifications. This tool enhances multi-cluster resource management, making it invaluable for environments where Rancher orchestrates numerous Kubernetes clusters.

    Goals

    1. Seamless Multi-Cluster Cloning
      • Clone Kubernetes resources across clusters/projects with one command.
      • Simplifies management, reduces operational effort.

    Resources

    1. Rancher & Kubernetes Docs

      • Rancher API, Cluster Management, Kubernetes client libraries.
    2. Development Tools

      • Kubectl plugin docs, Go programming resources.

    Building and Installing the Plugin

    1. Set Environment Variables: Export the Rancher URL and API token:
    • export RANCHER_URL="https://rancher.example.com"
    • export RANCHER_TOKEN="token-xxxxx:xxxxxxxxxxxxxxxxxxxx"
    2. Build the Plugin: Compile the Go program:
    • go build -o kubectl-clone ./pkg/
    3. Install the Plugin: Move the executable to a directory in your PATH:
    • mv kubectl-clone /usr/local/bin/

    Ensure the file is executable:

    • chmod +x /usr/local/bin/kubectl-clone
    4. Verify the Plugin Installation: Test the plugin by running:
    • kubectl clone --help

    You should see the usage information for the kubectl-clone plugin.
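    For context, kubectl treats any executable named kubectl-<name> found on the PATH as a plugin, which is why the steps above work. A minimal Go sketch of such an entry point follows; the flags mirror the usage examples below, but the real project's interface may differ:

    ```go
    // Minimal sketch of a kubectl plugin entry point. kubectl finds any
    // executable named kubectl-clone on the PATH; the flags shown are
    // illustrative of the usage examples, not the project's actual interface.
    package main

    import (
        "flag"
        "fmt"
        "os"
    )

    func main() {
        sourceCluster := flag.String("source-cluster", "", "Rancher ID of the source cluster")
        targetCluster := flag.String("target-cluster", "", "Rancher ID of the target cluster")
        kind := flag.String("type", "deployment", "resource type to clone")
        name := flag.String("name", "", "resource name to clone")
        flag.Parse()

        rancherURL := os.Getenv("RANCHER_URL")     // set during installation, see above
        rancherToken := os.Getenv("RANCHER_TOKEN") // set during installation, see above
        if rancherURL == "" || rancherToken == "" {
            fmt.Fprintln(os.Stderr, "RANCHER_URL and RANCHER_TOKEN must be set")
            os.Exit(1)
        }

        fmt.Printf("would clone %s/%s from %s to %s via %s\n",
            *kind, *name, *sourceCluster, *targetCluster, rancherURL)
        _ = rancherToken // the real plugin would authenticate Rancher API calls with this
    }
    ```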

    Usage Examples

    1. Clone a Deployment from One Cluster to Another:
    • kubectl clone --source-cluster c-abc123 --type deployment --name nginx-deployment --target-cluster c-def456 --new-name nginx-deployment-clone
    2. Clone a Service into Another Namespace and Modify Labels:


    toptop - a top clone written in Go by dshah

    Description

    toptop is a clone of Linux's top CLI tool, but written in Go.

    Goals

    Learn more about Go (mainly bubbletea) and Linux

    Resources

    GitHub


    Harvester Packer Plugin by mrohrich

    Description

    HashiCorp Packer is an automation tool that allows automatic customized VM image builds, assuming the user has a virtualization tool at their disposal. To make use of Harvester as such a virtualization tool, a plugin for Packer needs to be written. With this plugin, users could make use of their Harvester cluster to build customized VM images, something they likely want to do if they have a Harvester cluster.

    Goals

    Write a Packer plugin bridging the gap between Harvester and Packer. Users should be able to create customized VM images using Packer and Harvester with no need to utilize another virtualization platform.
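    A Packer builder plugin is a Go binary built on the packer-plugin-sdk. A minimal skeleton of the shape such a plugin takes is sketched below, with the Harvester-specific logic stubbed out; this is not the linked project's actual code:

    ```go
    // Skeleton of a Packer builder plugin; the Harvester logic is stubbed out.
    package main

    import (
        "context"

        "github.com/hashicorp/hcl/v2/hcldec"
        packersdk "github.com/hashicorp/packer-plugin-sdk/packer"
        "github.com/hashicorp/packer-plugin-sdk/plugin"
    )

    // Builder would hold the Harvester connection and image build configuration.
    type Builder struct{}

    func (b *Builder) ConfigSpec() hcldec.ObjectSpec { return nil }

    func (b *Builder) Prepare(raws ...interface{}) ([]string, []string, error) {
        // parse and validate the builder configuration here
        return nil, nil, nil
    }

    func (b *Builder) Run(ctx context.Context, ui packersdk.Ui, hook packersdk.Hook) (packersdk.Artifact, error) {
        // here the builder would create a VM in Harvester, run provisioners,
        // and export the resulting image as a Packer artifact
        ui.Say("building a VM image on Harvester (stub)")
        return nil, nil
    }

    func main() {
        set := plugin.NewSet()
        set.RegisterBuilder(plugin.DEFAULT_NAME, new(Builder))
        if err := set.Run(); err != nil {
            panic(err)
        }
    }
    ```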

    Resources

    Hashicorp documentation for building custom plugins for Packer https://developer.hashicorp.com/packer/docs/plugins/creation/custom-builders

    Source repository of the Harvester Packer plugin https://github.com/m-ildefons/harvester-packer-plugin


    Rancher/k8s Trouble-Maker by tonyhansen

    Project Description

    When studying for my RHCSA, I found trouble-maker, which is a program that breaks a Linux OS and requires you to fix it. I want to create something similar for Rancher/k8s that can allow for troubleshooting an unknown environment.

    Goal for this Hackweek

    Create a basic framework for creating Rancher/k8s cluster lab environments as needed for the break/fix exercises. Create at least 5 modules that can be applied to the cluster and require troubleshooting.

    Resources

    • https://github.com/rancher/terraform-provider-rancher2
    • https://github.com/rancher/tf-rancher-up


    SUSE AI Meets the Game Board by moio

    Use tabletopgames.ai’s open source TAG and PyTAG frameworks to apply Statistical Forward Planning and Deep Reinforcement Learning to two board games of our own design. On an all-green, all-open source, all-AWS stack!
    [Image: a chameleon playing chess in a train car, as a metaphor of SUSE AI applied to games]


    Results: Infrastructure Achievements

    We successfully built and automated a containerized stack to support our AI experiments. This included:

    [Screenshot: k9s and nvtop showing PyTAG running in Kubernetes with GPU acceleration]

    ./deploy.sh and voilà - Kubernetes running PyTAG (k9s, above) with GPU acceleration (nvtop, below)

    Results: Game Design Insights

    Our project focused on modeling and analyzing two card games of our own design within the TAG framework:

    • Game Modeling: We implemented models for Dario's "Bamboo" and Silvio's "Totoro" and "R3" games, enabling AI agents to play thousands of games ...in minutes!
    • AI-driven optimization: By analyzing statistical data on moves, strategies, and outcomes, we iteratively tweaked the game mechanics and rules to achieve better balance and player engagement.
    • Advanced analytics: Leveraging AI agents with Monte Carlo Tree Search (MCTS) and random action selection, we compared performance metrics to identify optimal strategies and uncover opportunities for game refinement.

    [Image: cards from the three games - a family picture of our card games in progress. From the top: Bamboo, Totoro, R3]

    Results: Learning, Collaboration, and Innovation

    Beyond technical accomplishments, the project showcased innovative approaches to coding, learning, and teamwork:

    • "Trio programming" with AI assistance: Our "trio programming" approach—two developers and GitHub Copilot—was a standout success, especially in handling slightly-repetitive but not-quite-exactly-copypaste tasks. Java as a language tends to be verbose and we found it to be fitting particularly well.
    • AI tools for reporting and documentation: We extensively used AI chatbots to streamline writing and reporting. (Including writing this report! ...but this note was added manually during edit!)
    • GPU compute expertise: Overcoming challenges with CUDA drivers and cloud infrastructure deepened our understanding of GPU-accelerated workloads in the open-source ecosystem.
    • Game design as a learning platform: By blending AI techniques with creative game design, we learned not only about AI strategies but also about making games fun, engaging, and balanced.

    Last but not least we had a lot of fun! ...and this was definitely not a chatbot generated line!

    The Context: AI + Board Games


    Testing and adding GNU/Linux distributions on Uyuni by juliogonzalezgil

    Join the Gitter channel! https://gitter.im/uyuni-project/hackweek

    Uyuni is a configuration and infrastructure management tool that saves you time and headaches when you have to manage and update tens, hundreds or even thousands of machines. It also manages configuration, can run audits, build image containers, monitor and much more!

    Currently there are a few distributions that are completely untested on Uyuni or SUSE Manager (AFAIK), or that have not been tested for a long time. It could be interesting to know how hard it would be to work with them and, if possible, to fix whatever is broken.

    For newcomers, the easiest distributions are those based on DEB or RPM packages. Distributions with other package formats are doable, but will require adapting the Python and Java code to be able to sync and analyze such packages (and if salt does not support those packages, it will need changes as well). So if you want a distribution with another package format, make sure you are comfortable handling such changes.

    No developer experience? No worries! We had non-developer contributors in the past, and we are ready to help as long as you are willing to learn. If you don't want to code at all, you can also help us prepare the documentation after someone else has the initial code ready, or you could help with testing :-)

    The idea is testing Salt and Salt-ssh clients, but NOT traditional clients, which are deprecated.

    To consider that a distribution has basic support, we should cover at least (points 3-6 are to be tested for both salt minions and salt ssh minions):

    1. Reposync (this will require using spacewalk-common-channels and adding channels to the .ini file)
    2. Onboarding (salt minion from UI, salt minion from bootstrap script, and salt-ssh minion) (this will probably require adding the OS to the bootstrap repository creator)
    3. Package management (install, remove, update...)
    4. Patching
    5. Applying any basic salt state (including a formula)
    6. Salt remote commands
    7. Bonus point: Java part for product identification, and monitoring enablement
    8. Bonus point: sumaform enablement (https://github.com/uyuni-project/sumaform)
    9. Bonus point: Documentation (https://github.com/uyuni-project/uyuni-docs)
    10. Bonus point: testsuite enablement (https://github.com/uyuni-project/uyuni/tree/master/testsuite)

    If something is breaking, we can try to fix it, but the main idea is to research how well supported it is right now. Beyond that, it's up to each project member how much to hack :-)

    • If you don't have knowledge about some of the steps: ask the team
    • If you still don't know what to do: switch to another distribution and keep testing.

    This card is for EVERYONE, not just developers. Seriously! We had people from other teams helping that were not developers, and added support for Debian and new SUSE Linux Enterprise and openSUSE Leap versions :-)

    Pending

    FUSS

    FUSS is a complete GNU/Linux solution (server, client and desktop/standalone) based on Debian for managing an educational network.

    https://fuss.bz.it/

    Seems to be a Debian 12 derivative, so adding it could be quite easy.

    • [W] Reposync (this will require using spacewalk-common-channels and adding channels to the .ini file)
    • [W] Onboarding (salt minion from UI, salt minion from bootstrap script, and salt-ssh minion) (this will probably require adding OS to the bootstrap repository creator) --> Working for all 3 options (salt minion UI, salt minion bootstrap script and salt-ssh minion from the UI).
    • [W] Package management (install, remove, update...) --> Installing a new package works; the rest still needs testing.
    • [I] Patching (if patch information is available, could require writing some code to parse it, but IIRC we have support for Ubuntu already). No patches detected. Do we support patches for Debian at all?
    • [W] Applying any basic salt state (including a formula)
    • [W] Salt remote commands
    • [ ] Bonus point: Java part for product identification, and monitoring enablement

