Project Description

People need to test operating systems and applications on the s390 platform.

Installation from scratch solutions include:

  • just deploy and provision manually (with the help of the ftpboot script, if you are at SUSE)
  • use s3270 terminal emulation (used by openQA people?)
  • use LXC from IBM to start CP commands and analyze the results
  • use zPXE to do some PXE-like booting (used by the orthos team?)
  • use tessia to install from scratch using autoyast
  • use libvirt for s390 to do some nested virtualization on some already deployed z/VM system
  • directly install a Linux kernel on an LPAR and use KVM + libvirt from there

Deployment from image solutions include:

  • use the ICIC web interface (OpenStack in disguise, contributed by IBM)
  • use ICIC from the openstack terraform provider (used by Rancher QA)
  • use zvm_ansible to control SMAPI
  • connect directly to SMAPI low-level socket interface

IBM Cloud Infrastructure Center (ICIC) harnesses the Feilong API, but you can use Feilong without installing ICIC, provided you set up a "z/VM cloud connector" in one of your VMs following this schema.

What about writing a terraform Feilong provider, just like we have the terraform libvirt provider? That would allow you to transparently call Feilong from your main.tf files to deploy and destroy resources on your system/z.
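
Under the hood, such a provider would essentially translate the resources declared in your main.tf into calls to the Feilong REST API. As a rough sketch of what one of those calls could look like in Go (the endpoint path, the payload fields and the host name are assumptions drawn from my reading of the Feilong documentation, not the provider's actual code):

  package main

  import (
      "bytes"
      "encoding/json"
      "fmt"
      "net/http"
  )

  // disk describes one minidisk of the guest to be created.
  type disk struct {
      Size string `json:"size"`
  }

  // createGuestRequest is a guess at the payload expected by "POST /guests";
  // check the Feilong API documentation for the authoritative schema.
  type createGuestRequest struct {
      Guest struct {
          UserID   string `json:"userid"`
          VCPUs    int    `json:"vcpus"`
          Memory   int    `json:"memory"` // in MiB
          DiskList []disk `json:"disk_list"`
      } `json:"guest"`
  }

  func main() {
      var req createGuestRequest
      req.Guest.UserID = "LINUX01"
      req.Guest.VCPUs = 2
      req.Guest.Memory = 2048
      req.Guest.DiskList = []disk{{Size: "10g"}}

      body, err := json.Marshal(req)
      if err != nil {
          panic(err)
      }

      // "feilong.example.com" stands for the VM running the z/VM cloud connector.
      resp, err := http.Post("http://feilong.example.com/guests",
          "application/json", bytes.NewReader(body))
      if err != nil {
          fmt.Println("request failed:", err)
          return
      }
      defer resp.Body.Close()
      fmt.Println("Feilong answered:", resp.Status)
  }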

Other Feilong-based solutions include:

  • make libvirt Feilong-aware
  • simply call Feilong from shell scripts with curl
  • use the zvmconnector client Python library from Feilong
  • use the zthin part of Feilong to command SMAPI directly

Goal for Hackweek 23

My final goal is to be able to easily deploy and provision VMs automatically on a z/VM system, in a way that people might enjoy even outside of SUSE.

My technical preference is to write a terraform provider plugin, as it is the approach that involves the fewest software components for our deployments, while remaining clean and compatible with our existing development infrastructure.

Goals for Hackweek 24

The Feilong provider works and is used internally by the SUSE Manager team. Let's push it forward!

Let's add support for Fibre Channel disks and multipath.

Goals for Hackweek 25

Modernization, maturity, and maintenance.

Resources

Outcome

Looking for hackers with the skills:

s390 mainframe zvm golang terraform deployment

This project is part of:

Hack Week 23 Hack Week 24 Hack Week 25

Activity

  • 3 days ago: horon liked this project.
  • 16 days ago: mmaslanova liked this project.
  • about 1 year ago: pinvernizzi liked this project.
  • about 2 years ago: juliogonzalezgil liked this project.
  • about 2 years ago: e_bischoff liked this project.
  • about 2 years ago: mfriesenegger liked this project.
  • about 2 years ago: dgedon liked this project.
  • about 2 years ago: mfranc liked this project.
  • about 2 years ago: e_bischoff started this project.
  • about 2 years ago: e_bischoff added keyword "deployment" to this project.
  • about 2 years ago: e_bischoff added keyword "terraform" to this project.
  • about 2 years ago: e_bischoff added keyword "golang" to this project.
  • about 2 years ago: e_bischoff added keyword "zvm" to this project.
  • about 2 years ago: e_bischoff added keyword "s390" to this project.
  • about 2 years ago: e_bischoff added keyword "mainframe" to this project.
  • about 2 years ago: e_bischoff originated this project.

Comments

    • mfriesenegger
      about 2 years ago by mfriesenegger | Reply

      As the Feilong project chair, I like the terraform-feilong-provider project and making libvirt Feilong-aware. I will support your effort!

    • e_bischoff
      about 2 years ago by e_bischoff | Reply

      Thanks for your support, Mike. For the moment, I am still not completely sure whether I will take the terraform approach or the libvirt approach. The first one seems better to me, as for practical purposes it's one software layer less for us. I could even pick up something completely different. But so far the terraform provider approach seems the most promising for the least effort.

    • e_bischoff
      about 2 years ago by e_bischoff | Reply

      OK, decision taken, I will stick to the terraform approach.

      The bad part is that I have to code for 2 versions of terraform as we need to support both plugin protocols 5 and 6.

      The good part is that we get a golang library for Feilong for free (there is a ZVM connector golang project, but it does not provide marshalling and demarshalling).
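
      Just to illustrate the demarshalling part: with proper Go types, reading a Feilong answer boils down to something like this (the field names are only an illustration, not the exact API schema):

      package main

      import (
          "encoding/json"
          "fmt"
      )

      // feilongResult mimics the general shape of Feilong answers;
      // the exact content of "output" depends on the call.
      type feilongResult struct {
          OverallRC int `json:"overallRC"`
          Output    struct {
              NumCPU     int    `json:"num_cpu"`
              PowerState string `json:"power_state"`
          } `json:"output"`
      }

      func main() {
          raw := []byte(`{"overallRC": 0, "output": {"num_cpu": 2, "power_state": "on"}}`)
          var res feilongResult
          if err := json.Unmarshal(raw, &res); err != nil {
              panic(err)
          }
          fmt.Printf("guest has %d CPUs and is powered %s\n", res.Output.NumCPU, res.Output.PowerState)
      }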

    • e_bischoff
      about 2 years ago by e_bischoff | Reply

      We have a working provider and a partial Go library. Mission accomplished, although it would be nice to attract other contributors and fill in the holes.

    • e_bischoff
      almost 2 years ago by e_bischoff | Reply

      The Go library now has 100% coverage.

      I'm not sure anymore that the protocol 5 provider was useful, but I'll keep maintaining it because it's handy for my tests.

      The provider still lacks the R and U parts of CRUD.

    • e_bischoff
      about 1 year ago by e_bischoff | Reply

      In the Feilong source code, some functions are undocumented, mainly around Fibre Channel. This means that the Go library is not as complete as I thought. I will try to add the missing Go methods as well as the documentation in Feilong itself.

    • e_bischoff
      about 1 year ago by e_bischoff | Reply

      I have added all the missing functions to the Go library. I also wrote the upstream API doc for all the missing functions and tried to fix as much of the rest of the upstream API doc as I could. That was around one thousand modified documentation lines.

      Using these new functions, and with the help of Mike, I was able to make both Fibre Channel and multipath work. We still need ad hoc images though, with the multipath-tools package installed and the multipathd service enabled. I'll try to get them either from Mike or from the public cloud team.

    • e_bischoff
      about 1 year ago by e_bischoff | Reply

      Hackweek 24 is coming to an end. I prepared a kind of TODO for Hackweek 25 with the remaining issues.

    • e_bischoff
      1 day ago by e_bischoff | Reply

      Hackweek 25 is starting.

      Goals:

      • in Feilong itself:
        • [ ] add support for SLES 16, and NetworkManager in general
      • in the Go library:
        • [x] add calls for the new functions that appeared in upstream Feilong
        • [ ] move from private repos to Open Mainframe project
      • in the terraform provider:
        • [ ] add support for Fibre Channel disks and multipath
        • [ ] finish the U part of CRUD
        • [x] support OpenTofu
        • [ ] register to OpenTofu providers registry
        • [ ] fix problems with registration on hashicorp providers registry
        • [ ] move from private repos to Open Mainframe project

    Similar Projects

    SUSE Health Check Tools by roseswe

    SUSE HC Tools Overview

    A collection of tools written in Bash or Go 1.24+ to make it easier to handle the bunch of tar.xz balls created by supportconfig.

    Background: for SUSE HCs we receive a bunch of supportconfig tarballs and check them for misconfigurations, areas for improvement, or future changes.

    The main focus of these HCs is High Availability (Pacemaker), SLES itself, and SAP workloads, especially around the SUSE best practices.

    Goals

    • Overall improvement of the tools
    • Adding new collectors
    • Add support for SLES16

    Resources

    csv2xls* example.sh go.mod listprodids.txt sumtext* trails.go README.md csv2xls.go exceltest.go go.sum m.sh* sumtext.go vercheck.py* config.ini csvfiles/ getrpm* listprodids* rpmdate.sh* sumxls* verdriver* credtest.go example.py getrpm.go listprodids.go sccfixer.sh* sumxls.go verdriver.go

    docollall.sh* extracthtml.go gethostnamectl* go.sum numastat.go cpuvul* extractcluster.go firmwarebug* gethostnamectl.go m.sh* numastattest.go cpuvul.go extracthtml* firmwarebug.go go.mod numastat* xtr_cib.sh*

    $ getrpm -r pacemaker
    >> Product ID: 2795 (SUSE Linux Enterprise Server for SAP Applications 15 SP7 x86_64), RPM Name:
    +--------------+----------------------------+--------+--------------+--------------------+
    | Package Name | Version                    | Arch   | Release      | Repository         |
    +--------------+----------------------------+--------+--------------+--------------------+
    | pacemaker    | 2.1.10+20250718.fdf796ebc8 | x86_64 | 150700.3.3.1 | sle-ha/15.7/x86_64 |
    | pacemaker    | 2.1.9+20250410.471584e6a2  | x86_64 | 150700.1.9   | sle-ha/15.7/x86_64 |
    +--------------+----------------------------+--------+--------------+--------------------+
    Total packages found: 2


    Create a Cloud-Native policy engine with notifying capabilities to optimize resource usage by gbazzotti

    Description

    The goal of this project is to begin the initial phase of development of an all-in-one Cloud-Native Policy Engine that notifies resource owners when their resources infringe predetermined policies. This was inspired by a current issue in the CES-SRE Team, where other solutions did not seem to match the needs of the specific workloads running in the Public Cloud Team space.

    The initial architecture can be checked out on the Repository listed under Resources.

    Among the features that will differentiate this project from other monitoring/notification systems:

    • Pre-defined sensible policies written at the software level, avoiding the learning curve of requiring users to write their own policies
    • All-in-one functionality: logging, mailing and all other actions do not require installing any additional plugins/packages
    • Easy account management, with all required configuration parsed from a single JSON file
    • Eliminate integrations by not requiring metrics to go through a data aggregator

    Goals

    • Create a minimal working prototype following the workflow specified in the documentation
    • Provide instructions on installation/usage
    • Work on email notifying capabilities

    Resources


    Mammuthus - The NFS-Ganesha inside Kubernetes controller by vcheng

    Description

    As a user-space NFS provider, NFS-Ganesha is widely used by several projects, e.g. Longhorn and Rook. We want to create a Kubernetes controller to make configuring NFS-Ganesha easy. This controller will let users configure NFS-Ganesha through different backends like VFS or CephFS.

    Goals

    1. Create NFS-Ganesha Package on OBS: nfs-ganesha5, nfs-ganesha6
    2. Create NFS-Ganesha Container Image on OBS: Image
    3. Create a Kubernetes controller for NFS-Ganesha and support the VFS configuration on demand. Mammuthus

    Resources

    NFS-Ganesha


    Rewrite Distrobox in go (POC) by fabriziosestito

    Description

    Rewriting Distrobox in Go.

    Main benefits:

    • Easier to maintain and to test
    • Adapter pattern for different container backends (LXC, systemd-nspawn, etc.); see the sketch after this list
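
    A rough sketch of that adapter idea in Go follows; the interface and the podman backend below are purely illustrative, not the actual Distrobox design:

    package main

    import "fmt"

    // ContainerBackend is the adapter interface: every container engine
    // gets its own implementation behind the same set of operations.
    type ContainerBackend interface {
        Create(name, image string) error
        Enter(name string) error
    }

    // podmanBackend would shell out to podman; stubbed here for brevity.
    type podmanBackend struct{}

    func (podmanBackend) Create(name, image string) error {
        fmt.Printf("podman create --name %s %s\n", name, image)
        return nil
    }

    func (podmanBackend) Enter(name string) error {
        fmt.Printf("podman exec -it %s sh\n", name)
        return nil
    }

    func main() {
        var backend ContainerBackend = podmanBackend{}
        _ = backend.Create("mybox", "registry.opensuse.org/opensuse/tumbleweed")
        _ = backend.Enter("mybox")
    }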

    Goals

    • Build a minimal starting point with core commands
    • Keep the CLI interface compatible: existing users shouldn't notice any difference
    • Use a clean Go architecture with adapters for different container backends
    • Keep dependencies minimal and binary size small
    • Benchmark against the original shell script

    Resources

    • Upstream project: https://github.com/89luca89/distrobox/
    • Distrobox site: https://distrobox.it/
    • ArchWiki: https://wiki.archlinux.org/title/Distrobox


    Play with the userfaultfd(2) system call and download on demand using HTTP Range Requests with Golang by rbranco

    Description

    userfaultfd(2) is a cool system call for handling page faults in user space. This should allow me to list the contents of an ISO or similar archive without downloading the whole thing. The userfaultfd(2) part can also, in theory, be done with the PROT_NONE mprotect + SIGSEGV trick, for complete Unix portability, though it is reportedly slower.
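
    For the HTTP Range Requests part, the basic building block is small; here is a minimal sketch using only the Go standard library (the URL is just a placeholder):

    package main

    import (
        "fmt"
        "io"
        "net/http"
    )

    func main() {
        // Ask the server for the first 2 KiB of the file only.
        req, err := http.NewRequest("GET", "https://example.com/some.iso", nil)
        if err != nil {
            panic(err)
        }
        req.Header.Set("Range", "bytes=0-2047")

        resp, err := http.DefaultClient.Do(req)
        if err != nil {
            panic(err)
        }
        defer resp.Body.Close()

        // 206 Partial Content means the server honoured the range;
        // 200 means it ignored it and sent the whole file instead.
        fmt.Println("status:", resp.Status)
        data, _ := io.ReadAll(resp.Body)
        fmt.Println("received", len(data), "bytes")
    }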

    Goals

    1. Create my own library for userfaultfd(2) in Golang.
    2. Create my own library for HTTP Range Requests.
    3. Complete portability with Unix.
    4. Benchmarks.
    5. Contribute some tests to LTP.

    Resources

    1. https://docs.kernel.org/admin-guide/mm/userfaultfd.html
    2. https://github.com/loopholelabs/userfaultfd-go
    3. https://github.com/DHowett/ranger
    4. https://www.cons.org/cracauer/cracauer-userfaultfd.html


    Testing and adding GNU/Linux distributions on Uyuni by juliogonzalezgil

    Join the Gitter channel! https://gitter.im/uyuni-project/hackweek

    Uyuni is a configuration and infrastructure management tool that saves you time and headaches when you have to manage and update tens, hundreds or even thousands of machines. It also manages configuration, can run audits, build image containers, monitor and much more!

    Currently there are a few distributions that are completely untested on Uyuni or SUSE Manager (AFAIK), or just not tested for a long time, and it could be interesting to know how hard it would be to work with them and, if possible, fix whatever is broken.

    For newcomers, the easiest distributions are those based on DEB or RPM packages. Distributions with other package formats are doable, but will require adapting the Python and Java code to be able to sync and analyze such packages (and if Salt does not support those packages, it will need changes as well). So if you want a distribution with another package format, make sure you are comfortable handling such changes.

    No developer experience? No worries! We had non-developer contributors in the past, and we are ready to help as long as you are willing to learn. If you don't want to code at all, you can also help us prepare the documentation after someone else has the initial code ready, or you could also help with testing :-)

    The idea is testing Salt and Salt-ssh clients, but NOT traditional clients, which are deprecated.

    To consider that a distribution has basic support, we should cover at least (points 3-6 are to be tested for both salt minions and salt ssh minions):

    1. Reposync (this will require using spacewalk-common-channels and adding channels to the .ini file)
    2. Onboarding (salt minion from UI, salt minion from bootstrap script, and salt-ssh minion) (this will probably require adding the OS to the bootstrap repository creator)
    3. Package management (install, remove, update...)
    4. Patching
    5. Applying any basic salt state (including a formula)
    6. Salt remote commands
    7. Bonus point: Java part for product identification, and monitoring enablement
    8. Bonus point: sumaform enablement (https://github.com/uyuni-project/sumaform)
    9. Bonus point: Documentation (https://github.com/uyuni-project/uyuni-docs)
    10. Bonus point: testsuite enablement (https://github.com/uyuni-project/uyuni/tree/master/testsuite)

    If something is breaking: we can try to fix it, but the main idea is to research how well supported it is right now. Beyond that, it's up to each project member how much to hack :-)

    • If you don't have knowledge about some of the steps: ask the team
    • If you still don't know what to do: switch to another distribution and keep testing.

    This card is for EVERYONE, not just developers. Seriously! We had people from other teams helping who were not developers, and they added support for Debian and new SUSE Linux Enterprise and openSUSE Leap versions :-)

    Pending

    Debian 13

    The new version of the beloved Debian GNU/Linux OS

    Seems to be a Debian 12 derivative, so adding it could be quite easy.

    • [ ] Reposync (this will require using spacewalk-common-channels and adding channels to the .ini file)
    • [ ] Onboarding (salt minion from UI, salt minion from bootstrap script, and salt-ssh minion) (this will probably require adding the OS to the bootstrap repository creator)
    • [ ] Package management (install, remove, update...)
    • [ ] Patching (if patch information is available, could require writing some code to parse it, but IIRC we have support for Ubuntu already). Probably not for Debian as IIRC we don't support patches yet.
    • [ ] Applying any basic salt state (including a formula)
    • [ ] Salt remote commands
    • [ ] Bonus point: Java part for product identification, and monitoring enablement
    • [ ] Bonus point: sumaform enablement (https://github.com/uyuni-project/sumaform)
    • [ ] Bonus point: Documentation (https://github.com/uyuni-project/uyuni-docs)


    Rancher/k8s Trouble-Maker by tonyhansen

    Project Description

    When studying for my RHCSA, I found trouble-maker, which is a program that breaks a Linux OS and requires you to fix it. I want to create something similar for Rancher/k8s that can allow for troubleshooting an unknown environment.

    Goals for Hackweek 25

    • Update to modern Rancher and verify that existing tests still work
    • Change testing logic to populate secrets instead of requiring a secondary script
    • Add new tests

    Goals for Hackweek 24 (Complete)

    • Create a basic framework for creating Rancher/k8s cluster lab environments as needed for the Break/Fix
    • Create at least 5 modules that can be applied to the cluster and require troubleshooting

    Resources

    • https://github.com/celidon/rancher-troublemaker
    • https://github.com/rancher/terraform-provider-rancher2
    • https://github.com/rancher/tf-rancher-up
    • https://github.com/rancher/quickstart


    Multimachine on-prem test with opentofu, ansible and Robot Framework by apappas

    Description

    A long time ago I explored using the Robot Framework for testing. A big deficiency compared to our openQA setup is that bringing up a test machine and configuring the connection to it are out of scope.

    Nowadays we have a way¹ to deploy SUTs outside openQA, but we only use it for cloud tests in conjunction with openQA. Using knowledge gained from that project, I am going to try to create a test scenario that replicates an openQA test, but this time including the deployment and setup of the SUT.

    Goals

    Create a simple multimachine test scenario with the support server and SUT all created by the Robot Framework.

    Resources

    1. https://github.com/SUSE/qe-sap-deployment
    2. terraform-libvirt-provider