Make the Rancher and NeuVector AWS QuickStart persistent across shutdowns.

Specifically, update the AWS portion of the QuickStart to use Amazon Elastic IP addresses, making the stack persistent across shutdowns and startups, which saves budget when the stack is not in use. While Terraform is designed to build and tear down, we sometimes add customizations post-build that we want to keep for the next demo, PoC, or development experiment. Not losing the public IP assigned to the cluster API (and similar endpoints) would allow persistence across shutdowns.
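
To make this concrete, a minimal Terraform sketch of the core change could look like the following; the resource names are illustrative, not the QuickStart's actual ones:

```hcl
# Illustrative sketch: allocate an Elastic IP and associate it with the
# Rancher server instance so its public address survives stop/start cycles.
# "aws_instance.rancher_server" is a placeholder for the QuickStart's resource.
resource "aws_eip" "rancher_server" {
  domain = "vpc"
}

resource "aws_eip_association" "rancher_server" {
  instance_id   = aws_instance.rancher_server.id
  allocation_id = aws_eip.rancher_server.id
}
```

Since the EIP is a standalone resource, stopping and restarting the instance no longer churns the address that kubeconfigs and the Rancher URL reference.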

Additionally, as stretch goals, we could include code to create and use persistent storage for Rancher and NeuVector, deploy Longhorn, launch a downstream EKS or EKS Anywhere test cluster, or experiment with zero-node managed node groups or Spot-based node groups.
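
For the persistent-storage stretch goal, one possible shape (again with hypothetical names) is an EBS data volume that Terraform refuses to delete on teardown:

```hcl
# Hypothetical sketch: a data volume for Rancher/NeuVector state; prevent_destroy
# makes Terraform error out instead of deleting it during a teardown.
resource "aws_ebs_volume" "rancher_data" {
  availability_zone = aws_instance.rancher_server.availability_zone
  size              = 50 # GiB, sized arbitrarily for this sketch
  type              = "gp3"

  lifecycle {
    prevent_destroy = true
  }
}

resource "aws_volume_attachment" "rancher_data" {
  device_name = "/dev/xvdf"
  volume_id   = aws_ebs_volume.rancher_data.id
  instance_id = aws_instance.rancher_server.id
}
```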

https://github.com/rancher/quickstart

https://github.com/rancher/quickstart/issues/223

Seeking: anyone with Terraform or AWS CloudFormation familiarity.

Keywords: Rancher, NeuVector, AWS, Terraform, CloudFormation, Demo, PoC

Looking for hackers with the skills:

terraform cloudformation

This project is part of:

Hack Week 23

Activity

  • about 1 year ago: pschinagl started this project.
  • about 1 year ago: kevinmayres added keyword "terraform" to this project.
  • about 1 year ago: kevinmayres added keyword "cloudformation" to this project.
  • about 1 year ago: kevinmayres liked this project.
  • about 1 year ago: kevinmayres originated this project.


    Similar Projects

    terraform-provider-feilong by e_bischoff

    Project Description

    People need to test operating systems and applications on s390 platform.

    Installation from scratch solutions include:

    • just deploy and provision manually (with the help of the ftpboot script, if you are at SUSE)
    • use s3270 terminal emulation (used by openQA people?)
    • use LXC from IBM to start CP commands and analyze the results
    • use zPXE to do some PXE-like booting (used by the orthos team?)
    • use tessia to install from scratch using autoyast
    • use libvirt for s390 to do some nested virtualization on some already deployed z/VM system
    • directly install a Linux kernel on a LPAR and use kvm + libvirt from there

    Deployment from image solutions include:

    • use ICIC web interface (openstack in disguise, contributed by IBM)
    • use ICIC from the openstack terraform provider (used by Rancher QA)
    • use zvm_ansible to control SMAPI
    • connect directly to SMAPI low-level socket interface

    IBM Cloud Infrastructure Center (ICIC) harnesses the Feilong API, but you can use Feilong without installing ICIC, provided you set up a "z/VM cloud connector" in one of your VMs following this schema.

    What about writing a terraform Feilong provider, just like we have the terraform libvirt provider? That would allow transparently calling Feilong from your main.tf files to deploy and destroy resources on your system/z.
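
    To make the idea concrete, usage from main.tf might look something like this; the schema below is purely hypothetical, since designing the actual resources is the point of the project:

    ```hcl
    # Purely hypothetical sketch of how such a provider could be consumed.
    terraform {
      required_providers {
        feilong = {
          source = "example/feilong" # placeholder registry address
        }
      }
    }

    provider "feilong" {
      # URL of the z/VM cloud connector set up in one of your VMs (assumed option)
      connector = "https://zvm-connector.example.com"
    }

    resource "feilong_guest" "test_vm" {
      name   = "testvm1"
      vcpus  = 2
      memory = "2G"
      image  = "sle15sp5" # an image previously captured on the z/VM side
    }
    ```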

    Other Feilong-based solutions include:

    • make libvirt Feilong-aware
    • simply call Feilong from shell scripts with curl
    • use zvmconnector client python library from Feilong
    • use the zthin part of Feilong to directly command SMAPI.

    Goal for Hackweek 23

    My final goal is to be able to easily deploy and provision VMs automatically on a z/VM system, in a way that people might enjoy even outside of SUSE.

    My technical preference is to write a terraform provider plugin, as it is the approach that involves the least software components for our deployments, while remaining clean, and compatible with our existing development infrastructure.

    Goals for Hackweek 24

    Feilong provider works and is used internally by SUSE Manager team. Let's push it forward!

    • Fix problems with registration on hashicorp providers registry
    • Add support for fiberchannel disks
    • Finish the U part of CRUD
    • Move from private repos to Open Mainframe project
    • Anything else as needed


    Testing and adding GNU/Linux distributions on Uyuni by juliogonzalezgil

    Join the Gitter channel! https://gitter.im/uyuni-project/hackweek

    Uyuni is a configuration and infrastructure management tool that saves you time and headaches when you have to manage and update tens, hundreds or even thousands of machines. It also manages configuration, can run audits, build image containers, monitor and much more!

    Currently there are a few distributions that are completely untested on Uyuni or SUSE Manager (AFAIK), or not tested for a long time. It could be interesting to know how hard working with them would be and, if possible, to fix whatever is broken.

    For newcomers, the easiest distributions are those based on DEB or RPM packages. Distributions with other package formats are doable, but will require adapting the Python and Java code to be able to sync and analyze such packages (and if salt does not support those packages, it will need changes as well). So if you want a distribution with other packages, make sure you are comfortable handling such changes.

    No developer experience? No worries! We had non-developer contributors in the past, and we are ready to help as long as you are willing to learn. If you don't want to code at all, you can also help us prepare the documentation after someone else has the initial code ready, or you could help with testing :-)

    The idea is testing Salt and Salt-ssh clients, but NOT traditional clients, which are deprecated.

    To consider that a distribution has basic support, we should cover at least the following (points 3-6 are to be tested for both salt minions and salt-ssh minions):

    1. Reposync (this will require using spacewalk-common-channels and adding channels to the .ini file)
    2. Onboarding (salt minion from UI, salt minion from bootstrap script, and salt-ssh minion) (this will probably require adding the OS to the bootstrap repository creator)
    3. Package management (install, remove, update...)
    4. Patching
    5. Applying any basic salt state (including a formula)
    6. Salt remote commands
    7. Bonus point: Java part for product identification, and monitoring enablement
    8. Bonus point: sumaform enablement (https://github.com/uyuni-project/sumaform), sketched after this list
    9. Bonus point: Documentation (https://github.com/uyuni-project/uyuni-docs)
    10. Bonus point: testsuite enablement (https://github.com/uyuni-project/uyuni/tree/master/testsuite)
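
    For the sumaform bonus point, a rough sketch based on sumaform's documented module layout might look like this; the image name and product version here are assumptions and vary between sumaform versions:

    ```hcl
    # Rough sumaform sketch; module variables differ between sumaform versions.
    module "base" {
      source = "./modules/base"

      cc_username = "user"   # mirror credentials, if needed
      cc_password = "secret"
    }

    module "server" {
      source             = "./modules/server"
      base_configuration = module.base.configuration

      name            = "server"
      product_version = "uyuni-released"
    }

    module "minion" {
      source               = "./modules/minion"
      base_configuration   = module.base.configuration
      server_configuration = module.server.configuration

      name  = "new-distro-minion"
      image = "debian12" # hypothetical image name for the distribution under test
    }
    ```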

    If something is breaking, we can try to fix it, but the main idea is to research how well supported it is right now. Beyond that it's up to each project member how much to hack :-)

    • If you don't have knowledge about some of the steps: ask the team
    • If you still don't know what to do: switch to another distribution and keep testing.

    This card is for EVERYONE, not just developers. Seriously! We had people from other teams who were not developers helping, and they added support for Debian and new SUSE Linux Enterprise and openSUSE Leap versions :-)

    Pending

    FUSS

    FUSS is a complete GNU/Linux solution (server, client and desktop/standalone) based on Debian for managing an educational network.

    https://fuss.bz.it/

    Seems to be a Debian 12 derivative, so adding it could be quite easy.

    • [ ] Reposync (this will require using spacewalk-common-channels and adding channels to the .ini file)
    • [ ] Onboarding (salt minion from UI, salt minion from bootstrap script, and salt-ssh minion) (this will probably require adding the OS to the bootstrap repository creator)
    • [ ] Package management (install, remove, update...)
    • [ ] Patching (if patch information is available, could require writing some code to parse it, but IIRC we have support for Ubuntu already)
    • [ ] Applying any basic salt state (including a formula)
    • [ ] Salt remote commands
    • [ ] Bonus point: Java part for product identification, and monitoring enablement


    Contribute to terraform-provider-libvirt by pinvernizzi

    Description

    The SUSE Manager (SUMA) teams' main tool for infrastructure automation, Sumaform, largely relies on terraform-provider-libvirt. That provider is also widely used by other teams, both inside and outside SUSE.

    It would be good to help the maintainers of this project and give back to the community around it, after all the amazing work that has been already done.

    If you're interested in any of infrastructure automation, Terraform, virtualization, tooling development, or Go (...), it is also a good chance to learn a bit about them all by putting your hands on an interesting, complex, real-use-case project.
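
    As a taste of what the provider does, a minimal configuration declares volumes and domains directly in Terraform (the image URL below is a placeholder):

    ```hcl
    # Minimal terraform-provider-libvirt example; the image URL is a placeholder.
    terraform {
      required_providers {
        libvirt = {
          source = "dmacvicar/libvirt"
        }
      }
    }

    provider "libvirt" {
      uri = "qemu:///system"
    }

    resource "libvirt_volume" "disk" {
      name   = "test-vm.qcow2"
      pool   = "default"
      source = "https://example.com/images/leap.qcow2"
    }

    resource "libvirt_domain" "vm" {
      name   = "test-vm"
      memory = 2048 # MiB
      vcpu   = 2

      disk {
        volume_id = libvirt_volume.disk.id
      }
    }
    ```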

    Goals

    • Get more familiar with Terraform provider development and libvirt bindings in Go
    • Solve some issues and/or implement some features
    • Get in touch with the community around the project

    Resources


    SUSE AI Meets the Game Board by moio

    Use tabletopgames.ai’s open source TAG and PyTAG frameworks to apply Statistical Forward Planning and Deep Reinforcement Learning to two board games of our own design. On an all-green, all-open source, all-AWS stack!
    [Image: A chameleon playing chess in a train car, as a metaphor of SUSE AI applied to games]


    AI + Board Games

    Board games have long been fertile ground for AI innovation, pushing the boundaries of capabilities such as strategy, adaptability, and real-time decision-making - from Deep Blue's chess mastery to AlphaZero’s domination of Go. Games aren’t just fun: they’re complex, dynamic problems that often mirror real-world challenges, making them interesting from an engineering perspective.

    As avid board gamers, aspiring board game designers, and engineers with careers in open source infrastructure, we’re excited to dive into the latest AI techniques first-hand.

    Our goal is to develop an all-open-source, all-green AWS-based stack powered by some serious hardware to drive our board game experiments forward!


    Project Goals

    1. Set Up the Stack:

      • Install and configure the TAG and PyTAG frameworks on SUSE Linux Enterprise Base Container Images.
      • Integrate with the SUSE AI stack for GPU-accelerated training on AWS.
      • Validate a sample GPU-accelerated PyTAG workload on SUSE AI.
      • Ensure the setup is entirely repeatable with Terraform and configuration scripts, documenting results along the way (see the sketch after this list).
    2. Design and Implement AI Agents:

      • Develop AI agents for the two board games, incorporating Statistical Forward Planning and Deep Reinforcement Learning techniques.
      • Fine-tune model parameters to optimize game-playing performance.
      • Document the advantages and limitations of each technique.
    3. Test, Analyze, and Refine:

      • Conduct AI vs. AI and AI vs. human matches to evaluate agent strategies and performance.
      • Record insights, document learning outcomes, and refine models based on real-world gameplay.
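
    As an illustration of the repeatability goal, the GPU side of the stack could start from a sketch like this; the AMI ID, instance type, and sizes are our assumptions, not the project's actual configuration:

    ```hcl
    # Hypothetical sketch of a repeatable GPU training instance on AWS.
    resource "aws_instance" "pytag_trainer" {
      ami           = "ami-0123456789abcdef0" # placeholder AMI ID
      instance_type = "g5.xlarge"             # one NVIDIA A10G GPU

      root_block_device {
        volume_size = 200 # GiB: room for TAG/PyTAG, models, and game logs
      }

      tags = {
        Name = "suse-ai-board-games"
      }
    }
    ```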

    Technical Stack

    • Frameworks: TAG and PyTAG for AI agent development
    • Platform: SUSE AI
    • Tools: AWS for high-performance GPU acceleration

    Why This Project Matters

    This project not only deepens our understanding of AI techniques by doing but also showcases the power and flexibility of SUSE’s open-source infrastructure for supporting high-level AI projects. By building on an all-open-source stack, we aim to create a pathway for other developers and AI enthusiasts to explore, experiment, and deploy their own innovative projects within the open-source space.


    Our Motivation

    We believe hands-on experimentation is the best teacher.

    Combining our engineering backgrounds with our passion for board games, we’ll explore AI in a way that’s both challenging and creatively rewarding. Our ultimate goal? To hack an AI agent that’s as strategic and adaptable as a real human opponent (if not better!) — and to leverage it to design even better games... for humans to play!


    Rancher/k8s Trouble-Maker by tonyhansen

    Project Description

    When studying for my RHCSA, I found trouble-maker, which is a program that breaks a Linux OS and requires you to fix it. I want to create something similar for Rancher/k8s that can allow for troubleshooting an unknown environment.

    Goal for this Hackweek

    Create a basic framework for creating Rancher/k8s cluster lab environments as needed for the break/fix exercises. Create at least 5 modules that can be applied to the cluster and require troubleshooting.
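
    As a sketch of how the lab framework could provision disposable clusters (assuming the rancher2 provider; the URL, token variable, and version string below are placeholders):

    ```hcl
    # Sketch: a disposable custom cluster to break and fix, via rancher2.
    terraform {
      required_providers {
        rancher2 = {
          source = "rancher/rancher2"
        }
      }
    }

    provider "rancher2" {
      api_url   = "https://rancher.example.com" # placeholder Rancher URL
      token_key = var.rancher_token
    }

    resource "rancher2_cluster" "lab" {
      name        = "troublemaker-lab"
      description = "Disposable cluster for break/fix modules"

      rke_config {
        kubernetes_version = "v1.26.8-rancher1-1" # example version string
      }
    }
    ```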

    Resources

    • https://github.com/rancher/terraform-provider-rancher2
    • https://github.com/rancher/tf-rancher-up