terraform-provider-libvirt supports CoreOS Ignition files/content, which end up rendered as kernel command line options (the provider does some nice things, like letting you pass the JSON content directly and taking care of putting it into a temporary file).
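
For reference, a minimal sketch of how that existing Ignition support is typically wired up, assuming the libvirt_ignition resource and the coreos_ignition domain attribute as documented by the provider (treat the exact schema, names and paths as assumptions):

    # Upload the Ignition config to a storage pool and hand it to the domain.
    resource "libvirt_ignition" "bootstrap" {
      name    = "bootstrap.ign"
      content = file("${path.module}/bootstrap.ign")   # JSON content can also be passed inline
    }

    resource "libvirt_domain" "coreos" {
      name            = "coreos-node"
      memory          = 1024
      coreos_ignition = libvirt_ignition.bootstrap.id
    }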

The idea is to:

  • Implement a generic CmdLine option that takes a map (key/value) of kernel options.
  • Implement AutoYaST/Kickstart options on top of the generic CmdLine option, with convenience features such as inlining the profile content (a rough sketch follows the issue link below).

https://github.com/dmacvicar/terraform-provider-libvirt/issues/218
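
A rough sketch of how the envisioned interface could look from a main.tf. The cmdline map follows the issue description; the autoyast attribute, its inlining behaviour and all names and URLs are hypothetical illustrations of the convenience layer, not an existing provider API:

    resource "libvirt_domain" "sles_install" {
      name   = "sles-autoinstall"
      memory = 1024

      # generic option: arbitrary kernel command line parameters as key/value pairs
      cmdline = {
        console = "ttyS0"
        install = "http://download.example.com/SLES/"
      }

      # hypothetical convenience option built on top of the generic cmdline: the
      # provider would upload the profile somewhere reachable and add autoyast=
      # to the kernel command line
      autoyast = file("${path.module}/autoinst.xml")
    }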

Journal

  • Fri 10.11.2017
    • Warm up. Fix local integration tests on openSUSE
    • Researching the approach:
      • can we share code between cloud-init, ignition, autoyast, kickstart?
      • how to model the common parts? how to abstract the differences?
      • are provisioner plugins an option? can they inject data into a resource, or do they just run stuff later?
  • Mon 13.11.2017
    • Wrote some code for an "install" resource, similar to cloud-init and ignition. The idea would be to merge them somehow later.
    • Figured out that I need to upload 3 artifacts to the storage pool: initrd, kernel, profile
    • What ID to use? I need to be able to recover all volumes later. What about using indirection, e.g. one volume with metadata pointing to the volumes with randomly generated names/IDs?
  • Tue 14.11.2017
    • Looking at linuxrc code
    • Talked to Steffen. He had already implemented https://hackweek.suse.com/16/projects/implement-qemu-firmware-config-device-support-in-linuxrc-slash-autoyast! Implemented the missing part in the linuxrc code.
  • Wed 15.11.2017
    • False start trying to implement uploading of boot artifacts (kernel, initrd) by storing metadata in libvirt secrets. They turned out to be limited in size. Bah! That is what you get when you abuse APIs. Deserved.
  • Thu 16.11.2017
    • Re-evaluating whether it is worth implementing all of the virt-install functionality of allowing an install from a URL by downloading the kernel/initrd and uploading them to a volume. An alternative is to use the QEMU HTTP backend to boot directly from an ISO.
    • Implemented support for remote network disks using QEMU's native HTTP backend
    • Started implementing kernel, initrd and cmdline support. That, combined with volumes, should be enough for a custom boot (see the sketch after the journal). Pull Request
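
Putting the last journal entries together, here is a hedged sketch of what a custom boot could look like once kernel/initrd/cmdline support and HTTP-backed disks are in place. Attribute names follow the journal and the provider documentation, but the exact schema, paths and URLs are assumptions:

    # Kernel and initrd are uploaded to the storage pool as volumes; the source
    # URLs below are placeholders.
    resource "libvirt_volume" "kernel" {
      name   = "sles-kernel"
      pool   = "default"
      source = "http://download.example.com/SLES/boot/x86_64/loader/linux"
    }

    resource "libvirt_volume" "initrd" {
      name   = "sles-initrd"
      pool   = "default"
      source = "http://download.example.com/SLES/boot/x86_64/loader/initrd"
    }

    resource "libvirt_domain" "installer" {
      name   = "sles-installer"
      memory = 2048

      kernel = libvirt_volume.kernel.id
      initrd = libvirt_volume.initrd.id

      # kernel command line parameters (a list of maps, so keys may repeat)
      cmdline = [
        {
          install  = "http://download.example.com/SLES/"
          autoyast = "http://www.example.com/autoinst.xml"
          console  = "ttyS0"
        }
      ]

      # remote installation media attached via QEMU's native HTTP backend,
      # without downloading it into a volume first
      disk {
        url = "http://download.example.com/SLES/images/install.iso"
      }
    }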

Looking for hackers with the skills:

terraform libvirt autoyast

This project is part of:

Hack Week 15 Hack Week 16

Activity

  • almost 8 years ago: a_z left this project.
  • almost 8 years ago: hsehic joined this project.
  • almost 8 years ago: hsehic liked this project.
  • almost 8 years ago: dmaiocchi liked this project.
  • almost 8 years ago: dmacvicar joined this project.
  • almost 8 years ago: a_z started this project.
  • almost 8 years ago: moio liked this project.
  • over 8 years ago: dmacvicar added keyword "autoyast" to this project.
  • over 8 years ago: dmacvicar added keyword "terraform" to this project.
  • over 8 years ago: dmacvicar added keyword "libvirt" to this project.
  • over 8 years ago: dmacvicar originated this project.


    Similar Projects

    terraform-provider-feilong by e_bischoff

    Project Description

    People need to test operating systems and applications on the s390 platform.

    Installation from scratch solutions include:

    • just deploy and provision manually (with the help of the ftpboot script, if you are at SUSE)
    • use s3270 terminal emulation (used by openQA people?)
    • use LXC from IBM to start CP commands and analyze the results
    • use zPXE to do some PXE-alike booting (used by the orthos team?)
    • use tessia to install from scratch using autoyast
    • use libvirt for s390 to do some nested virtualization on some already deployed z/VM system
    • directly install a Linux kernel on an LPAR and use kvm + libvirt from there

    Deployment from image solutions include:

    • use the ICIC web interface (OpenStack in disguise, contributed by IBM)
    • use ICIC from the OpenStack terraform provider (used by Rancher QA)
    • use zvm_ansible to control SMAPI
    • connect directly to SMAPI low-level socket interface

    IBM Cloud Infrastructure Center (ICIC) harnesses the Feilong API, but you can use Feilong without installing ICIC, provided you set up a "z/VM cloud connector" in one of your VMs following this schema.

    What about writing a terraform Feilong provider, just like we have the terraform libvirt provider? That would allow you to transparently call Feilong from your main.tf files to deploy and destroy resources on your System z.
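
    A hedged sketch of what that could look like from a main.tf; the provider source address, resource name and attributes below are invented for illustration, not the actual terraform-provider-feilong schema:

        terraform {
          required_providers {
            feilong = {
              source = "bischoff/feilong"   # assumed registry address
            }
          }
        }

        provider "feilong" {
          connector = "https://zvm-cloud-connector.example.com"   # hypothetical connection option
        }

        # hypothetical resource describing a z/VM guest deployed through Feilong
        resource "feilong_guest" "test" {
          name   = "s390test"
          vcpus  = 2
          memory = "4G"
          disk   = "10G"
          image  = "sles15-s390x"   # an image previously captured on the z/VM side
        }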

    Other Feilong-based solutions include:

    • make libvirt Feilong-aware
    • simply call Feilong from shell scripts with curl
    • use zvmconnector client python library from Feilong
    • use the zthin part of Feilong to directly command SMAPI.

    Goal for Hackweek 23

    My final goal is to be able to easily deploy and provision VMs automatically on a z/VM system, in a way that people might enjoy even outside of SUSE.

    My technical preference is to write a terraform provider plugin, as it is the approach that involves the fewest software components for our deployments, while remaining clean and compatible with our existing development infrastructure.

    Goals for Hackweek 24

    The Feilong provider works and is used internally by the SUSE Manager team. Let's push it forward!

    Let's add support for Fibre Channel disks and multipath.

    Possible goals for Hackweek 25

    Modernization, maturity, and maintenance.