Project description

The Verifree project is about a GPG key server. The goal is to build a key server where users are able to verify GPG keys, and where admins are also able to delete invalid keys. Verifree servers will be internal servers only and will NOT connect to the public GPG infrastructure.

The reason for the "private" GPG key servers is that, at the moment, if we want to use a GPG key we have to trust external GPG key servers, and we have no control over the data there.

The GPG keys on the server will be for employees only, and we as admins will be able to control them.

We should have more than one server in the infrastructure and regularly sync data between them.

The long-term goal is to turn this into a security add-on/application and be able to offer it to SUSE customers so they can run their own GPG key servers.

Goal for this Hackweek

Build a basic application running on SLE/openSUSE:

  • be able to install it via an RPM
  • configure the app and have basic functions there
  • have it salted to be able to run other servers on demand
  • set up a basic workflow for key management and verification

Resources

  • As inspiration I found this: https://keys.openpgp.org/about
  • As software I would like to try the GNU GPL project Hagrid: https://gitlab.com/hagrid-keyserver/hagrid

Any help from others is welcome.

Looking for hackers with the skills:

rust gpg security cryptography

This project is part of:

Hack Week 21

Activity

  • over 2 years ago: fbonazzi liked this project.
  • over 2 years ago: dgedon liked this project.
  • over 2 years ago: jzerebecki liked this project.
  • over 2 years ago: jzerebecki added keyword "cryptography" to this project.
  • over 2 years ago: jzerebecki added keyword "gpg" to this project.
  • over 2 years ago: jzerebecki added keyword "security" to this project.
  • over 2 years ago: jzerebecki added keyword "rust" to this project.
  • over 2 years ago: jzerebecki joined this project.
  • over 2 years ago: crameleon joined this project.
  • over 2 years ago: crameleon liked this project.
  • over 2 years ago: ConradBachman joined this project.
  • over 2 years ago: moroni_flores started this project.
  • over 2 years ago: mcaj originated this project.

  • Comments

    • crameleon
      over 2 years ago by crameleon | Reply

      It would be cool if it still had an option to verify keys or signatures against a public, external, GPG server to catch irregularities - the benefit of the federated key servers on the internet is that if one gets compromised, one could get suspicious if they find a different key for a person on a different keyserver. Since we are only going to run one single keyserver internally, if the infrastructure gets compromised, there is no other party to ask for trust.

      • jzerebecki
        over 2 years ago by jzerebecki | Reply

        There are dumps available, at least one of these sources seems to be current: https://github.com/SKS-Keyserver/sks-keyserver/wiki/KeydumpSources

      • mkoutny
        over 2 years ago by mkoutny | Reply

        Ideally, a GPG keyserver should not be the source of trust. You have a web of trust between individual keys and use the keyserver as an unreliable medium. You verify the keys locally with a (sub)set of keys, and you don't care whether the server is compromised or not. (You may only be affected by its unavailability.)

        Also, in my understanding, the internal keyserver is not supposed to distribute arbitrary keys, just those associated with SUSE accounts.

        • jzerebecki
          over 2 years ago by jzerebecki | Reply

          Yes, GPG/OpenPGP currently has no mechanism to ensure one is up to date on revocations. So it is vulnerable to keyservers being made to withhold some revocations.

          In theory it is possible to fix this, but I know of no practical implementation.

          Even if you do not solve this, it is a good idea to ingest the updates from the keyserver gossip. If you do not, you refuse information you could verify, which is even worse.

    • jzerebecki
      over 2 years ago by jzerebecki | Reply

      Note that hagrid is not GDPR compliant, and at the same time removes important information (e.g. signatures and revocations used for the Web of Trust) in a failed attempt at GDPR compliance: https://gitlab.com/hagrid-keyserver/hagrid/-/issues/151

      I can think of two currently working ways to exchange keys while avoiding problems with GDPR:

      1) Exchange keys by exporting, encrypting, and ASCII-armoring them and then attaching them to a mail; exchange certification requests and confirmations by saving them as a text file, signing, encrypting, and ASCII-armoring it, and then attaching it to a mail. This avoids many of the things people stumbled over, which were common even before the DoS vulnerability of keyservers became more widely known.

      2) Maintain the keys involved in a project in a git repository. Have the git repo contain a privacy policy that explains the consequences. To submit their key, people create a commit that is signed by the same key it adds in ASCII-armored form. This means the privacy policy is in the history of the commit they signed, so they could have read it. When someone asks for their info to be deleted, delete the repo and start a new history without this key. This has the advantage that everyone who already had a copy of the repo notices on the next git pull and can easily understand which key was deleted. Without this, you could end up never receiving updates, such as a revocation for a key, without knowing it. (kernel.org uses a similar repo, explained at https://lore.kernel.org/lkml/20190830143027.cffqda2vzggrtiko@chatter.i7.local/ .) You still need to use (1) for requesting and confirming certifications.
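      A minimal sketch of step (1), assuming gpg is installed and using placeholder key IDs and file names; the real workflow is just the equivalent gpg invocations done by hand:

      ```python
      # Export a public key in ASCII-armored form and encrypt it for the recipient,
      # so the result can be attached to a mail. Key IDs and the output file name
      # are placeholders for illustration.
      import subprocess

      def export_and_encrypt(own_key_id: str, recipient: str, outfile: str = "key.asc.gpg") -> None:
          # ASCII-armored export of the public key.
          armored = subprocess.run(
              ["gpg", "--armor", "--export", own_key_id],
              check=True, capture_output=True, text=True,
          ).stdout
          # Encrypt the armored export for the recipient, again ASCII-armored.
          subprocess.run(
              ["gpg", "--armor", "--encrypt", "--recipient", recipient, "--output", outfile],
              input=armored, check=True, text=True,
          )

      # export_and_encrypt("alice@example.com", "bob@example.com")
      ```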

      One could use CI to make additions and updates to a git repo like in (2) self-service: you create a pull request and, once CI passes, it gets merged automatically. This CI job would allow adding a key that is trusted by existing ones, with the commit signed by the key being added; for anything else it would require that the commit signature is valid and from a key already in the repo; it would allow an update when importing the old key into the new one doesn't add anything, allow editing docs, check that generated data still matches if it was touched, and deny anything else.

      The GPG reimplementation that hagrid uses as a library is sequoia-openpgp (https://docs.sequoia-pgp.org/sequoia_openpgp/), which is recommended.

    Similar Projects

    Agama installer on-line demo by lslezak

    Description

    The Agama installer provides a quite complex user interface. We have some screenshots on the web page, but as it is basically a web application, it would be nice to have an on-line demo where users could click around and check it live.

    The problem is that the Agama server directly accesses the hardware (storage probing) and loads installation repositories. We cannot easily mock this in the on-line demo, so the easiest way is to have just a read-only demo: you could explore the configuration options but not change anything; all changes would be ignored.

    The read-only demo would be a bit limited, but I still think it would be useful for potential users to get a feeling for the new Agama installer and become familiar with it before using it in a real installation.

    As a proof of concept I already created this on-line demo.

    The implementation basically builds Agama in two modes: a recording mode, where it saves all REST API responses, and a replay mode, where it returns the previously recorded responses for the REST API requests. Recording in the browser is inconvenient and error-prone; there should be some scripting instead (see below).
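    A rough sketch of what such a recording script could look like, assuming a locally running Agama instance; the base URL and the endpoint paths are placeholders, as the real list would be generated from the OpenAPI specification (see the TODO below):

    ```python
    # Walk a list of Agama REST API endpoints and dump the responses to a JSON
    # file that the replay mode can later serve verbatim.
    import json
    import requests

    BASE_URL = "http://localhost:3000/api"          # assumed local Agama instance
    ENDPOINTS = ["/software/config", "/storage/config", "/l10n/config"]  # placeholders

    def record(endpoints, outfile="mock-responses.json"):
        responses = {}
        for path in endpoints:
            resp = requests.get(BASE_URL + path, timeout=10)
            # Store status and body so the replay mode can serve them back as-is.
            responses[path] = {"status": resp.status_code, "body": resp.json()}
        with open(outfile, "w") as fh:
            json.dump(responses, fh, indent=2)

    if __name__ == "__main__":
        record(ENDPOINTS)
    ```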

    Goals

    • Create an Agama on-line demo which can be easily tested by users
    • The Agama installer is still in the alpha phase and under active development; the online demo needs to be easily rebuilt with the latest Agama version
    • Ideally there should be some automation so the demo page is rebuilt automatically without any developer interactions (once a day or week?)

    TODO

    • Use OpenAPI to get all Agama REST API endpoints, write a script which queries all the endpoints automatically and saves the collected data to a file (see this related PR).
    • Write a script for starting an Agama VM (use libvirt/qemu?); the script should ensure we always use the same virtual HW, so that if we need to dump the latest REST API state we get the same (or very similar) data. This should ensure the demo page does not change much regarding the storage proposal etc.
    • Fix changing the product, currently it gets stuck after clicking the "Select" button.
    • Move the mocking data (the recorded REST API responses) outside the Agama sources; it is too big and will probably be updated often. To avoid cluttering the history, keep it in a separate GitHub repository
    • Allow changing the UI language
    • Display some note (watermark) in the page so it is clear it is a read-only demo (probably with some version or build date to know how old it is)
    • Automation for building new demo page from the latest sources. There should be some check which ensures the recorded data still matches the OpenAPI specification.

    Changing the UI language

    This will be quite tricky because selecting the proper translation file is done on the server side. We would probably need to completely re-implement that logic on the browser side and adapt the server for it.

    Also, some REST API responses contain translated texts (storage proposal, pattern names in software). We would need to query the respective endpoints in all supported languages and return the correct response at runtime according to the currently selected language.
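    A sketch of per-language recording, under the assumption that the demo keeps one recorded response set per UI language and selects it at replay time; the endpoint paths, language list and the use of an Accept-Language header are illustrative assumptions:

    ```python
    # Record localized REST API responses once per language, keyed by language and
    # path, so the replay code can pick the right one for the selected UI language.
    import json
    import requests

    BASE_URL = "http://localhost:3000/api"                      # assumed local Agama instance
    LANGUAGES = ["en-US", "de-DE", "cs-CZ"]                     # example subset
    LOCALIZED_ENDPOINTS = ["/storage/proposal", "/software/patterns"]  # placeholders

    recorded = {}
    for lang in LANGUAGES:
        for path in LOCALIZED_ENDPOINTS:
            resp = requests.get(BASE_URL + path, headers={"Accept-Language": lang}, timeout=10)
            recorded[f"{lang} {path}"] = resp.json()

    with open("mock-localized.json", "w") as fh:
        json.dump(recorded, fh, indent=2)

    # At replay time the demo would look up recorded[f"{selected_lang} {path}"]
    # instead of calling the server.
    ```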

    Resources


    Kanidm: A safe and modern IDM system by firstyear

    Kanidm is an IDM system written in Rust for modern systems authentication. The GitHub repo has a detailed "getting started" guide in the readme.

    Kanidm Github

    In addition, Kanidm has spawned a number of adjacent projects in the Rust ecosystem, such as LDAP, Kerberos, Webauthn, and cryptography libraries.

    In this hack week, we'll be working on Quokca, a certificate authority that supports PKCS11/TPM storage of keys, issuance of PIV certificates, and ACME, without the feature gatekeeping implemented by other CAs like smallstep.

    For anyone who wants to participate in Kanidm, we have documentation and developer guides which can help.

    I'm happy to help and share more, so please get in touch!


    SMB3 Server written entirely in Rust by dmulder

    Description

    Given the number of bugs frequently discovered in the Samba code caused by memory issues, it makes sense to rewrite the smbd service purely in Rust. At the same time, it would be wise to abandon backwards compatibility with insecure protocol versions and simply implement the SMB3 spec.

    Goals

    Get a simple server up and running and get it merged into upstream Samba (which now has Rust build support).

    Resources


    Hacking on sched_ext by flonnegren

    Description

    Sched_ext upstream has some interesting issues open for grabs:

    Goals

    Send patches to sched_ext upstream

    Also set up perfetto to trace some of the example schedulers.

    Resources

    https://github.com/sched-ext/scx


    Hack on isotest-ng - a rust port of isotovideo (os-autoinst aka testrunner of openQA) by szarate

    Description

    Some time ago, I managed to convince ByteOtter to hack something that resembles isotovideo, but in Rust; not because I believe that Perl is dead, but because there are certain limitations in the Perl code (how it was written), and it's always hard to add new functionality when it is about implementing a new backend or fixing bugs (along with people complaining that Perl is dead and that they don't like it).

    In reality, I wanted to see if this could be done, and ByteOtter proved that it could be, while doing an amazing job at hacking a vnc console, and helping me understand better what RuPerl needs to work.

    I plan to keep working on this for the next few years, and while I don't aim for feature completion or for replacing isotovideo with isotest-ng (name in progress), I do plan to be able to use it on a daily basis, using specialized tooling with interfaces instead of reimplementing everything in the backend.

    Todo

    • Add make targets for testability, e.g. "spawn qemu and type"
    • Add image search matching algorithm
    • Add a Null test distribution provider
    • Add a Perl Test Distribution Provider
    • Fix unittests https://github.com/os-autoinst/isotest-ng/issues/5
    • Research OpenTofu: how to add new hypervisors/bare metal to OpenTofu
    • Add an interface to openQA cli

    Goals

    • Implement at least one of the above, prepare proposals for GSoC
    • Boot a system via its BMC

    Resources

    See https://github.com/os-autoinst/isotest-ng


    CVE portal for SUSE Rancher products by gmacedo

    Description

    Currently it's a bit difficult for users to quickly see the list of CVEs affecting images in Rancher, RKE2, Harvester and Longhorn releases. Users need to individually look up each CVE in the SUSE CVE database page - https://www.suse.com/security/cve/ . This is not optimal, because those CVE pages are a bit hard to read and also contain data for all SLE and BCI products, making it difficult to see only the CVEs affecting the latest release of Rancher, for example. We understand that certain customers are only looking for CVE data for Rancher and not for SLE or BCI.

    Goals

    The objective is to create a page that is simple to read and navigate and contains only CVE data related to Rancher, RKE2, Harvester and Longhorn, where it is easy to search by CVE ID, image name or release version. The page should also provide the raw data as an exportable CSV file.

    It must be an MVP with a minimal amount of effort/time invested, but still provide great value to our users and save the time that the Rancher Security team currently wastes by manually sharing such data. It might not be long-lived, as it can be replaced in 2-3 years with a better SUSE-wide solution.

    Resources

    • The page must be simple and easy to read.
    • The UI/UX must be as straightforward as possible with minimal visual noise.
    • The content must be created automatically from the raw data that we already have internally.
    • It must be updated automatically on a daily basis and on ad-hoc runs (when needed).
    • The CVE status must be aligned with VEX.
    • The raw data must be exportable as CSV file.
    • Ideally it will be written in Go or pure Shell script with basic HTML and no external dependencies in CSS or JS.


    Bot to identify reserved data leak in local files or when publishing on remote repository by mdati

    Description

    The scope here is to prevent reserved or generally "unwanted" data from being pushed and saved to a public repository, e.g. on GitHub, causing disclosure or leakage of confidential information.

    The above definition of reserved or "unwanted" may vary depending on the context: sometimes secret keys or passwords are stored in data or configuration files, or hardcoded in source code, and depending on the scope of the archive and the required level of security this can be wanted, permitted, or not allowed at all.

    As the main target here, the secrets will be registration keys or passwords, to be detected and managed locally or in a CI pipeline.
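    A minimal sketch of the local-detection idea, assuming illustrative pattern names and regexes (the real patterns live in a separate, customizable module as described in the Notes below); matches are reported on stderr without echoing the secret values:

    ```python
    # Scan the files given on the command line for secret-looking patterns and
    # report location and category only, never the matched value itself.
    import re
    import sys
    from pathlib import Path

    PATTERNS = {
        "registration key": re.compile(r"regcode[-_][0-9a-z]{8,}", re.IGNORECASE),
        "password assignment": re.compile(r"password\s*[:=]\s*\S+", re.IGNORECASE),
        "private key block": re.compile(r"-----BEGIN (RSA|OPENSSH|PGP) PRIVATE KEY"),
    }

    def scan_file(path: Path) -> int:
        findings = 0
        for lineno, line in enumerate(path.read_text(errors="replace").splitlines(), 1):
            for name, pattern in PATTERNS.items():
                if pattern.search(line):
                    print(f"{path}:{lineno}: possible {name}", file=sys.stderr)
                    findings += 1
        return findings

    if __name__ == "__main__":
        total = sum(scan_file(Path(arg)) for arg in sys.argv[1:])
        # A non-zero exit code lets a git pre-push hook or a CI job block the push.
        sys.exit(1 if total else 0)
    ```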

    Goals

    • Detection:

      • Local detection: detect secret words present in local files;
      • Remote detection: detect secrets in files that are about to be transferred to a remote repository, e.g. via git push, in a pipeline;
    • Reporting:

      • report the result of the detection on stderr and/or in log files, excluding the secret values from the output.
    • Action:

      • Handle a detection by either deleting or masking the impacted code, deleting/moving the file itself, or simply reporting it.

    Resources

    • Project repository, published on Github (link): m-dati/hkwk24;
    • Reference folder: hkwk24/chksecret;
    • First pull request (link): PR#1;
    • Second PR, for improvements: PR#2;
    • README.md and TESTS.md documentation files available in the repo root;
    • Test subproject repository, for testing CI on push [TBD].

    Notes

    We use some example secret words here, which can still be improved.
    The various patterns to match the desired reserved words are written in a separate module, so they can be updated or customized on demand.

    [Legend: TBD = to be done]


    Model checking the BPF verifier by shunghsiyu

    Project Description

    The BPF verifier plays a crucial role in securing the system (though less so now that unprivileged BPF is disabled by default in both upstream and SLES), and bugs in the verifier have led to privilege escalation vulnerabilities in the past (e.g. CVE-2021-3490).

    One way to check whether the verifier has bugs is to use model checking (a formal verification technique): in other words, build an abstract model of how the verifier operates and then check whether a certain condition can occur (e.g. an incorrect calculation during value tracking of registers) by giving both the model and the condition to a solver.

    For the solver I will be using the Z3 SMT solver to do the checking, since it provides a Python binding that is relatively easy to use.

    Goal for this Hackweek

    Learn how to use the Z3 Python binding (i.e. Z3Py) to build a model of (part of) the BPF verifier, probably the part related to value tracking using tristate numbers (aka tnums), and then check that the algorithm works as intended.
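    To give an idea of the approach, here is a minimal Z3Py sketch of that kind of check, with the tnum_add() algorithm transcribed from kernel/bpf/tnum.c; the bit width and the rest of the harness are illustrative assumptions:

    ```python
    # Ask Z3 for a counterexample to the soundness of tnum_add(): two concrete
    # values tracked by two tnums whose sum escapes the resulting tnum.
    from z3 import BitVec, Solver, Not, unsat

    BITS = 8  # keep small while experimenting; the kernel tracks 64-bit registers

    def tnum_contains(value, mask, x):
        # A concrete x is tracked by tnum (value, mask) iff its known bits match value.
        return (x & ~mask) == value

    def tnum_add(av, am, bv, bm):
        # Transcription of tnum_add() from kernel/bpf/tnum.c.
        sm = am + bm
        sv = av + bv
        sigma = sm + sv
        chi = sigma ^ sv
        mu = chi | am | bm
        return (sv & ~mu, mu)

    av, am, bv, bm = (BitVec(n, BITS) for n in ("av", "am", "bv", "bm"))
    x, y = BitVec("x", BITS), BitVec("y", BITS)

    s = Solver()
    # Well-formed tnums never mark a bit as both known (value) and unknown (mask).
    s.add(av & am == 0, bv & bm == 0)
    # x and y are arbitrary concrete values tracked by the two input tnums.
    s.add(tnum_contains(av, am, x), tnum_contains(bv, bm, y))
    rv, rm = tnum_add(av, am, bv, bm)
    # Look for a case where x + y is not tracked by the result tnum.
    s.add(Not(tnum_contains(rv, rm, x + y)))

    if s.check() == unsat:
        print(f"no counterexample: tnum_add looks sound at {BITS} bits")
    else:
        print("counterexample:", s.model())
    ```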

    Resources


    Contributing to Linux Kernel security by pperego

    Description

    A couple of weeks ago, I found this blog post by Gustavo Silva, a Linux Kernel contributor.

    I have always wanted to get back into hacking on the Linux kernel, so I asked for access to the Coverity scan dashboard, and I want to contribute to the Linux kernel by fixing some minor issues.

    I also want to create a Linux kernel fuzzing lab using qemu and syzkaller.

    Goals

    1. Fix at least 2 security bugs
    2. Create the fuzzing lab and have it running

    The story so far

    • Day 1: setting up a virtual machine for kernel development using Tumbleweed. Reading a lot of documentation, getting familiar with the Coverity dashboard and with the procedures for submitting a kernel patch.
    • Day 2: I read a lot of documentation and triaged some findings on the Coverity SAST dashboard. I can confirm that SAST tools are great false-positive generators, even for low-hanging fruit.
    • Day 3: Working on trivial changes after reading this blog post: https://www.toblux.com/posts/2024/02/linux-kernel-patches.html. I still have to get comfortable with the patch preparation and submission process.
      • First trivial patch sent: using the str_true_false() macro instead of hard-coded strings in a staging driver for an LCD display
      • Fix for a dereference before null check issue discovered by Coverity (CID 1601566) https://scan7.scan.coverity.com/#/project-view/52110/11354?selectedIssue=1601566
    • Day 4: Triaging more issues found by Coverity.
      • The patch for CID 1601566 was refused. The check against the NULL pointer was pointless so I prepared a version 2 of the patch removing the check.
      • Fixed another dereference before NULL check in the iwl_mvm_parse_wowlan_info_notif() routine (CID 1601547). This one had already been submitted by another kernel hacker :(
    • Day 5: Wrapping up. I had to do some minor rework on the patch for CID 1601566. I found a stalker bothering me in private emails, and people I interacted with advised that he is a well-known nuisance. Markus Elfring, for the record.
    • Wrapping up: being back to kernel hacking is amazing and I don't want to stop. My battery pack is completely drained, but changing scope gave me a great boost and I really want to keep this energy instead of doing a single task for months.

      I failed at setting up a fuzzing lab, but I was too optimistic about the patch submission process.

    The patches

    1


    OIDC Loginproxy by toe

    Description

    Reverse proxies can be a useful option to separate authentication logic from application logic. SUSE and openSUSE use "loginproxies" as an authentication layer in front of several services.

    Currently, loginproxies exist which support LDAP authentication or SAML authentication.

    Goals

    The goal of this Hack Week project is to create another loginproxy which supports OpenID Connect authentication and which can then act as a drop-in replacement for the existing LDAP or SAML loginproxies.

    Testing is intended to focus on the integration with OIDC IDPs from Okta, KanIDM and Authentik.

    Resources