Create a course on moodle.opensuse.org

Moodle is the world's most popular learning management system. Start creating your online learning site in minutes! (https://moodle.org/)

Free knowledge is important to our society. So let's create some free training courses that our trainees can also use.

Moodle is a powerful tool for that.

Goal for this Hackweek

  • Get familiar with Moodle.
  • By the end of the week, have an overview of what Moodle offers and a "network basics" course online, with a quiz at the end (see the course-creation sketch after this list).
  • Plus: have a remote lab ready that can always be reset to a clean state (lab-reset sketch below).
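Courses can be created through the web UI, but for reproducibility Moodle's REST web-service API is handy. Below is a minimal sketch, assuming web services are enabled on moodle.opensuse.org and a token with the right capabilities exists; core_course_create_courses is a standard Moodle web-service function, while the token value and category id are placeholders.

    # Sketch: create the course via Moodle's REST web-service API.
    # WSTOKEN is a hypothetical token for a user allowed to create courses.
    WSTOKEN=changeme
    curl -s "https://moodle.opensuse.org/webservice/rest/server.php" \
      -d "wstoken=$WSTOKEN" \
      -d "wsfunction=core_course_create_courses" \
      -d "moodlewsrestformat=json" \
      -d "courses[0][fullname]=Network Basics" \
      -d "courses[0][shortname]=netbasics" \
      -d "courses[0][categoryid]=1"    # category id 1 is an assumption

For the always-resettable lab, one possible approach (an assumption, not something the project specifies) is a libvirt VM that gets reverted to a clean snapshot after each training session:

    # Hypothetical lab VM named "netlab": snapshot once, revert any time
    virsh snapshot-create-as netlab clean-state
    virsh snapshot-revert netlab clean-state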

Resources

  • https://github.com/lethliel/moodle-networking-basics
  • https://moodle.opensuse.org/

Looking for hackers with the skills:

networking, routing, switching, training, moodle

This project is part of:

Hack Week 20

Activity

  • over 4 years ago: rangelino liked this project.
  • over 4 years ago: mstrigl started this project.
  • over 4 years ago: mstrigl added keyword "networking" to this project.
  • over 4 years ago: mstrigl added keyword "routing" to this project.
  • over 4 years ago: mstrigl added keyword "switching" to this project.
  • over 4 years ago: mstrigl added keyword "training" to this project.
  • over 4 years ago: mstrigl added keyword "moodle" to this project.
  • over 4 years ago: mstrigl originated this project.

Comments

  • ONalmpantis wrote over 4 years ago:

    Nice! I would like to take the network course when it's done!

Similar Projects

Try AI training with ROCm and LoRA by bmwiedemann

Description

I want to set up a Radeon RX 9060 XT 16 GB at home with ROCm on Slowroll.

Goals

I want to test how fast AI inference can get with the GPU and whether I can use LoRA to re-train an existing free model for some task.

Resources

  • https://rocm.docs.amd.com/en/latest/compatibility/compatibility-matrix.html
  • https://build.opensuse.org/project/show/science:GPU:ROCm
  • https://src.opensuse.org/ROCm/
  • https://www.suse.com/c/lora-fine-tuning-llms-for-text-classification/
Results

Got inference working with llama.cpp:

    # build llama.cpp with the ROCm/HIP backend for the RX 9060 XT (gfx1200)
    export LLAMACPP_ROCM_ARCH=gfx1200
    HIPCXX="$(hipconfig -l)/clang" HIP_PATH="$(hipconfig -R)" \
    cmake -S . -B build -DGGML_HIP=ON -DAMDGPU_TARGETS=$LLAMACPP_ROCM_ARCH \
        -DCMAKE_BUILD_TYPE=Release -DLLAMA_CURL=ON \
        -Dhipblas_DIR=/usr/lib64/cmake/hipblaslt/ \
        && cmake --build build --config Release -j8

    # start the server with all layers offloaded to the GPU
    # ($P is the parent directory of the llama.cpp checkout)
    m=models/gpt-oss-20b-mxfp4.gguf
    cd $P/llama.cpp && build/bin/llama-server --model $m --threads 8 \
        --port 8005 --host 0.0.0.0 --device ROCm0 --n-gpu-layers 999

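Once the server is up, a quick smoke test can be run against llama-server's OpenAI-compatible chat endpoint (the endpoint path is standard for llama-server; the prompt and port are just the ones used above):

    # ask the local server for a short completion
    curl -s http://localhost:8005/v1/chat/completions \
      -H "Content-Type: application/json" \
      -d '{"messages":[{"role":"user","content":"Say hello"}],"max_tokens":32}'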

Without the --device option it faulted, maybe because my APU also shows up as a ROCm device.
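One way to check that theory is to list the agents ROCm enumerates; if the APU shows up, hiding it via HIP_VISIBLE_DEVICES would avoid the need for --device. The device index below is an assumption:

    # list all agents ROCm sees (CPU cores, APU and discrete GPU)
    rocminfo | grep -E 'Agent|Marketing Name'
    # hypothetical: expose only the discrete GPU, assuming it is device 0
    export HIP_VISIBLE_DEVICES=0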

I updated/fixed various related packages: https://src.opensuse.org/ROCm/rocm-examples/pulls/1, https://src.opensuse.org/ROCm/hipblaslt/pulls/1, and SR 1320959.

Benchmark

I benchmarked inference with llama.cpp + gpt-oss-20b-mxfp4.gguf and ROCm offloading to a Radeon RX 9060 XT 16 GB, varying the number of layers offloaded to the GPU:

  • 0 layers: 14.49 tokens/s (CPU only, 8 cores)
  • 9 layers: 17.79 tokens/s, 34% VRAM
  • 15 layers: 22.39 tokens/s, 51% VRAM
  • 20 layers: 27.49 tokens/s, 64% VRAM
  • 24 layers: 41.18 tokens/s, 74% VRAM
  • 25+ layers: 86.63 tokens/s, 75% VRAM (only 200% CPU load)

So there is a significant performance boost once the whole model fits into the GPU's VRAM.
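A sweep like this can be scripted with llama.cpp's bundled llama-bench tool; a minimal sketch, with the layer counts mirroring the list above:

    # measure generation speed at several GPU offload levels
    for ngl in 0 9 15 20 24 25; do
        build/bin/llama-bench -m models/gpt-oss-20b-mxfp4.gguf -ngl $ngl -t 8
    done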