Project Description
There are already multiple network benchmark tools, the most popular probably being netperf and iperf. Each has its pros and cons, but the biggest drawback is that netperf runs only one connection (flow) and, while iperf can use multiple connections, it still runs in a single thread. For benchmarking contemporary fast networks like 40Gb/s or 100Gb/s ethernet, this can be a severe limitation, as performance is often CPU bound. Even on 10Gb/s ethernet, we are often unable to saturate the medium if tunneling or complex packet processing is involved.
Of the two, only netperf seems to have public development, and rewriting an application that was never intended to be multithreaded is hard and bug prone. Recently I also learned about uperf, but it looks abandoned and feels rather complicated to use. So why not use the hackweek as an opportunity to see how hard it is to write a simple benchmarking utility?
Goal for Hackweek 20
Essential features:
- implementation of a multithreaded client and server
- TCP throughput and latency tests (like netperf TCP_STREAM and TCP_RR); a rough sketch of such a test follows this list
- configurable basic parameters (message size, receive/send buffer size)
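A minimal sketch of what a multithreaded TCP_STREAM-style throughput test boils down to, assuming a hypothetical sink server at a placeholder address (illustrative only, not the nperf code): each thread opens its own connection, pushes fixed-size messages for a fixed time, and the per-thread byte counts are summed at the end.

```c
/* Sketch only: one TCP connection per thread, each pushing fixed-size
 * messages to a sink server for a fixed time; not the nperf code. */
#include <arpa/inet.h>
#include <netinet/in.h>
#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/socket.h>
#include <time.h>
#include <unistd.h>

#define THREADS   4        /* number of parallel flows */
#define MSG_SIZE  16384    /* message size, would be configurable */
#define DURATION  10       /* seconds per test run */

static const char *server_ip = "192.0.2.1";  /* placeholder address */
static const unsigned short server_port = 12345;

struct flow {
    pthread_t tid;
    unsigned long long bytes;   /* bytes sent by this thread */
};

static void *stream_worker(void *arg)
{
    struct flow *f = arg;
    char *buf = calloc(1, MSG_SIZE);
    struct sockaddr_in sa = { .sin_family = AF_INET,
                              .sin_port = htons(server_port) };
    inet_pton(AF_INET, server_ip, &sa.sin_addr);

    int sock = socket(AF_INET, SOCK_STREAM, 0);
    if (sock < 0 || connect(sock, (struct sockaddr *)&sa, sizeof(sa)) < 0) {
        perror("connect");
        if (sock >= 0)
            close(sock);
        free(buf);
        return NULL;
    }

    time_t end = time(NULL) + DURATION;
    while (time(NULL) < end) {
        ssize_t n = write(sock, buf, MSG_SIZE);
        if (n <= 0)
            break;
        f->bytes += (unsigned long long)n;
    }
    close(sock);
    free(buf);
    return NULL;
}

int main(void)
{
    struct flow flows[THREADS] = { { 0 } };
    unsigned long long total = 0;

    for (int i = 0; i < THREADS; i++)
        pthread_create(&flows[i].tid, NULL, stream_worker, &flows[i]);
    for (int i = 0; i < THREADS; i++) {
        pthread_join(flows[i].tid, NULL);
        total += flows[i].bytes;
    }
    printf("aggregate throughput: %.2f Gbit/s\n",
           total * 8.0 / DURATION / 1e9);
    return 0;
}
```

A real tool of course needs a control connection, warm-up handling and per-thread timing, but the point stands: one flow per thread keeps each CPU busy independently.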
Nice-to-have features (if time allows; otherwise leave for later):
- statistical evaluation (like netperf -i and -I)
- flexible and future-proof control protocol (netlink based?)
- optional output suitable for automated processing (XML / json?)
- more test modes (UDP, SCTP, TCP_CRR, TCP_CC)
- man page
- a package in OBS (compatible down to SLE11-SP4-LTSS)
End of Hackweek 20 status
Code is available in the github repository, tag hackweek20-end.
Done:
- multithreaded client (nperf) and server (nperfd) utilities
- TCP_STREAM and TCP_RR tests
- configurable basic test parameters
- simple, per-iteration, per-thread and raw data output
- basic statistical processing (averages, mean deviation); see the sketch after this list
- help text (nperf -h / nperfd -h)
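The basic statistical processing mentioned above amounts to something like the following generic sketch (example numbers, not the nperf code): average the per-iteration results and report the mean absolute deviation.

```c
/* Sketch: average and mean (absolute) deviation over per-iteration
 * throughput results; illustrative only, not the nperf code. */
#include <math.h>
#include <stdio.h>

int main(void)
{
    double results[] = { 9.41, 9.38, 9.45, 9.40, 9.39 };  /* Gbit/s, example data */
    int n = sizeof(results) / sizeof(results[0]);
    double sum = 0.0, dev = 0.0;

    for (int i = 0; i < n; i++)
        sum += results[i];
    double avg = sum / n;

    for (int i = 0; i < n; i++)
        dev += fabs(results[i] - avg);
    dev /= n;

    printf("average %.3f Gbit/s, mean deviation %.3f\n", avg, dev);
    return 0;
}
```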
ToDo:
- daemonized nperfd
- statistical evaluation (like netperf -i and -I); a sketch of this stopping rule follows the list
- more test modes (UDP, SCTP, TCP_CRR, TCP_CC)
- flexible and future-proof control protocol (netlink based?)
- optional output suitable for automated processing (XML / json?)
- man page
- a package in OBS (compatible down to SLE11-SP4-LTSS)
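For context, the netperf -i/-I style evaluation means, roughly, running repeated test iterations until a requested confidence interval around the mean is reached. A rough sketch of that stopping rule, using a fixed z value (normal approximation) as a simplification; all names and numbers below are invented:

```c
/* Sketch of a netperf -i/-I style stopping rule: keep running test
 * iterations until the confidence interval is narrow enough relative
 * to the mean. Fixed z value instead of a proper t-distribution. */
#include <math.h>
#include <stdbool.h>
#include <stdio.h>

#define MAX_ITER 30

/* Stub standing in for one complete test run returning throughput. */
static double run_one_iteration(void)
{
    static const double fake[] = { 9.41, 9.38, 9.45, 9.40, 9.39 };
    static int i;
    return fake[i++ % 5];
}

/* True when the half-width of the interval is <= width * mean. */
static bool interval_reached(const double *r, int n, double z, double width)
{
    double sum = 0.0, var = 0.0;

    for (int i = 0; i < n; i++)
        sum += r[i];
    double mean = sum / n;

    for (int i = 0; i < n; i++)
        var += (r[i] - mean) * (r[i] - mean);
    var /= (n - 1);

    return z * sqrt(var / n) <= width * mean;
}

int main(void)
{
    double r[MAX_ITER], sum = 0.0;
    int n = 0, min_iter = 3;

    while (n < MAX_ITER) {
        r[n] = run_one_iteration();
        sum += r[n];
        n++;
        /* example: ~99% level (z = 2.58), +/- 2.5% of the mean */
        if (n >= min_iter && interval_reached(r, n, 2.58, 0.025))
            break;
    }
    printf("result: %.3f after %d iterations\n", sum / n, n);
    return 0;
}
```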
The utilities can already be used for throughput and round-trip time measurement, but there is still work to be done before they are suitable for general use.
Resources
- netperf
- iperf
- github repository - project code on GitHub
Similar Projects
Remote control for Adam Audio active monitor speakers by dmach
Description
I own a pair of Adam Audio A7V active studio monitor speakers. They have ethernet connectors that allow changing their settings remotely using the A Control software - from Windows only :-(. I couldn't find any open source alternative for Linux besides the AES70.js library.
Goals
- Create a command-line tool for controlling the speakers.
- Python is the language of choice.
- Implement only a simple tool with the desired functionality rather than a full coverage of AES70 standard.
TODO
- ✅ discover the device
- ❌ get device manufacturer and model
- ✅ get serial number
- ✅ get description
- ✅ set description
- ✅ set mute
- ✅ set sleep
- ✅ set input (XLR (balanced), RCA (unbalanced))
- ✅ set room adaptation
- bass (1, 0, -1, -2)
- desk (0, -1, -2)
- presence (1, 0, -1)
- treble (1, 0, -1)
- ✅ set voicing (Pure, UNR, Ext)
- ❌ the Ext voicing enables the following extended functionality:
- gain
- equalizer bands
- on/off
- type
- freq
- q
- gain
- ❌ udev rules to sleep/wakeup the speakers together with the sound card
Resources
- https://www.adam-audio.com/en/a-series/a7v/
- https://www.adam-audio.com/en/technology/a-control-remote-software/
- https://github.com/DeutscheSoft/AES70.js
- https://www.aes.org/publications/standards/search.cfm?docID=101 - paid
- https://www.aes.org/standards/webinars/AESStandardsWebinarSC0212L20220531.pdf
- https://ocaalliance.github.io/downloads/AES143%20Network%20track%20NA10%20-%20AES70%20Controller.pdf
Result
- The code is available on GitHub: https://github.com/dmach/pacontrol
Add a machine-readable output to dmidecode by jdelvare
Description
There have been repeated requests for a machine-friendly dmidecode output over the last decade. During Hack Week 19, 5 years ago, I prepared the code to support alternative output formats, but didn't have the time to go further. Last year, Jiri Hnidek from Red Hat posted a proof-of-concept implementation to add JSON output support. This is a fairly large pull request which needs to be carefully reviewed and tested.
Goals
Review Jiri's work and provide constructive feedback. Merge the code if acceptable. Evaluate the costs and benefits of using a library such as json-c.
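To make the json-c question concrete: emitting a single decoded structure with json-c could look roughly like the sketch below. The field names and layout are invented for illustration and are not the format proposed in the pull request.

```c
/* Sketch: emitting one decoded DMI structure as JSON with json-c.
 * Field names are invented for illustration, not dmidecode's format. */
#include <json-c/json.h>
#include <stdio.h>

int main(void)
{
    struct json_object *root = json_object_new_object();
    struct json_object *handle = json_object_new_object();

    json_object_object_add(handle, "type", json_object_new_int(0));
    json_object_object_add(handle, "description",
                           json_object_new_string("BIOS Information"));
    json_object_object_add(handle, "vendor",
                           json_object_new_string("ExampleVendor"));
    json_object_object_add(root, "0x0000", handle);

    printf("%s\n", json_object_to_json_string_ext(root,
                                                  JSON_C_TO_STRING_PRETTY));
    json_object_put(root);   /* drop the reference, frees the tree */
    return 0;
}
```

The trade-off to evaluate is exactly this: the library handles escaping and formatting for free, at the cost of a new runtime dependency for a tool that has traditionally had none.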
ESETv2 Emulator / interpreter by m.crivellari
Description
ESETv2 is an intriguing challenge developed by ESET, available on their website under the "Challenge" menu.
The challenge involves an "assembly-like" language and a Python compiler that generates .evm binary files.
This is an example using one of their samples (it prints N Fibonacci numbers):
.dataSize 0
.code
loadConst 0, r1 # first
loadConst 1, r2 # second
loadConst 1, r14 # loop helper
consoleRead r3
loop:
jumpEqual end, r3, r15
add r1, r2, r4
mov r2, r1
mov r4, r2
consoleWrite r1
sub r3, r14, r3
jump loop
end:
hlt
This language also supports multi-threading. It includes instructions such as createThread to start a new thread, joinThread to wait until a thread completes, and lock/unlock to facilitate synchronization between threads.
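One plausible way to map those instructions onto POSIX threads in a C interpreter, shown purely as a sketch of the approach; the names, types, and stubbed dispatch loop below are invented, not taken from eset_vm2.

```c
/* Sketch: how createThread / joinThread / lock / unlock could map onto
 * pthreads inside a C interpreter; all names here are invented. */
#include <pthread.h>
#include <stdint.h>
#include <stdio.h>

struct vm_thread {
    uint32_t pc;                 /* address the thread starts executing at */
    pthread_t tid;
};

/* Stub standing in for the real dispatch loop that decodes opcodes. */
static void interpret(struct vm_thread *t)
{
    printf("running VM thread from pc=%u\n", t->pc);
}

static void *thread_entry(void *arg)
{
    interpret(arg);              /* run until the thread hits hlt */
    return NULL;
}

/* createThread <label>: spawn a new VM thread at the given address. */
static void op_create_thread(struct vm_thread *child, uint32_t target)
{
    child->pc = target;
    pthread_create(&child->tid, NULL, thread_entry, child);
}

/* joinThread: block until the referenced thread has executed hlt. */
static void op_join_thread(struct vm_thread *child)
{
    pthread_join(child->tid, NULL);
}

/* lock / unlock: a single global mutex is enough for a first version. */
static pthread_mutex_t vm_lock = PTHREAD_MUTEX_INITIALIZER;
static void op_lock(void)   { pthread_mutex_lock(&vm_lock); }
static void op_unlock(void) { pthread_mutex_unlock(&vm_lock); }

int main(void)
{
    struct vm_thread child;
    op_create_thread(&child, 42);   /* pretend 42 is a code label */
    op_lock();                       /* guard some shared VM state */
    op_unlock();
    op_join_thread(&child);
    return 0;
}
```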
Goals
- create a full interpreter able to run all the available samples provided by ESET.
- improve / optimize memory usage (e.g. using bitfields where needed, as well as avoiding unnecessary memory allocations)
Resources
- Challenge URL: https://join.eset.com/en/challenges/core-software-engineer
- My github project: https://github.com/DispatchCode/eset_vm2 (not 100% complete)
Achievements
The project is still not complete. The lock/unlock instruction implementation was added, but further debugging is needed; there is a bug somewhere. The code currently works for almost all the examples in the samples folder: one of them is not yet runnable (due to a missing "write" opcode implementation), and another triggers the bug, which has not been investigated yet.
FizzBuzz OS by mssola
Project Description
FizzBuzz OS (or just fbos) is an idea I've had in order to better grasp the fundamentals of the low level of a RISC-V machine. In practice, I'd like to build a small Operating System kernel that is able to launch three processes: one that simply prints "Fizz", another that prints "Buzz", and a third which prints "FizzBuzz". These processes are unaware of each other and it's up to the kernel to schedule them by using the timer interrupts as provided by OpenSBI (fizz on % 3 seconds, buzz on % 5 seconds, and fizzbuzz on % 15 seconds).
This kernel provides just one system call, write, which allows any program to pass a string to be written to stdout.
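The scheduling rule itself is simple enough to sketch in plain C (illustrative only, not the actual fbos code, which runs against the RISC-V SBI timer):

```c
/* Sketch: choosing which process to wake on a timer tick based on the
 * elapsed seconds; illustrative only, not the actual fbos scheduler. */
#include <stdio.h>

enum task { TASK_NONE, TASK_FIZZ, TASK_BUZZ, TASK_FIZZBUZZ };

static enum task pick_task(unsigned long seconds)
{
    if (seconds % 15 == 0)
        return TASK_FIZZBUZZ;   /* divisible by both 3 and 5 */
    if (seconds % 5 == 0)
        return TASK_BUZZ;
    if (seconds % 3 == 0)
        return TASK_FIZZ;
    return TASK_NONE;           /* nothing scheduled this second */
}

int main(void)
{
    static const char *names[] = { "-", "Fizz", "Buzz", "FizzBuzz" };
    for (unsigned long s = 1; s <= 15; s++)
        printf("t=%2lus -> %s\n", s, names[pick_task(s)]);
    return 0;
}
```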
This project is free software and you can find it here.
Goal for this Hackweek
- Better understand the RISC-V SBI interface.
- Better understand RISC-V in privileged mode.
- Have fun.
Resources
Results
The project was a resounding success: lots of learning, and the initial target was met.
FastFileCheck work by pstivanin
Description
FastFileCheck is a high-performance, multithreaded file integrity checker for Linux. Designed for speed and efficiency, it utilizes parallel processing and a lightweight database to quickly hash and verify large volumes of files, ensuring their integrity over time.
https://github.com/paolostivanin/FastFileCheck
Goals
- Release v1.0.0
Design overview (a rough code sketch of this layout follows at the end of this section):
- Main thread (producer): traverses directories and feeds the queue (one thread is more than enough for most use cases)
- Dedicated consumer thread: manages queue and distributes work to threadpool
- Worker threads: compute hashes in parallel
This separation of concerns is efficient because:
- Directory traversal is I/O bound and works well in a single thread
- Queue management is centralized, preventing race conditions
- Hash computation is CPU-intensive and properly parallelized
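A compressed sketch of that layout (hypothetical names, not the FastFileCheck code; the dedicated dispatcher thread is simplified away and the workers pop from a bounded queue directly):

```c
/* Sketch of the producer / queue / worker layout described above;
 * names and sizes are invented, this is not the FastFileCheck code. */
#include <pthread.h>
#include <stdbool.h>
#include <stdio.h>
#include <string.h>

#define QUEUE_SLOTS  64
#define WORKERS      4
#define PATH_LEN_MAX 4096

static char queue[QUEUE_SLOTS][PATH_LEN_MAX];
static int q_head, q_tail, q_count;
static bool done;                      /* producer finished traversing */
static pthread_mutex_t q_lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t q_not_empty = PTHREAD_COND_INITIALIZER;
static pthread_cond_t q_not_full = PTHREAD_COND_INITIALIZER;

static void queue_push(const char *path)
{
    pthread_mutex_lock(&q_lock);
    while (q_count == QUEUE_SLOTS)
        pthread_cond_wait(&q_not_full, &q_lock);
    strncpy(queue[q_tail], path, PATH_LEN_MAX - 1);
    q_tail = (q_tail + 1) % QUEUE_SLOTS;
    q_count++;
    pthread_cond_signal(&q_not_empty);
    pthread_mutex_unlock(&q_lock);
}

/* Returns false once the producer is done and the queue is drained. */
static bool queue_pop(char *path)
{
    pthread_mutex_lock(&q_lock);
    while (q_count == 0 && !done)
        pthread_cond_wait(&q_not_empty, &q_lock);
    if (q_count == 0) {                /* drained and producer finished */
        pthread_mutex_unlock(&q_lock);
        return false;
    }
    strncpy(path, queue[q_head], PATH_LEN_MAX);
    q_head = (q_head + 1) % QUEUE_SLOTS;
    q_count--;
    pthread_cond_signal(&q_not_full);
    pthread_mutex_unlock(&q_lock);
    return true;
}

static void *worker(void *arg)
{
    char path[PATH_LEN_MAX];
    while (queue_pop(path))
        printf("hashing %s\n", path);  /* real code would hash the file */
    (void)arg;
    return NULL;
}

int main(void)
{
    pthread_t tids[WORKERS];
    for (int i = 0; i < WORKERS; i++)
        pthread_create(&tids[i], NULL, worker, NULL);

    /* The producer would traverse directories here; fake a few paths. */
    queue_push("/etc/hostname");
    queue_push("/etc/os-release");

    pthread_mutex_lock(&q_lock);       /* signal that nothing else is coming */
    done = true;
    pthread_cond_broadcast(&q_not_empty);
    pthread_mutex_unlock(&q_lock);

    for (int i = 0; i < WORKERS; i++)
        pthread_join(tids[i], NULL);
    return 0;
}
```

The bounded queue keeps the I/O-bound traversal from racing ahead of the CPU-bound hashing, which is exactly the separation of concerns the design above describes.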