Project Description
For this Hack Week, I want to focus on building a small application around a Raspberry Pi, a motion sensor and video capture.
It will use the motion sensor to trigger a clip recording (maybe a 10s clip) that will be uploaded to some kind of storage (probably Cloudinary).
Motivation:
I have many kinds of birds coming to my window for food and water. I wanted a way to see them even when I'm not around, to be sure I don't miss the fun :P
The initial idea was to build it around Node.js, since JS is my primary programming language, but this might be a good time to learn and use Python, since it is well suited for this kind of thing.
Goal for this Hack Week
Have a public repository with the application that can be used as a DIY starting point.
If time allows:
- Create a public space to visualize the clips
- Use a pre-trained ML model to tag and categorize the bird species
Resources
All the code will be made available here: https://github.com/en3sis/pi-watcher
Hardware:
- https://www.raspberrypi.org/products/raspberry-pi-3-model-b/
- https://www.elektor.com/hc-sr501-pir-motion-sensor-module
- Logitech 720p webcam
This project is part of:
Hack Week 20
Activity
Comments
-
over 4 years ago by scuescu
Day 1 update
For the first day, I prepared all the hardware (Pi, motion sensor) and started looking into how to get them to work together in Python. After a lot of work and tweaking, I managed to get the sensor to trigger on movement and send the signal to the Pi. The first part of the code (for the motion) is available at https://github.com/en3sis/pi-watcher
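For reference, a minimal sketch of what the PIR-to-Pi part looks like in code, assuming RPi.GPIO and the HC-SR501 output wired to GPIO 4 (both assumptions; the actual pi-watcher code may be structured differently):

```python
import time
import RPi.GPIO as GPIO

PIR_PIN = 4  # assumption: HC-SR501 OUT pin wired to GPIO 4 (BCM numbering)

GPIO.setmode(GPIO.BCM)
GPIO.setup(PIR_PIN, GPIO.IN)

def on_motion(channel):
    # Placeholder handler: this is where a clip recording would be triggered
    print("Motion detected on GPIO", channel)

# Call the handler on the sensor's rising edge instead of busy-polling
GPIO.add_event_detect(PIR_PIN, GPIO.RISING, callback=on_motion, bouncetime=300)

try:
    while True:
        time.sleep(1)  # keep the main thread alive; callbacks run in the background
finally:
    GPIO.cleanup()
```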

Day 2
For today I want to get the video recording part ready. For that, I'm looking into OpenCV (probably a little overkill, but it could work as a base for more features :P ).
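As a rough idea of that piece, here's a bare-bones OpenCV sketch that captures a fixed-length clip from the webcam (the resolution, FPS, and codec are placeholder values, not the project's settings):

```python
import cv2

def record_clip(path="clip.avi", seconds=10, fps=20.0, size=(640, 480)):
    """Record a fixed-length clip from the first attached webcam."""
    cap = cv2.VideoCapture(0)
    writer = cv2.VideoWriter(path, cv2.VideoWriter_fourcc(*"XVID"), fps, size)
    for _ in range(int(seconds * fps)):
        ok, frame = cap.read()
        if not ok:
            break
        writer.write(cv2.resize(frame, size))
    cap.release()
    writer.release()

if __name__ == "__main__":
    record_clip()
```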
-
over 4 years ago by scuescu
Day 3
Yesterday I finished the clip recording part; now a ~10s clip is recorded when motion is detected.
The plans for today are:
- Upload the clip to Cloudinary
- Improve the clip recording part
- Set a cooldown of X time after a recording starts.
The latest code was uploaded to https://github.com/en3sis/pi-watcher
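For reference, a rough sketch of the Cloudinary upload step using the official cloudinary Python package (the folder name is a placeholder, and credentials are assumed to come from the environment; not necessarily how pi-watcher does it):

```python
import cloudinary
import cloudinary.uploader

# Credentials are assumed to come from the CLOUDINARY_URL environment variable;
# alternatively they can be set explicitly with cloudinary.config(...).

def upload_clip(path):
    result = cloudinary.uploader.upload(
        path,
        resource_type="video",  # clips are videos, not images
        folder="pi-watcher",    # placeholder folder name
    )
    return result["secure_url"]
```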
-
over 4 years ago by scuescu
Day 4
Yesterday I finished the uploading part and some other improvements, plus a static website where the uploaded clips can be previewed. A first demo can be found at https://serverless-en3sis.vercel.app/ with our first Rock Star.
An issue I ran into is that the motion sensor won't trigger if the target is behind the window (it's an infrared sensor, so it can't see through glass; the more you learn... :P ). Since I already started building the video recording part around OpenCV, I'm now looking into doing the motion detection in software with OpenCV.
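As a rough idea of what motion detection in software looks like, here's a minimal frame-differencing sketch with OpenCV (the threshold and minimum contour area are arbitrary values, not the project's tuning):

```python
import cv2

def watch_for_motion(camera_index=0, min_area=500):
    cap = cv2.VideoCapture(camera_index)
    prev = None
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.GaussianBlur(cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY), (21, 21), 0)
        if prev is None:
            prev = gray
            continue
        # Difference against the previous frame, threshold, and look for big blobs
        delta = cv2.absdiff(prev, gray)
        thresh = cv2.dilate(cv2.threshold(delta, 25, 255, cv2.THRESH_BINARY)[1], None, iterations=2)
        contours, _ = cv2.findContours(thresh, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
        if any(cv2.contourArea(c) > min_area for c in contours):
            print("Motion detected")  # placeholder: trigger the clip recording here
        prev = gray
    cap.release()
```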
Plans for today:
- Have a motion-detection system working with OpenCV
- If I'm done early, bring in PyTorch and look for an ML model that can help me label the bird species.
- Improve the preview application (?)
-
over 4 years ago by scuescu
Day 5:
Yesterday I implemented the motion detection with OpenCV. I ran into some issues, but in the end I finished it, and now I have a solid application (with a looooooot of room for improvement :P ).
To wrap it up, today I'll focus on the preview space and spend whatever time is left on improvements; I have a long list of them that I want to take care of.
Some new clips can be found at https://serverless-en3sis.vercel.app/
Similar Projects
Build a Single Camera 3D Scanner (Photogrammetry) by lparkin
Description
I want to see how fast I can develop a single-camera (Pi camera module v3) rig with a stepper motor controlling a turntable that rotates the model being scanned. The trick here is not to be super fancy with hundreds of sensors and data inputs, quite the opposite. I want to see how accurately I can scan objects into 3D-printable models using only a camera and as many fixed and known parameters as possible.
Development speed will be augmented with an agentic AI coding companion. As it stands, I have a 3D printer and pretty much all the electronics I need.
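A hedged sketch of the capture loop this rig implies: rotate the turntable a fixed step, let it settle, grab a frame, repeat. The stepper pins, step counts, and the use of OpenCV for capture are all placeholders; the real rig may use a dedicated motor driver library or picamera2 for the camera module v3.

```python
import time
import cv2
import RPi.GPIO as GPIO

STEP_PIN, DIR_PIN = 20, 21  # placeholders: depend on the stepper driver wiring
STEPS_PER_SHOT = 50         # placeholder: rotation between photos
NUM_SHOTS = 40              # placeholder: photos per full revolution

GPIO.setmode(GPIO.BCM)
GPIO.setup([STEP_PIN, DIR_PIN], GPIO.OUT)
GPIO.output(DIR_PIN, GPIO.HIGH)

def step_turntable(steps, delay=0.002):
    # Pulse the driver's STEP line the requested number of times
    for _ in range(steps):
        GPIO.output(STEP_PIN, GPIO.HIGH)
        time.sleep(delay)
        GPIO.output(STEP_PIN, GPIO.LOW)
        time.sleep(delay)

cap = cv2.VideoCapture(0)
for i in range(NUM_SHOTS):
    ok, frame = cap.read()
    if ok:
        cv2.imwrite(f"scan_{i:03d}.jpg", frame)
    step_turntable(STEPS_PER_SHOT)
    time.sleep(0.5)  # let vibrations settle before the next shot
cap.release()
GPIO.cleanup()
```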
Goals
- Design and print working/workable camera rig
- Design and print working/workable turntable (considering printing my own cylinder-style bearings as well)
- Assemble rig components into MVP assembly
- Develop an application that can hook into existing tools, or leverage a library like OpenCV, to process 2D images into a 3D model.
- Iterate until models are good enough to 3D print.
Resources
- https://www.instructables.com/3D-scanning-Photogrammetry-with-a-rotating-platfor/
- https://www.instructables.com/3d-Scan-Anything-Using-Just-a-Camera/
- https://www.instructables.com/Build-a-DIY-Desktop-3d-Scanner-With-Infinite-Resol/
- https://www.instructables.com/3D-Laser-Scanning-DIY/
Kubernetes-Based ML Lifecycle Automation by lmiranda
Description
This project aims to build a complete end-to-end Machine Learning pipeline running entirely on Kubernetes, using Go and containerized ML components.
The pipeline will automate the lifecycle of a machine learning model, including:
- Data ingestion/collection
- Model training as a Kubernetes Job
- Model artifact storage in an S3-compatible registry (e.g. Minio)
- A Go-based deployment controller that automatically deploys new model versions to Kubernetes using Rancher
- A lightweight inference service that loads and serves the latest model
- Monitoring of model performance and service health through Prometheus/Grafana
The outcome is a working prototype of an MLOps workflow that demonstrates how AI workloads can be trained, versioned, deployed, and monitored using the Kubernetes ecosystem.
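To make the "inference service loads the latest model" step concrete, here is a hedged Python/boto3 sketch that pulls the newest artifact from an S3-compatible bucket (the bucket name, key layout, environment variables, and the choice of Python/boto3 are assumptions, not necessarily how the project implements it):

```python
import os
import boto3

# Assumed MinIO/S3 settings; in the real pipeline these would come from
# Kubernetes secrets or the pod environment.
s3 = boto3.client(
    "s3",
    endpoint_url=os.environ.get("S3_ENDPOINT", "http://minio:9000"),
    aws_access_key_id=os.environ.get("S3_ACCESS_KEY"),
    aws_secret_access_key=os.environ.get("S3_SECRET_KEY"),
)

BUCKET = "models"     # placeholder bucket holding versioned artifacts
PREFIX = "my-model/"  # placeholder key prefix, one artifact per version

def latest_model_key():
    # Treat the most recently modified object under the prefix as "latest"
    objs = s3.list_objects_v2(Bucket=BUCKET, Prefix=PREFIX).get("Contents", [])
    if not objs:
        raise RuntimeError("no model artifacts found")
    return max(objs, key=lambda o: o["LastModified"])["Key"]

def download_latest(dest="/tmp/model.bin"):
    # The inference service would load this file and serve predictions from it
    s3.download_file(BUCKET, latest_model_key(), dest)
    return dest
```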
Goals
By the end of Hack Week, the project should:
- Produce a fully functional ML pipeline running on Kubernetes with:
  - Data collection job
  - Training job container
  - Storage and versioning of trained models
  - Automated deployment of new model versions
  - Model inference API service
  - Basic monitoring dashboards
- Showcase a Go-based deployment automation component, which scans the model registry and automatically generates & applies Kubernetes manifests for new model versions.
- Enable continuous improvement by making the system modular and extensible (e.g., additional models, metrics, autoscaling, or drift detection can be added later).
- Prepare a short demo explaining the end-to-end process and how new models flow through the system.
Resources
Updates
- Training pipeline and datasets
- Inference service (Python)
HTTP API for nftables by crameleon
Background
The idea originated in https://progress.opensuse.org/issues/164060 and is about building a RESTful API that translates authorized HTTP requests into nftables operations, possibly utilizing libnftables-json(5).
Originally, I started developing such an interface in Go, utilizing https://github.com/google/nftables. The conversion of string networks to nftables set elements was problematic (unfortunately I have no record of the details), so I started a second attempt in Python, where interaction is much simpler thanks to the native nftables Python bindings.
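For illustration, this is roughly how the Python bindings (py/src/nftables.py, linked under Resources) make set interaction simpler via the libnftables-json(5) schema; the table and set names here are placeholders, and the real API server would do more thorough error handling:

```python
from nftables import Nftables  # the bindings shipped in nftables' py/ tree

nft = Nftables()

def add_set_elements(family, table, set_name, addresses):
    # Build a libnftables-json(5) command that adds elements to an existing set
    cmd = {"nftables": [{"add": {"element": {
        "family": family,
        "table": table,
        "name": set_name,
        "elem": addresses,
    }}}]}
    rc, output, error = nft.json_cmd(cmd)
    if rc != 0:
        raise RuntimeError(error)
    return output

# Hypothetical usage: add two addresses to a "blocklist" set in the inet "filter" table
add_set_elements("inet", "filter", "blocklist", ["192.0.2.1", "198.51.100.7"])
```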
Goals
- Find and track the issue with google/nftables
- Revisit and polish the Go or Python code (Go is preferred, but this possibly depends on implementing missing functionality), primarily the server component
- Finish functionality to interact with nftables sets (retrieving and updating elements), which are of interest for the originating issue
- Align test suite
- Packaging
Resources
- https://git.netfilter.org/nftables/tree/py/src/nftables.py
- https://git.com.de/Georg/nftables-http-api (to be moved to GitHub)
- https://build.opensuse.org/package/show/home:crameleon:containers/pytest-nftables-container
Results
- Started a new repository: https://github.com/tacerus/nftables-http-api.
- The first Go nftables issue was related to set elements needing to be added with distinct start and end addresses. Coincidentally, this was recently discovered by someone else, who added a useful helper function for it: https://github.com/google/nftables/pull/342.
- Further improvements submitted: https://github.com/google/nftables/pull/347.
Side results
While unifying the structure and implementing more functionality, I noticed that JSON output support was missing for some subcommands in libnftables. Patches were submitted here as well:
- https://lore.kernel.org/netfilter-devel/20251203131736.4036382-2-georg@syscid.com/T/#u