an invention by jordimassaguerpla
For my 40th birthday, my friends gave me a very special present: a USB Accelerator that brings machine learning inferencing to existing systems:
https://coral.ai/products/accelerator
From its website:
> The on-board Edge TPU coprocessor is capable of performing 4 trillion operations (tera-operations) per second (TOPS), using 0.5 watts for each TOPS (2 TOPS per watt). For example, it can execute state-of-the-art mobile vision models such as MobileNet v2 at 400 FPS, in a power efficient manner. See more performance benchmarks.
So I am going to connect this through the USB port to my NAS system:
https://www.qnap.com/en/product/ts-231p
Then, using the Container Station:
https://www.qnap.com/solution/container_station/en-us/
I will install this container:
https://hub.docker.com/r/lemariva/raspbian-edgetpu
So that I will have a Jupyter notebook available to run this machine learning algorithm on the TPU:
https://github.com/jantic/DeOldify
From its webpage:
>Simply put, the mission of this project is to colorize and restore old images and film footage.
And finally, I have some old photos from "la Selva del Camp" that I would like to colorize.
Sounds fun, doesn't it?
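For reference, the colorization step in DeOldify looks roughly like this. This is a minimal sketch based on the project's example notebooks, with a placeholder photo path; exact function names may differ between versions:

```python
# Minimal sketch based on DeOldify's example notebooks; the photo path is a placeholder.
from deoldify import device
from deoldify.device_id import DeviceId

# The NAS has no NVIDIA GPU, so DeOldify would have to run on the CPU there
device.set(device=DeviceId.CPU)

from deoldify.visualize import get_image_colorizer

colorizer = get_image_colorizer(artistic=True)
# Render and display a colorized version of one old photo
colorizer.plot_transformed_image(path='old_photos/la_selva_del_camp.jpg',
                                 render_factor=35)
```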
This project is part of:
Hack Week 19
Comments
-
almost 6 years ago by jordimassaguerpla
How to run the container:
docker run -d --privileged -p 2222:22 -p 3333:8080 -p 4444:8888 -e PASSWORD=secret --restart unless-stopped -v /dev/bus/usb:/dev/bus/usb lemariva/raspbian-edgetpu
-
almost 6 years ago by jordimassaguerpla
How to test it works:

ssh -p 2222 root@NAS (password: root)
mkdir tmp
cd tmp && git clone https://github.com/google-coral/tflite.git
cd tflite/python/examples/classification
./install_requirements.sh
python3 classify_image.py --model models/mobilenet_v2_1.0_224_inat_bird_quant_edgetpu.tflite --labels models/inat_bird_labels.txt --input images/parrot.jpg

And you should see:

INFO: Initialized TensorFlow Lite runtime.

Otherwise, if you see:

ValueError: Failed to load delegate from libedgetpu.so.1

it means the USB accelerator is either not connected or not detected.
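A quicker sanity check from Python (assuming the container ships the tflite_runtime package, otherwise install it first) is to try loading the Edge TPU delegate directly:

```python
import tflite_runtime.interpreter as tflite

try:
    # Succeeds only if the Edge TPU runtime can see the USB accelerator
    tflite.load_delegate('libedgetpu.so.1')
    print('Edge TPU delegate loaded')
except ValueError as error:
    print('Edge TPU not detected:', error)
```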
-
almost 6 years ago by jordimassaguerpla
Next step: connect to the Jupyter notebook at:
https://192.168.1.32:4444
Then, as a test, I uploaded the classification files and created a new Jupyter notebook based on the classify_image example.
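The notebook cell mirrors what classify_image.py does. Here is a rough sketch of that flow with the tflite_runtime interpreter and the Edge TPU delegate, reusing the model, labels and image paths from the test above (simplified, so details may differ from the real example):

```python
import numpy as np
from PIL import Image
import tflite_runtime.interpreter as tflite

# Build an interpreter that offloads the quantized model to the Edge TPU
interpreter = tflite.Interpreter(
    model_path='models/mobilenet_v2_1.0_224_inat_bird_quant_edgetpu.tflite',
    experimental_delegates=[tflite.load_delegate('libedgetpu.so.1')])
interpreter.allocate_tensors()

input_details = interpreter.get_input_details()[0]
output_details = interpreter.get_output_details()[0]

# Resize the test image to the model's input size (224x224 for this MobileNet v2)
_, height, width, _ = input_details['shape']
image = Image.open('images/parrot.jpg').convert('RGB').resize((width, height))
interpreter.set_tensor(input_details['index'],
                       np.expand_dims(np.asarray(image, dtype=np.uint8), 0))

interpreter.invoke()

scores = np.squeeze(interpreter.get_tensor(output_details['index']))
# Look the winning index up in models/inat_bird_labels.txt for the species name
print('Top class:', int(np.argmax(scores)), 'score:', int(scores.max()))
```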
-
almost 6 years ago by jordimassaguerpla
After a day spent compiling Python native extensions for ARM (PyTorch and other math-heavy Python packages), since the NAS has an ARM processor, I finally had all the dependencies installed and tried to run the ImageColorizer notebook.
Unfortunately, I got this error message:
RuntimeError: [enforce fail at CPUAllocator.cpp:64] . DefaultCPUAllocator: can't allocate memory: you tried to allocate 37632 bytes. Error code 12 (Cannot allocate memory)
So, not enough memory on my NAS to run this algorithm :(
I also suspect that PyTorch is not using the TPU, as the TPU works with TensorFlow Lite libraries...
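That suspicion is easy to confirm from the notebook: the Edge TPU only accelerates TensorFlow Lite models and there is no NVIDIA GPU on the NAS, so a quick check like this sketch would show PyTorch falling back to the ARM CPU:

```python
import torch

# The Edge TPU is not a PyTorch backend and the NAS has no NVIDIA GPU,
# so DeOldify ends up running entirely on the ARM CPU
print('CUDA available:', torch.cuda.is_available())        # False on the NAS
print('CPU threads for PyTorch:', torch.get_num_threads())
```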
Thus, I will try to run this algorithm on a workstation with an NVIDIA card... and for this project... we can consider it done :(
Similar Projects
Song Search with CLAP by gcolangiuli
Description
Contrastive Language-Audio Pretraining (CLAP) is an open-source library that enables training a neural network on both audio and text descriptions, making it possible to search for audio using a text input. Several pre-trained models for song search are already available on Hugging Face.
Goals
Evaluate how CLAP can be used for song searching and determine which types of queries yield the best results by developing a Minimum Viable Product (MVP) in Python. Based on the results of this MVP, future steps could include:
- Music Tagging;
- Free text search;
- Integration with an LLM (for example, with MCP or the OpenAI API) for music suggestions based on your own library.
The code for this project will be entirely written using AI to better explore and demonstrate AI capabilities.
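As a rough illustration of the search step described above, here is a sketch using the Hugging Face transformers CLAP classes; the model name, file names and preprocessing details are assumptions to be validated in the MVP:

```python
import librosa
import torch
from transformers import ClapModel, ClapProcessor

# Model name, file names and preprocessing are illustrative assumptions
model = ClapModel.from_pretrained('laion/clap-htsat-unfused')
processor = ClapProcessor.from_pretrained('laion/clap-htsat-unfused')

# Embed one song (CLAP models expect 48 kHz mono audio)
audio, _ = librosa.load('song.mp3', sr=48000, mono=True)
audio_inputs = processor(audios=[audio], sampling_rate=48000, return_tensors='pt')
with torch.no_grad():
    audio_embedding = model.get_audio_features(**audio_inputs)

# Embed a free-text query and rank songs by cosine similarity
text_inputs = processor(text=['a melancholic acoustic guitar ballad'],
                        return_tensors='pt')
with torch.no_grad():
    text_embedding = model.get_text_features(**text_inputs)

print(torch.nn.functional.cosine_similarity(text_embedding, audio_embedding))
```

In a real library-scale search, the audio embeddings would be computed once and stored, so each free-text query only needs one text embedding plus a similarity lookup.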
Resources
- CLAP: The main model being researched;
- huggingface: Pre-trained models for CLAP;
- Free Music Archive: Creative Commons songs that can be used for testing;
- Colab: To be used as the development environment;
- hw25-song-search: GitHub repo of the project.