Using the camsource and vlc packages as an example: camsource is configured to use a given /dev/video[n] device, with the resolution taken from the width and height fields of its conf file. If camsource is the first application to access the device, the captured images are as expected. However, if another application such as vlc is used on the same device (and exited) before camsource is started, for example to check the camera's view first, and the camera supports higher resolutions than the one in the camsource configuration, then camsource does not set up the camera for the configured resolution. Instead it produces a broken view of the camera output, based on the resolution vlc chose but chopped to the camsource configuration.

For example, with camsource configured for 640x480 on a camera that supports 720x480, if vlc is run and exited before camsource is started, the captured camsource images contain two non-contiguous partial image blocks divided by a horizontal border. I assume either vlc fails to fully reset the device configuration when exiting, or camsource fails to initialize the device from scratch when starting. The two applications use different video device APIs, but camera setup and cleanup is a very small part of each application's functionality.
Looking for hackers with the skills:
This project is part of:
Hack Week 19
Activity
Comments
almost 5 years ago by dmair
I just found the cause of this.
When camsource opens and initializes the video source it reads the configured frame size and sets the width and height in a struct video_window (V4L1). It continues as far as making the channel settings with a VIDIOCSCHAN ioctl. If that succeeds, it performs a VIDIOCGWIN ioctl to "get the video overlay window" into the same structure it previously loaded with the configured width and height, thereby replacing them with the device's current settings and rendering the configuration values meaningless. It then performs a VIDIOCSWIN ioctl to "set the video overlay window" using the structure it just filled with VIDIOCGWIN; in other words the set is redundant, it sets the overlay window to its current value. The configured frame size (and anything else populated in the struct video_window before the VIDIOCGWIN) needs to be retained and written back into the struct video_window between the VIDIOCGWIN and VIDIOCSWIN operations.
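As a rough illustration of that ordering fix, the sketch below is not the actual camsource code; set_grab_window, fd, cfg_width and cfg_height are hypothetical names, and it assumes the legacy V4L1 header linux/videodev.h is available (it only exists with older kernels/headers).

```c
/* Illustrative sketch only -- not the actual camsource code.
 * Assumes the legacy V4L1 header <linux/videodev.h>; fd, cfg_width
 * and cfg_height are hypothetical placeholders. */
#include <sys/ioctl.h>
#include <linux/videodev.h>

static int set_grab_window(int fd, int cfg_width, int cfg_height)
{
    struct video_window vidwin;

    /* Read the device's current overlay/grab window settings first... */
    if (ioctl(fd, VIDIOCGWIN, &vidwin) < 0)
        return -1;

    /* ...then re-apply the configured frame size on top of them, so the
     * values left behind by a previous application (e.g. vlc) do not
     * silently replace the configured width and height. */
    vidwin.width = cfg_width;
    vidwin.height = cfg_height;

    return ioctl(fd, VIDIOCSWIN, &vidwin);
}
```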
almost 5 years ago by dmair
The description above explains the loss of the configuration settings and the reuse of a previous application's settings on the video device, but there is more to it. There are two attempts to make video channel settings in opendev(). The first is based on device features; the second just repeats the previous VIDIOCSCHAN ioctl. The second one, however, is followed by a failure path starting with the message "ioctl set grab window failed: ...Trying again without the fps option...", which is immediately followed by a VIDIOCSWIN ioctl with the fps removed from the struct video_window. So the message claims this VIDIOCSWIN is the "again without fps" retry of a VIDIOCSCHAN, which has no fps setting in the first place.

According to the potential output messaging, what was intended was a VIDIOCSWIN with fps set and, if it fails, another VIDIOCSWIN without fps. Implemented that way, the success block of the VIDIOCSCHAN (now replaced with a VIDIOCSWIN) no longer needs to perform the VIDIOCGWIN/VIDIOCSWIN pair that overwrites the loaded configuration, and that block can be removed. Placing the VIDIOCGWIN before loading the configuration means the current device settings are replaced by the configured settings, a set-grab-window with any fps setting is performed, and if that fails it is repeated without the fps. This matches the runtime behavior described by the messages the code can generate, and it now uses the configured grab window size regardless of what was run on the same camera beforehand.
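Under that reading, the intended retry logic would look roughly like the sketch below. Again, this is not the actual camsource code: it assumes the legacy V4L1 API, that the driver carries the fps request in the video_window flags field, and fd, cfg_width, cfg_height and fps_flags are hypothetical placeholders.

```c
/* Illustrative sketch only -- not the actual camsource code.
 * Assumes the legacy V4L1 API and that the driver encodes the fps
 * request in video_window.flags; all parameter names are hypothetical. */
#include <stdio.h>
#include <sys/ioctl.h>
#include <linux/videodev.h>

static int set_grab_window_with_fps(int fd, int cfg_width, int cfg_height,
                                    unsigned int fps_flags)
{
    struct video_window vidwin;

    /* Start from the device's current window settings... */
    if (ioctl(fd, VIDIOCGWIN, &vidwin) < 0)
        return -1;

    /* ...overlay the configured frame size and the requested fps... */
    vidwin.width = cfg_width;
    vidwin.height = cfg_height;
    vidwin.flags |= fps_flags;

    /* ...try to set the grab window with the fps option first... */
    if (ioctl(fd, VIDIOCSWIN, &vidwin) == 0)
        return 0;

    fprintf(stderr, "set grab window failed, trying again without fps\n");

    /* ...and if that fails, repeat the same request without the fps bits. */
    vidwin.flags &= ~fps_flags;
    return ioctl(fd, VIDIOCSWIN, &vidwin);
}
```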
Similar Projects
Update my own python audio and video time-lapse and motion capture apps and publish by dmair
Project Description
Many years ago, in my own time, I wrote a Qt Python application that periodically captures frames from a V4L2 video device (e.g. a webcam) and used it to create daily weather time-lapse videos from windows at my home. I have maintained it at home in my own time, and this year I added motion detection, making it a functional, though unguaranteed, video security tool. I also wrote a Linux audio monitoring app in Python using Qt, again in my own time, that captures live signal strength along with a 24-hour history of audio signal level/range and audio spectrum. I recently added background noise filtering to it. In due course I aim to include voice detection, which I currently assume will be via Google's public audio interface. Neither of these is a professional home security app, but between them they let a user freely monitor video and audio data from a home in a manageable way. Both projects are on GitHub but are out of date with respect to my personal work; I would like to organize and update the GitHub versions of these projects.
Goal for this Hackweek
It would probably help to migrate all the v4l2py-based video code to linuxpy.video-based code, which looks like a rewrite of large areas of the video code. It would also be good to remove a lot of Python lint that is several years old, with the main goal being to push the recent changes, with better organized code, to GitHub. If there is enough time I'd like to take the inline Qt QSettings persistent-state code used per app and write a Python class that encapsulates the Qt QSettings class in a value_of(name)/name=value manner for shared use across projects, so that persistent state can be read or written anywhere within the apps through a simple interface.
Resources
I'm not specifically looking for help but welcome other input.