
When using APIs such as the C++ raspicam API, you poll the camera with a grab() (or similar) method, and the method returns when a frame is ready. Is there a way of checking whether the camera is ready without grabbing the frame?

This can be a command-line tool, a C++ call, a Python library, literally any method.

I ask because I have 4 Raspberry Pis with 4 cameras and want to take frame-by-frame video with each frame captured at the exact same moment in time. The cameras are not fast enough for my application to do it any other way.

user2290362

1 Answer


I think it's best to answer this question by giving some insight into how things work a little lower down. First a caveat though: I'm not a firmware expert by any stretch of the imagination; my rather rough understanding of how the Pi camera module works is based on my experience of writing the picamera library and interacting with the much more knowledgeable firmware developers on the Pi forums. If you hear contradictory information from the firmware devs, they're the authority on this, not me! With that out of the way...

As soon as the Pi's camera module is initialized it is capturing frames. These frames are (as far as the end user is concerned) dumped, but inside the camera's firmware there's a lot more going on. The frames are measured to determine the gain to apply to the sensor (AGC), the white-balance to feed to the AWB correction algorithm, and so on. For example, if you start up the camera and immediately start recording, you'll typically see the white-balance correct itself over the first few frames of the recording:

import picamera
import time

with picamera.PiCamera() as camera:
    camera.resolution = (1280, 720)
    # start recording immediately after initialization; the first few
    # frames will show the white-balance settling
    camera.start_recording('video1.h264')
    time.sleep(5)
    camera.stop_recording()

However, if you place a delay before you start recording you'll see that the white-balance is stable by the time the recording starts:

import picamera
import time

with picamera.PiCamera() as camera:
    camera.resolution = (1280, 720)
    # give the firmware a few seconds to settle AGC and AWB before
    # the recording starts
    time.sleep(5)
    camera.start_recording('video2.h264')
    time.sleep(5)
    camera.stop_recording()

So, given that the camera is always capturing frames even when we're not capturing images or recording videos, what actually happens when we elect to capture an image? We tell the firmware to activate capture, and the firmware waits for the next frame to complete before passing it back to us. (Actually, if you're capturing images from the still port instead of the video port there's a lot more going on, including mode switches, but you're concerned with the video port so let's ignore that.)
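To make that concrete, here's a minimal sketch of a video-port capture with picamera (the filename and warm-up delay are just illustrative choices, not requirements); the capture() call simply blocks until the sensor's next complete frame arrives:

import picamera
import time

with picamera.PiCamera() as camera:
    camera.resolution = (1280, 720)
    time.sleep(2)  # let AGC/AWB settle, as above
    # use_video_port=True captures from the video port, avoiding the
    # still port's mode switch; the call returns the next complete
    # frame the sensor produces
    camera.capture('frame.jpg', use_video_port=True)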

Consider what this means for synchronization (your particular use case). The camera isn't "ready" to capture a frame at any particular point; it's already capturing a frame, and when you ask for one it'll hand you the next complete one that becomes available. In order to synchronize the cameras' frames, all the cameras would have to be initialized at exactly the same time, and then their internal clocks would have to run precisely in sync (the cameras have their own internal clocks; they don't rely on the Pi's clock).
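You can observe this behaviour by timing successive capture requests. The sketch below (my own illustration; the resolution, framerate, and repetition count are arbitrary) should show the request-to-return latency jittering by up to roughly one frame period, plus encoding overhead:

import io
import time
import picamera

with picamera.PiCamera() as camera:
    camera.resolution = (1280, 720)
    camera.framerate = 30
    time.sleep(2)  # let the camera warm up
    for i in range(5):
        stream = io.BytesIO()
        start = time.time()
        # each request returns the next complete frame, so the wait
        # varies by up to one frame period (~33ms at 30fps), plus
        # JPEG encoding time
        camera.capture(stream, format='jpeg', use_video_port=True)
        print('capture %d returned after %.1f ms' % (i, (time.time() - start) * 1000))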

Sadly, I don't think this really is a realistic prospect. If I recall correctly, the Pi compute module (which has 2 camera ports on-board and supports 2 camera modules simultaneously) uses some special calls in the firmware to get the 2 modules to use a single clock signal (I have no idea how this works at the hardware level but I assume it's using something specific to the compute module); I can't imagine how you'd do something similar across 4 Pis.

Update:

I should add that it is possible to do rough synchronization with some reasonable networking knowledge (e.g. UDP broadcast packets). In other words, it's possible to get all the Pis on a network to trigger a capture within a millisecond of each other (assuming a decent low-latency network like Ethernet), but as described above that still won't guarantee that all the cameras actually capture a frame at the same time; there'll be up to a frame's worth of lag (plus network latency) between the start times of the resulting captures.
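As a rough sketch of that approach (this is not compoundpi's actual protocol; the port number and trigger message here are arbitrary assumptions for illustration), each Pi could run a listener like this:

import socket
import picamera

PORT = 5647           # arbitrary port for this sketch
TRIGGER = b'capture'  # arbitrary trigger message

with picamera.PiCamera() as camera:
    camera.resolution = (1280, 720)
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    sock.bind(('', PORT))
    while True:
        data, addr = sock.recvfrom(1024)
        if data == TRIGGER:
            # every Pi hears the broadcast within network latency of
            # the others, but each still waits for its own camera's
            # next complete frame (up to a frame's worth of lag)
            camera.capture('capture.jpg', use_video_port=True)

The controlling machine then fires all the listeners at once with a single broadcast, something like sock.sendto(b'capture', ('255.255.255.255', 5647)) on a socket with SO_BROADCAST enabled.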

If that level of synchronization is enough for your purposes, you may want to check out the compoundpi project, which is another project I wrote on top of picamera for just this purpose.

Dave Jones
  • Can you tell us about multi-camera "frame sync" in still mode (not video)? I guess the sensor may again be running in a "free running" mode even for stills, just at full resolution and a lower FPS (maybe 15 FPS? That would give longer frame lags than 30 FPS video). Are you able to confirm this assumption? I am interested in a C++ solution, as Python only adds a level of time-uncertainty on top of that... – Kozuch Jan 26 '17 at 09:10
  • In the intervening years I've learned quite a bit more and should probably update this answer at some point. For starters the assertion that there's sync on the compute module's dual cameras is wrong: there isn't, they're just started synchronously and will eventually (over several hours) drift apart. On stills, the camera is streaming frames until the capture but then does a mode switch to sensor mode 2 or 3 (framerate dependent) during the capture. – Dave Jones Jan 26 '17 at 09:20
  • I've been writing an expanded version of the camera hardware chapter for the next picamera release based on feedback from the camera firmware devs - might be worth a read through (though it's still not complete) as it covers some of this detail. – Dave Jones Jan 26 '17 at 09:21
  • Your docs are quite an extensive read but I don't have the resources to deep-dive into them now - I did just a quick read. I see there are both video and still modes (still port). We know about the video port (free-running sensor), but can you explain what the still port means and how it works? Can it somehow be used for a more precise trigger (less shutter lag) than the video port, maybe? I asked the raspicam C++ devs about the same topic but don't have an answer yet. – Kozuch Jan 26 '17 at 09:38
  • No: the still port is just an MMAL artifact that causes a different imaging pipeline on the GPU to produce "better" still output. Hence, when using the still port to capture, the sensor mode is temporarily switched, a stronger denoise algorithm is used, etc. It won't give you any difference in shutter lag (the mode switch will probably complicate it if anything). – Dave Jones Jan 26 '17 at 09:42
  • Ok, so the sensor is running freely even for stills. This is bad but understandable given the grade of the camera. However, for frame sync or predictable shutter lag I think the sensor initialization may be used (you say the stereoscopic mode uses it - 2 sensors will drift after some time, but this is not a problem over a small timeframe). I guess the PiFace bullet-photo setup uses this - they simply call raspistill at the same time on all cameras. This can be used for predictable shutter lag too - such behavior may be OK for many people who are looking for synchronization... – Kozuch Jan 26 '17 at 09:50
  • Yes - if you can start all the sensors at more or less the same instant they can be synchronized (for a while). This has been done even with large rigs (I've seen it done on a 100 camera rig). As long as everything's got the same camera firmware the shutter lag is generally similar enough to achieve decent sync. – Dave Jones Jan 26 '17 at 09:53