18

I'm working on a project in which I need to take about 30 images per second (no movie) using the Raspberry Pi camera module.

I'm using the Picamera library (http://picamera.readthedocs.org/en/latest/api.html) for that, but the problem is that taking a picture takes about 0.2 - 0.4 seconds, which is way too long. I have already set the use_video_port property to True, which helped a bit, but the time is still too long.

How can I take pictures in a short time (about 0.025s each) using Python and the Raspberry Pi camera module?

jsotola
Timo Denk

2 Answers

22

To take pictures in 0.025s with picamera you'll need a frame rate greater than or equal to 80fps. The reason for requiring 80 rather than 40fps (given that 1/0.025 = 40) is that there's currently an issue which causes every other frame to be skipped in the multi-image encoder, so the effective capture rate winds up as half the camera's framerate.

The Pi's camera module is capable of 80fps in later firmwares (see camera modes in the picamera docs), but only at a VGA resolution (requests for higher resolutions with framerates >30fps will result in upscaling from VGA to the requested resolution, so this is a limitation you'd face even at 40fps). The other problem you'll likely encounter is SD card speed limitations. In other words, you'll probably need to capture to something faster like a network port or in-memory streams (assuming all the images you need to capture will fit in RAM).

The following script gets me a capture rate of ~38fps (i.e. just above 0.025s per pic) on a Pi with overclocking set to 900MHz:

import io
import time
import picamera

with picamera.PiCamera() as camera:
    # Set the camera's resolution to VGA @80fps and give it a couple
    # of seconds to measure exposure etc.
    camera.resolution = (640, 480)
    camera.framerate = 80
    time.sleep(2)
    # Set up 40 in-memory streams
    outputs = [io.BytesIO() for i in range(40)]
    start = time.time()
    camera.capture_sequence(outputs, 'jpeg', use_video_port=True)
    finish = time.time()
    # How fast were we?
    print('Captured 40 images at %.2ffps' % (40 / (finish - start)))

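Once capture_sequence returns, each io.BytesIO stream in outputs holds one complete JPEG, so writing them to disk afterwards (rather than during capture) keeps the capture loop fast. A minimal sketch — the helper name save_streams is my own, not part of the picamera API:

```python
def save_streams(streams, prefix='image'):
    """Write each in-memory JPEG stream to a numbered file on disk."""
    filenames = []
    for i, stream in enumerate(streams):
        name = '%s%02d.jpg' % (prefix, i)
        with open(name, 'wb') as f:
            # getvalue() returns the stream's entire contents regardless
            # of the current read/write position
            f.write(stream.getvalue())
        filenames.append(name)
    return filenames

# e.g. save_streams(outputs) after the capture_sequence call above
```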
If you wish to do something in between each frame, this is possible even with capture_sequence by providing a generator function instead of a list of outputs:

import io
import time
import picamera
#from PIL import Image

def outputs():
    stream = io.BytesIO()
    for i in range(40):
        # This returns the stream for the camera to capture to
        yield stream
        # Once the capture is complete, the loop continues here
        # (read up on generator functions in Python to understand
        # the yield statement). Here you could do some processing
        # on the image...
        #stream.seek(0)
        #img = Image.open(stream)
        # Finally, reset the stream for the next capture
        stream.seek(0)
        stream.truncate()

with picamera.PiCamera() as camera:
    camera.resolution = (640, 480)
    camera.framerate = 80
    time.sleep(2)
    start = time.time()
    camera.capture_sequence(outputs(), 'jpeg', use_video_port=True)
    finish = time.time()
    print('Captured 40 images at %.2ffps' % (40 / (finish - start)))

Bear in mind that in the example above, the processing is occurring serially before the next capture (i.e. any processing you do will necessarily delay the next capture). It is possible to reduce this latency with threading tricks but doing so involves a certain amount of complexity.

You may also wish to look into unencoded captures for processing (which remove the overhead of encoding and then decoding JPEGs). However, bear in mind that the Pi's CPU is small (especially compared to the VideoCore GPU). While you may be able to capture at 40fps, there is no way you're going to be able to perform any serious processing of those frames at 40fps even with all the tricks mentioned above. The only realistic way of performing frame processing at that rate is to ship the frames over a network to a faster machine, or perform the processing on the GPU.
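To give a sense of the data rate unencoded captures involve: per the picamera docs, unencoded (YUV420) captures pad the resolution up to a multiple of 32 horizontally and 16 vertically, and each frame then occupies 1.5 bytes per padded pixel. At VGA that works out to 460,800 bytes per frame, or roughly 18 MB/s at 40fps. The helper below is my own sketch of that arithmetic, not a picamera function:

```python
def yuv420_buffer_size(width, height):
    # picamera pads unencoded captures: width up to a multiple of 32,
    # height up to a multiple of 16
    fwidth = (width + 31) // 32 * 32
    fheight = (height + 15) // 16 * 16
    # YUV420: one full-resolution Y plane plus two quarter-resolution
    # chroma planes = 1.5 bytes per pixel
    return fwidth * fheight * 3 // 2

# VGA needs no padding: 640 and 480 are already aligned
print(yuv420_buffer_size(640, 480))  # 460800 bytes per frame
```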

Dave Jones
  • Thanks for your fast reply! But in your program I will not be able to process the individual pictures while .capture_sequence runs, right? Is there a way to do this? Because I need to work with every individual picture before the next is taken. – Timo Denk Aug 05 '14 at 06:52
  • 1
    Amended the answer to include a method of performing processing between frames with a generator function. – Dave Jones Aug 06 '14 at 01:15
  • .capture_sequence appears to ignore KeyboardInterrupts. Do you know how to work around this? – Cerin Oct 19 '15 at 05:36
  • @Cerin what would the power consumption on something like this be? – Ted Taylor of Life Aug 30 '16 at 20:03
  • The fps is fast for this solution but how to save images to files from stream? – Lightsout Jun 29 '18 at 00:25
  • @Dave Jones: I know this thread is very old, but I have a question: Does this save the images to a certain file location? I can't find any saved images. – Rusty May 24 '21 at 16:09
  • @Rusty No, it's saving each frame to a temporary in-memory stream (io.BytesIO) - the comments in the second example demonstrate how you might open and process data from one of these streams – Dave Jones May 24 '21 at 19:16
4

According to this StackOverflow answer you can use gstreamer and the following command to accomplish what you want:

raspivid -n -t 1000000 -vf -b 2000000 -fps 25 -o - | gst-launch-1.0 fdsrc ! video/x-h264,framerate=25/1,stream-format=byte-stream ! decodebin ! videorate ! video/x-raw,framerate=10/1 ! videoconvert ! jpegenc ! multifilesink location=img_%04d.jpg

This command pipes the H.264 video output of raspivid (captured at 25 frames per second) into gstreamer, which decodes it, drops the rate to 10 frames per second, and encodes the resulting frames as individual JPEG images.

This article gives instructions on how to install gstreamer1.0 from an alternate repository.

HeatfanJohn