PiCam

connecting the camera

Gently lift the black clip by pushing it upward. Insert the camera's ribbon cable firmly down into the connector; the blue side goes towards the headphone jack! Push the clip down again.

See: connect-camera.jpg

setup

warning: this info is in progress -- may not (yet) represent a useful thing

  • ok, this link turned out to be totally misleading -- in fact you (maybe?) just need to add a few lines to /boot/config.txt

(the audio line is unrelated to the camera, but may be useful -- it loads the snd_bcm2835 driver so onboard sound works; the camera itself is handled by the start_x and gpu_mem lines)

/boot/config.txt

# Enable audio (loads snd_bcm2835)
dtparam=audio=on
# Enable the camera firmware and give the GPU enough memory to use it
start_x=1
gpu_mem=128
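
Changes to /boot/config.txt only take effect after a reboot:

 sudo reboot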


test the camera

 raspistill -o test.jpg
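
If raspistill complains that the camera is not enabled or not detected, the firmware's own status check is a quick way to confirm the cable and config (you want to see supported=1 detected=1):

 vcgencmd get_camera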

connecting via python

See this tutorial on raspberrypi.com. The bird box tutorial is also quite interesting, particularly regarding how the camera responds to infrared, and the fact that the "fixed focus" camera can be adapted to focus manually at various distances.

from picamera import PiCamera
from time import sleep

camera = PiCamera()
camera.rotation = 270   # rotate to match how the camera is mounted
camera.start_preview()
sleep(10)
camera.stop_preview()
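
A minimal follow-up sketch for saving a still rather than only previewing it (the output path is just an example):

from picamera import PiCamera
from time import sleep

camera = PiCamera()
camera.rotation = 270
camera.start_preview()
sleep(2)                          # give the sensor a moment to settle exposure
camera.capture('/tmp/test.jpg')   # example path -- save it wherever you like
camera.stop_preview()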

using python + opencv

 apt install python-opencv python-picamera

This [http://www.pyimagesearch.com/2015/03/30/accessing-the-raspberry-pi-camera-with-opencv-and-python/ tutorial on pyimagesearch.com] is very useful. Also the official picamera docs include a recipe for capturing to an opencv array.
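
That recipe boils down to roughly this for a single frame (the resolution here is just an example):

from picamera import PiCamera
from picamera.array import PiRGBArray

camera = PiCamera()
camera.resolution = (320, 240)       # example resolution
raw = PiRGBArray(camera)
camera.capture(raw, format="bgr")    # "bgr" so the array matches opencv's channel order
img = raw.array                      # a numpy array that opencv functions accept directly
print(img.shape)                     # e.g. (240, 320, 3)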

Adapting the opt_flow.py opencv sample to do motion detection on the pi.

from picamera.array import PiRGBArray
from picamera import PiCamera
import time
import cv2
import numpy as np

# initialize the camera and grab a reference to the raw camera capture
camera = PiCamera()
# framesize = (640, 480)
framesize = (160, 120)

camera.resolution = framesize
camera.framerate = 32
rawCapture = PiRGBArray(camera, size=framesize)
 
# allow the camera to warmup
time.sleep(0.25)

prevgray = None
for frame in camera.capture_continuous(rawCapture, format="bgr", use_video_port=True):
    img = frame.array
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    if prevgray is not None:
        flow = cv2.calcOpticalFlowFarneback(prevgray, gray, 0.5, 3, 15, 3, 5, 1.2, 0)
        fx = flow[:,:,0]
        fy = flow[:,:,1]
        mag = np.sqrt(fx*fx + fy*fy)   # magnitude of the flow vector at each pixel
        mag = np.nan_to_num(mag)
        print(mag.max())
        # print mag[0][0], mag[10][10]
    prevgray = gray
 
    # clear the stream in preparation for the next frame
    rawCapture.truncate(0)

Sending the data from python to puredata via OSC

Using Mr. Stock's classic OSC.py module. You can simply download it and place it in the same folder as your python script (no need to pip install if you don't wanna).
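
For reference, the bare minimum to send a message with OSC.py looks something like this (the host, port and /optflow address match what the script below uses):

import OSC

client = OSC.OSCClient()
client.connect(("localhost", 8001))   # wherever pd is listening
msg = OSC.OSCMessage("/optflow")
msg.append(42)                        # any test value
client.send(msg)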

( This code is for opencv's camera interface, TODO: adjust for picamera )

#!/usr/bin/env python

from __future__ import print_function
import numpy as np
import cv2, sys
from argparse import ArgumentParser

ap = ArgumentParser(description="use opencv to do optical flow on the camera input")
ap.add_argument("--show", default=False, action="store_true", help="show it")
ap.add_argument("--print", default=False, action="store_true", help="print values to stdout")
ap.add_argument("--video", default=0, help="video source")
ap.add_argument("--width", default=640, type=int, help="video width")
ap.add_argument("--height", default=480, type=int, help="video height")
ap.add_argument("--sendosc", default=False, action="store_true", help="send OSC messages")
ap.add_argument("--oschost", default="localhost", help="default: localhost")
ap.add_argument("--oscport", default=8001, help="default: 8001")

args = ap.parse_args()

if args.sendosc:
    import OSC

    client = OSC.OSCClient()
    client.connect((args.oschost, args.oscport))
    print ("OSC: Connected", file=sys.stderr)

cam = cv2.VideoCapture(args.video)
cam.set(cv2.cv.CV_CAP_PROP_FRAME_WIDTH, args.width)
cam.set(cv2.cv.CV_CAP_PROP_FRAME_HEIGHT, args.height)
prevgray = None

def draw_flow(img, flow, step=16):
    h, w = img.shape[:2]
    y, x = np.mgrid[step/2:h:step, step/2:w:step].reshape(2,-1)
    fx, fy = flow[y,x].T
    lines = np.vstack([x, y, x+fx, y+fy]).T.reshape(-1, 2, 2)
    lines = np.int32(lines + 0.5)
    vis = cv2.cvtColor(img, cv2.COLOR_GRAY2BGR)
    cv2.polylines(vis, lines, 0, (0, 255, 0))
    for (x1, y1), (x2, y2) in lines:
        cv2.circle(vis, (x1, y1), 1, (0, 255, 0), -1)
    return vis

while True:
    ret, img = cam.read()
    if not ret:
        break
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    if prevgray is not None:
        flow = cv2.calcOpticalFlowFarneback(prevgray, gray, 0.5, 3, 15, 3, 5, 1.2, 0)
        fx = flow[:,:,0]
        fy = flow[:,:,1]
        mag = np.sqrt(fx*fx + fy*fy)   # magnitude of the flow vector at each pixel
        mag = np.nan_to_num(mag)

        maxv = int(mag.max())
        total = int(mag.sum())
        if args.print:
            print (maxv, total)

        if args.sendosc:
            msg = OSC.OSCMessage("/optflow")
            msg.extend([maxv, total])
            client.send(msg)

        if args.show:
            cv2.imshow('flow', draw_flow(gray, flow))
            ch = 0xFF & cv2.waitKey(5)
            if ch == 27:
                break

    prevgray = gray

cv2.destroyAllWindows()
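
Assuming the script is saved as, say, optflow_osc.py (the name is arbitrary), it can be run with the flags defined above, e.g.:

 python optflow_osc.py --sendosc --oscport 8001 --print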

UNCLASSIFIED NOTES

remember to specify udp for your pdsend (otherwise it tries tcp and says connection refused ... annoying)

 pdsend 8001 localhost udp