Setting up a DigitalOcean server to host your website/domain

I started using DigitalOcean (DO) when my previous hosting provider suddenly raised its prices. I already had experience setting up servers myself, so I gave DO a chance: it’s not that expensive, and setting up a server for myself sounded like a fun experiment. It wasn’t very complicated, as there are many DO tutorials scattered around.

I’m writing this post to document all the tutorials I used to set up my server and domain, and to share my referral link, which gives you an initial $10 credit to start with; once you spend $25 on the platform, the referrer gets $25 as well. My referral link is https://m.do.co/c/0b71ad4b3f4b , if you decide to use it.

First you will need to create an account (you can do so by clicking the link above to get the $10 credit).

After you have an account, follow the steps here to create an SSH key used for logging in to your future server, and read what’s below when you are about to create your new server.


Tip: When using PuTTY you can paste text from the clipboard by right-clicking.


You can choose any droplet configuration you want, or even a pre-configured server, but for now I’ll stick to a blank Ubuntu 16.04 x64 server. At the time of writing the latest version is 16.04.3.

I’ll choose the $5 per month plan as it is enough for my webserver. You can upgrade this later if you need to.

I chose the NYC datacenter, but you can choose one that’s near you.

Choose the SSH key you added.

Choose a hostname and click the CREATE button at the bottom.


That’s it for getting your droplet. Now you can follow these tutorials to set up the webserver (I’m not the owner of the content in the links):

  1. Initial Server Setup with Ubuntu 16.04
    • When you get to the step for adding SSH access for the new user, use Option 2 and copy the public key’s text straight from PuTTYgen.
    • Then, edit your PuTTY configuration to log in as your new user instead of as root (otherwise there’s no point in adding a new user).
  2. How To Set Up a Host Name with DigitalOcean
    • I just added an A record (@) and a CNAME record (*).
  3. How To Install Linux, Apache, MySQL, PHP (LAMP) stack on Ubuntu 16.04
  4. How To Secure Apache with Let’s Encrypt on Ubuntu 16.04
  5. How To Use Filezilla to Transfer and Manage Files Securely on your VPS
  6. Optional: How To Install WordPress on Ubuntu 14.04

Important: You need to change permissions on the /var/www/html/ folder to add files through SFTP (source):

  1. Change the directory owner and group:
    sudo chown -R www-data:www-data /var/www/html
  2. Allow the group to write to the directory with appropriate permissions:
    sudo chmod -R 775 /var/www/html
  3. Add your user to the www-data group:
    sudo usermod -a -G www-data [your username]

Now just upload your webpage to your server’s /var/www/html/ folder and you’re done!

Drones – Update Eachine i6 firmware

I’m writing this just as a record since I struggled a lot to bind my Eachine i6 transmitter to the receiver included in the Eachine EX100 (iRangeX FS-RX). The main problem was that the receiver communicates with the flight controller through PPM (UART3 port in FC) and the i6 doesn’t support PPM with the standard firmware.

What I had to do was upload the FlySky i6 firmware to the transmitter, set up PPM, and bind to the receiver.

To upload new firmware, go here and download the entire repo as a zip.

Then, using a USB-UART converter, hook up the transmitter with this pinout:

eachine-i6-pinout

From the download, open: FlySky-i6-Mod--master\Firmwares\Flysky i6 Original Firmware\FS-i6-Original -Firmware.exe

Open the port, then program.

That’s it; the new firmware should be loaded.

To set up the transmitter: turn it on, hold the OK button to enter the menu, scroll down to RX Setup, press OK, select AFHDS, press OK, select OFF, then long-press Cancel to save. Turn off the transmitter.

Now, connect the BIND pin on the receiver to ground and power it up; the LED should start blinking.

Then, while holding the BIND button on the transmitter, turn it on. After a moment, the LED on the receiver should stop blinking and stay on. On the transmitter, long-press CANCEL. Then turn off the transmitter, turn off the receiver, and remove the BIND-GND pin connection.

The TX/RX should be bound now.


People Counter 9 – Counting

Last chapter! Again, sorry for the delays...

Last time I showed you how to follow an object’s movement, although really it’s only saving a list of that object’s previous coordinates. Now, we have to take a look at that list and determine if the object’s moving up or down in the image.

To do this I’ll first create two imaginary lines that’ll indicate when to evaluate the object’s direction (line_up, line_down).

I also set two limiting lines to tell my code when to stop tracking an object (up_limit, down_limit).

I also use two methods on the Person class, going_UP(a,b) and going_DOWN(a,b). Both receive line_down and line_up and return True if the object has crossed line_up or line_down in the correct direction. If so, a counter is incremented... and we’re counting people.

Also, the Person class has a State attribute, which is used to know when the object is outside the counting limits of the image so its allocated memory can be released.
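In case it helps, here is a rough sketch of how those direction checks might look. This is an illustration of the idea rather than the exact Person.py from this tutorial; it assumes the class stores its position history in self.tracks and its state in self.state:

class MyPerson:
    # ... constructor and getters omitted, see Person.py ...
    def going_UP(self, mid_start, mid_end):
        #mid_start = line_down, mid_end = line_up
        if len(self.tracks) >= 2 and self.state == '0':
            #crossed line_up moving up (y decreases when moving up in image coordinates)
            if self.tracks[-1][1] < mid_end and self.tracks[-2][1] >= mid_end:
                self.state = '1'
                self.dir = 'up'
                return True
        return False
    def going_DOWN(self, mid_start, mid_end):
        if len(self.tracks) >= 2 and self.state == '0':
            #crossed line_down moving down
            if self.tracks[-1][1] > mid_start and self.tracks[-2][1] <= mid_start:
                self.state = '1'
                self.dir = 'down'
                return True
        return False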

Here’s the code:

##People counter
##Federico Mejia
import numpy as np
import cv2
import Person
import time

#Entry and exit counters
cnt_up   = 0
cnt_down = 0

#Video source
#cap = cv2.VideoCapture(0)
cap = cv2.VideoCapture('peopleCounter.avi')

#Video properties
##cap.set(3,160) #Width
##cap.set(4,120) #Height

#Print the capture properties to the console
for i in range(19):
    print i, cap.get(i)

w = cap.get(3)
h = cap.get(4)
frameArea = h*w
areaTH = frameArea/250
print 'Area Threshold', areaTH

#Entry/exit lines
line_up = int(2*(h/5))
line_down   = int(3*(h/5))

up_limit =   int(1*(h/5))
down_limit = int(4*(h/5))

print "Red line y:",str(line_down)
print "Blue line y:", str(line_up)
line_down_color = (255,0,0)
line_up_color = (0,0,255)
pt1 =  [0, line_down]
pt2 =  [w, line_down]
pts_L1 = np.array([pt1,pt2], np.int32)
pts_L1 = pts_L1.reshape((-1,1,2))
pt3 =  [0, line_up]
pt4 =  [w, line_up]
pts_L2 = np.array([pt3,pt4], np.int32)
pts_L2 = pts_L2.reshape((-1,1,2))

pt5 =  [0, up_limit]
pt6 =  [w, up_limit]
pts_L3 = np.array([pt5,pt6], np.int32)
pts_L3 = pts_L3.reshape((-1,1,2))
pt7 =  [0, down_limit]
pt8 =  [w, down_limit]
pts_L4 = np.array([pt7,pt8], np.int32)
pts_L4 = pts_L4.reshape((-1,1,2))

#Background subtractor
fgbg = cv2.createBackgroundSubtractorMOG2(detectShadows = True)

#Structuring elements for the morphological filters
kernelOp = np.ones((3,3),np.uint8)
kernelOp2 = np.ones((5,5),np.uint8)
kernelCl = np.ones((11,11),np.uint8)

#Variables
font = cv2.FONT_HERSHEY_SIMPLEX
persons = []
max_p_age = 5
pid = 1

while(cap.isOpened()):
##for image in camera.capture_continuous(rawCapture, format="bgr", use_video_port=True):
    #Read a frame from the video source
    ret, frame = cap.read()
##    frame = image.array

    for i in persons:
        i.age_one() #age every person one frame
    #########################
    #    PRE-PROCESSING     #
    #########################
    
    #Apply background subtraction
    fgmask = fgbg.apply(frame)
    fgmask2 = fgbg.apply(frame)

    #Binarization to remove shadows (gray pixels)
    try:
        ret,imBin= cv2.threshold(fgmask,200,255,cv2.THRESH_BINARY)
        ret,imBin2 = cv2.threshold(fgmask2,200,255,cv2.THRESH_BINARY)
        #Opening (erode->dilate) to remove noise.
        mask = cv2.morphologyEx(imBin, cv2.MORPH_OPEN, kernelOp)
        mask2 = cv2.morphologyEx(imBin2, cv2.MORPH_OPEN, kernelOp)
        #Closing (dilate->erode) to join white regions.
        mask =  cv2.morphologyEx(mask , cv2.MORPH_CLOSE, kernelCl)
        mask2 = cv2.morphologyEx(mask2, cv2.MORPH_CLOSE, kernelCl)
    except:
        print('EOF')
        print 'UP:',cnt_up
        print 'DOWN:',cnt_down
        break
    #################
    #   CONTOURS    #
    #################
    
    # RETR_EXTERNAL returns only extreme outer flags. All child contours are left behind.
    _, contours0, hierarchy = cv2.findContours(mask2,cv2.RETR_EXTERNAL,cv2.CHAIN_APPROX_SIMPLE)
    for cnt in contours0:
        area = cv2.contourArea(cnt)
        if area > areaTH:
            #################
            #   TRACKING    #
            #################
            
            #TODO: add conditions for multiple people, and for objects entering/leaving the frame.
            
            M = cv2.moments(cnt)
            cx = int(M['m10']/M['m00'])
            cy = int(M['m01']/M['m00'])
            x,y,w,h = cv2.boundingRect(cnt)

            new = True
            if cy in range(up_limit,down_limit):
                for i in persons:
                    if abs(cx-i.getX()) <= w and abs(cy-i.getY()) <= h:
                        # the object is close to one that was already detected
                        new = False
                        i.updateCoords(cx,cy)   #update the object's coordinates and reset its age
                        if i.going_UP(line_down,line_up) == True:
                            cnt_up += 1
                            print "ID:",i.getId(),'crossed going up at',time.strftime("%c")
                        elif i.going_DOWN(line_down,line_up) == True:
                            cnt_down += 1
                            print "ID:",i.getId(),'crossed going down at',time.strftime("%c")
                        break
                    if i.getState() == '1':
                        if i.getDir() == 'down' and i.getY() > down_limit:
                            i.setDone()
                        elif i.getDir() == 'up' and i.getY() < up_limit:
                            i.setDone()
                    if i.timedOut():
                        #remove i from the persons list
                        index = persons.index(i)
                        persons.pop(index)
                        del i     #free i's memory
                if new == True:
                    p = Person.MyPerson(pid,cx,cy, max_p_age)
                    persons.append(p)
                    pid += 1     
            #################
            #   DRAWING     #
            #################
            cv2.circle(frame,(cx,cy), 5, (0,0,255), -1)
            img = cv2.rectangle(frame,(x,y),(x+w,y+h),(0,255,0),2)            
            #cv2.drawContours(frame, cnt, -1, (0,255,0), 3)
            
    #END for cnt in contours0
            
    #########################
    #   DRAW TRAJECTORIES   #
    #########################
    for i in persons:
##        if len(i.getTracks()) >= 2:
##            pts = np.array(i.getTracks(), np.int32)
##            pts = pts.reshape((-1,1,2))
##            frame = cv2.polylines(frame,[pts],False,i.getRGB())
##        if i.getId() == 9:
##            print str(i.getX()), ',', str(i.getY())
        cv2.putText(frame, str(i.getId()),(i.getX(),i.getY()),font,0.3,i.getRGB(),1,cv2.LINE_AA)
        
    #################
    #    IMAGES     #
    #################
    str_up = 'UP: '+ str(cnt_up)
    str_down = 'DOWN: '+ str(cnt_down)
    frame = cv2.polylines(frame,[pts_L1],False,line_down_color,thickness=2)
    frame = cv2.polylines(frame,[pts_L2],False,line_up_color,thickness=2)
    frame = cv2.polylines(frame,[pts_L3],False,(255,255,255),thickness=1)
    frame = cv2.polylines(frame,[pts_L4],False,(255,255,255),thickness=1)
    cv2.putText(frame, str_up ,(10,40),font,0.5,(255,255,255),2,cv2.LINE_AA)
    cv2.putText(frame, str_up ,(10,40),font,0.5,(0,0,255),1,cv2.LINE_AA)
    cv2.putText(frame, str_down ,(10,90),font,0.5,(255,255,255),2,cv2.LINE_AA)
    cv2.putText(frame, str_down ,(10,90),font,0.5,(255,0,0),1,cv2.LINE_AA)

    cv2.imshow('Frame',frame)
    #cv2.imshow('Mask',mask)    
    
    #press ESC to exit
    k = cv2.waitKey(30) & 0xff
    if k == 27:
        break
#END while(cap.isOpened())
    
#################
#   CLEANUP     #
#################
cap.release()
cv2.destroyAllWindows()


(You’ll need the Person.py file, located in another part of this tutorial, to run the code.)

As you may see, the code is not very precise when counting. You can always add more lines to make it more exact, at the cost of more processing, such as here:

Please let me know if you have any suggestions on how to count more efficiently.

I hope you’ve enjoyed this tutorial, even if it took me this long to finish. I’ll be very glad to know if you use this code, or your own, in some application. 🙂

I’ll try to answer any further questions through the comment section.

People counter 8 – Following movement

Here starts the tricky part 🙂

You already know when there’s a person in the image; now you want to know in what direction they’re moving (up/down).

In the first frame where you detect someone, you need to give that person an ID and store their initial position in the image.

Then, on the following frames, you want to keep track of that person: you need to match the person’s contour in those frames to the ID you assigned when they first appeared, while continuing to store that person’s coordinates.

Then, after the person crosses a limit (or a certain number of limits) in the image, you want to evaluate, using all of the stored positions, whether he/she is moving up or down.

To handle all of this IDing and storing of coordinates I created a class called Person. It might not be optimized, but you can take a look at it here; a minimal skeleton is sketched below.
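To give you an idea before you click through, here is a minimal skeleton of what the class needs to provide for the code below. The method names are taken from how the class is used in these posts; the real Person.py has more to it (the direction checks from the counting chapter, for example), so treat this only as a sketch:

import random

class MyPerson:
    def __init__(self, i, xi, yi, max_age):
        self.i = i                      #ID
        self.x = xi
        self.y = yi
        self.tracks = []                #history of positions
        self.R = random.randint(0,255)  #a random color to draw this person with
        self.G = random.randint(0,255)
        self.B = random.randint(0,255)
        self.age = 0
        self.max_age = max_age          #frames without an update before timing out
    def getRGB(self):
        return (self.R,self.G,self.B)
    def getId(self):
        return self.i
    def getX(self):
        return self.x
    def getY(self):
        return self.y
    def getTracks(self):
        return self.tracks
    def updateCoords(self, xn, yn):
        self.age = 0                    #reset age on every update
        self.tracks.append([self.x,self.y])
        self.x = xn
        self.y = yn
    def age_one(self):
        self.age += 1
    def timedOut(self):
        return self.age > self.max_age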

Here’s some code you should try:

import numpy as np
import cv2
import Person
import time

# http://docs.opencv.org/master/d3/dc0/group__imgproc__shape.html#ga17ed9f5d79ae97bd4c7cf18403e1689a&gsc.tab=0
##http://docs.opencv.org/master/d4/d73/tutorial_py_contours_begin.html#gsc.tab=0

cap = cv2.VideoCapture('peopleCounter.avi') #Open video file
fgbg = cv2.createBackgroundSubtractorMOG2(detectShadows = True) #Create the background subtractor
kernelOp = np.ones((3,3),np.uint8)
kernelCl = np.ones((11,11),np.uint8)

#Variables
font = cv2.FONT_HERSHEY_SIMPLEX
persons = []
max_p_age = 5
pid = 1
areaTH = 500

while(cap.isOpened()):
    ret, frame = cap.read() #read a frame
    
    fgmask = fgbg.apply(frame) #Use the subtractor
    try:
        ret,imBin= cv2.threshold(fgmask,200,255,cv2.THRESH_BINARY)
        #Opening (erode->dilate) to remove noise.
        mask = cv2.morphologyEx(imBin, cv2.MORPH_OPEN, kernelOp)
        #Closing (dilate->erode) to join white regions.
        mask =  cv2.morphologyEx(mask , cv2.MORPH_CLOSE, kernelCl)
    except:
        #if there are no more frames to show...
        print('EOF')
        break

    _, contours0, hierarchy = cv2.findContours(mask,cv2.RETR_EXTERNAL,cv2.CHAIN_APPROX_NONE)
    for cnt in contours0:
        cv2.drawContours(frame, cnt, -1, (0,255,0), 3, 8)
        area = cv2.contourArea(cnt)
        if area > areaTH:
            #################
            #   TRACKING    #
            #################            
            M = cv2.moments(cnt)
            cx = int(M['m10']/M['m00'])
            cy = int(M['m01']/M['m00'])
            x,y,w,h = cv2.boundingRect(cnt)
            
            new = True
            for i in persons:
                if abs(x-i.getX()) <= w and abs(y-i.getY()) <= h:
                    # the object is close to one that was already detected
                    new = False
                    i.updateCoords(cx,cy)   #update the object's coordinates and reset its age
                    break
            if new == True:
                p = Person.MyPerson(pid,cx,cy, max_p_age)
                persons.append(p)
                pid += 1     
            #################
            #   DRAWING     #
            #################
            cv2.circle(frame,(cx,cy), 5, (0,0,255), -1)
            img = cv2.rectangle(frame,(x,y),(x+w,y+h),(0,255,0),2)            
            cv2.drawContours(frame, cnt, -1, (0,255,0), 3)

    #########################
    #   DRAW TRAJECTORIES   #
    #########################
    for i in persons:
        if len(i.getTracks()) >= 2:
            pts = np.array(i.getTracks(), np.int32)
            pts = pts.reshape((-1,1,2))
            frame = cv2.polylines(frame,[pts],False,i.getRGB())
        if i.getId() == 9:
            print str(i.getX()), ',', str(i.getY())
        cv2.putText(frame, str(i.getId()),(i.getX(),i.getY()),font,0.3,i.getRGB(),1,cv2.LINE_AA)
     
    
    cv2.imshow('Frame',frame)
    
    #Abort and exit with 'Q' or ESC
    k = cv2.waitKey(30) & 0xff
    if k == 27:
        break

cap.release() #release video file
cv2.destroyAllWindows() #close all openCV windows


The important part is here:

for i in persons:
    if abs(x-i.getX()) <= w and abs(y-i.getY()) <= h:
        # the object is close to one that was already detected
        new = False
        i.updateCoords(cx,cy)   #update the object's coordinates and reset its age
        break
if new == True:
    p = Person.MyPerson(pid,cx,cy, max_p_age)
    persons.append(p)
    pid += 1


Here we look for a detected contour’s coordinates and try to match them to a previously detected person. If no person is matched then we create a new one.

trayectories

People counter 7 – Defining a person

Now comes the interesting part: how do we classify a contour as a person?

A simple, but effective, step could be defining a minimum area the contour must have:

areaTH = 500 #some number; tune it for your video

_, contours0, hierarchy = cv2.findContours(mask,cv2.RETR_EXTERNAL,cv2.CHAIN_APPROX_SIMPLE)
for cnt in contours0:
    area = cv2.contourArea(cnt)
    if area > areaTH:
        #################
        #   TRACKING    #
        #################


Define a minimum area, find contours, for each contour get the area and if it’s more than the threshold do something.

A threshold value is not universal, meaning that it depends on your video stream; you need to test different values until it works with your video.

For example, setting a low threshold will get you things like this:

badth

While setting it too high will get you:
badth2

That’s all there is to it.
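By the way, one way to avoid guessing blindly, used in the final chapter of this series, is to derive the threshold from the frame size, so that a person has to cover some fraction of the image:

w = cap.get(3)          #frame width
h = cap.get(4)          #frame height
frameArea = h*w
areaTH = frameArea/250  #a person must cover at least ~0.4% of the frame
print 'Area Threshold', areaTH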

Here’s the code:

import numpy as np
import cv2

cap = cv2.VideoCapture('peopleCounter.avi') #Open video file
fgbg = cv2.createBackgroundSubtractorMOG2(detectShadows = True) #Create the background subtractor
kernelOp = np.ones((3,3),np.uint8)
kernelCl = np.ones((11,11),np.uint8)
areaTH = 500

while(cap.isOpened()):
    ret, frame = cap.read() #read a frame
    
    fgmask = fgbg.apply(frame) #Use the subtractor
    try:
        ret,imBin= cv2.threshold(fgmask,200,255,cv2.THRESH_BINARY)
        #Opening (erode->dilate) to remove noise.
        mask = cv2.morphologyEx(imBin, cv2.MORPH_OPEN, kernelOp)
        #Closing (dilate->erode) to join white regions.
        mask =  cv2.morphologyEx(mask , cv2.MORPH_CLOSE, kernelCl)
    except:
        #if there are no more frames to show...
        print('EOF')
        break

    _, contours0, hierarchy = cv2.findContours(mask,cv2.RETR_EXTERNAL,cv2.CHAIN_APPROX_NONE)
    for cnt in contours0:
        cv2.drawContours(frame, cnt, -1, (0,255,0), 3, 8)
        area = cv2.contourArea(cnt)
        print area
        if area > areaTH:
            #################
            #   TRACKING    #
            #################            
            M = cv2.moments(cnt)
            cx = int(M['m10']/M['m00'])
            cy = int(M['m01']/M['m00'])
            x,y,w,h = cv2.boundingRect(cnt)
            cv2.circle(frame,(cx,cy), 5, (0,0,255), -1)            
            img = cv2.rectangle(frame,(x,y),(x+w,y+h),(0,255,0),2)
        
    cv2.imshow('Frame',frame)
    
    #Abort and exit with 'Q' or ESC
    k = cv2.waitKey(30) & 0xff
    if k == 27:
        break

cap.release() #release video file
cv2.destroyAllWindows() #close all openCV windows

People counter 6 – Find contours

Again, sorry for the delay between posts, I’ve been busy with other projects :S

So far, we have filtered the video stream so we only get movement:

peoplecount filter1

Now it’s time to detect contours on the frames. This is really simple with OpenCV’s findContours function.

Here’s the code:

import numpy as np
import cv2

# http://docs.opencv.org/master/d3/dc0/group__imgproc__shape.html#ga17ed9f5d79ae97bd4c7cf18403e1689a&gsc.tab=0
##http://docs.opencv.org/master/d4/d73/tutorial_py_contours_begin.html#gsc.tab=0

cap = cv2.VideoCapture('peopleCounter.avi') #Open video file
fgbg = cv2.createBackgroundSubtractorMOG2(detectShadows = True) #Create the background subtractor
kernelOp = np.ones((3,3),np.uint8)
kernelCl = np.ones((11,11),np.uint8)

while(cap.isOpened()):
    ret, frame = cap.read() #read a frame

    fgmask = fgbg.apply(frame) #Use the subtractor
    try:
        ret,imBin = cv2.threshold(fgmask,200,255,cv2.THRESH_BINARY)
        #Opening (erode->dilate) to remove noise.
        mask = cv2.morphologyEx(imBin, cv2.MORPH_OPEN, kernelOp)
        #Closing (dilate->erode) to join white regions.
        mask = cv2.morphologyEx(mask , cv2.MORPH_CLOSE, kernelCl)
    except:
        #if there are no more frames to show...
        print('EOF')
        break

    _, contours0, hierarchy = cv2.findContours(mask,cv2.RETR_EXTERNAL,cv2.CHAIN_APPROX_NONE)
    for cnt in contours0:
        cv2.drawContours(frame, cnt, -1, (0,255,0), 3, 8)

    cv2.imshow('Frame',frame)

    #Abort and exit with 'Q' or ESC
    k = cv2.waitKey(30) & 0xff
    if k == 27:
        break

cap.release() #release video file
cv2.destroyAllWindows() #close all openCV windows


The important lines here are these:

    _, contours0, hierarchy = cv2.findContours(mask,cv2.RETR_EXTERNAL,cv2.CHAIN_APPROX_NONE)
    for cnt in contours0:
        cv2.drawContours(frame, cnt, -1, (0,255,0), 3, 8)


We give the function our mask; cv2.RETR_EXTERNAL means we only care about external contours (contours within contours will not be detected), and cv2.CHAIN_APPROX_NONE is the approximation method used to “make” the contour, storing every point along it (you can change it to another one, such as cv2.CHAIN_APPROX_SIMPLE, as a test).
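If you’re curious about the difference between the approximation methods, a quick experiment (my own addition, not needed for the counter) is to count how many points each one stores for the same mask:

#findContours modifies its input in OpenCV 3.x, hence the copies
_, c_none, _ = cv2.findContours(mask.copy(),cv2.RETR_EXTERNAL,cv2.CHAIN_APPROX_NONE)
_, c_simple, _ = cv2.findContours(mask.copy(),cv2.RETR_EXTERNAL,cv2.CHAIN_APPROX_SIMPLE)
if c_none and c_simple:
    print 'CHAIN_APPROX_NONE points:  ', len(c_none[0])
    print 'CHAIN_APPROX_SIMPLE points:', len(c_simple[0])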

drawContours is only used so we can visually appreciate the contours on the image we display.

All of the other lines of the code have been previously explained in the series.

contours

People Counter 5 – Morphological Transformations

First of all, sorry for taking so long with this post; I’ve been really busy with work and couldn’t find the time to write it.

Let’s start! This fifth part of the tutorial will cover basic morphological transformations.

I recommend reading this and this first to get an overview of what a transformation is and what it might be used for (you’ll only need to read about Erosion and Dilation, and Opening and Closing).

Essentially, we will use erosion and dilation on binarized (black and white) images. In a very general manner, erosion expands a black portion of the image into a white portion. Dilation, on the other hand, expands a white portion of the image into a black portion.

To do these operations you also need to specify a kernel or structuring element (strel). This is an n*n matrix that slides over the image and defines the neighborhood used when calculating the value of each pixel.

Let’s try them on this image:

noise

Use this code:

import cv2
import numpy as np

img = cv2.imread("noise.png")
ret,thresh1 = cv2.threshold(img,127,255,cv2.THRESH_BINARY) #binarize first

kernel = np.ones((3,3),np.uint8)

erosion = cv2.erode(thresh1,kernel,iterations = 1)

dilation = cv2.dilate(thresh1,kernel,iterations = 1)

cv2.imwrite("erode.png",erosion)
cv2.imwrite("dilate.png",dilation)


We’re using the threshold method to binarize the image, converting it from color to only black and white, two values (not the same as grayscale). Look at how it works here.

Run it and look at the output images. Also, try changing the kernel’s size (5,5 or 9,9, for example) and see what happens.
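If you want to compare several sizes at once, something like this works (my own variation on the block above; it reuses thresh1 from that code):

#Try a few kernel sizes and save each result for comparison
for n in (3, 5, 9):
    kernel = np.ones((n,n),np.uint8)
    cv2.imwrite("erode_%dx%d.png" % (n,n), cv2.erode(thresh1,kernel,iterations = 1))
    cv2.imwrite("dilate_%dx%d.png" % (n,n), cv2.dilate(thresh1,kernel,iterations = 1))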

You can also combine both operations, running one after the other.

Doing an erosion and then a dilation is called opening.

On the other hand, doing a dilation and then an erosion is called closing.

Let’s try them on this image:

letters

Use the following code:

import cv2
import numpy as np

img = cv2.imread("letters.jpg")
ret,thresh1 = cv2.threshold(img,200,255,cv2.THRESH_BINARY)

kernel = np.ones((5,5),np.uint8)

opening = cv2.morphologyEx(thresh1, cv2.MORPH_OPEN, kernel)
closing = cv2.morphologyEx(thresh1, cv2.MORPH_CLOSE, kernel)

cv2.imwrite("letters_closing.png",closing)
cv2.imwrite("letters_opening.png",opening)


Look at the output and try to understand what happened during closing and opening.

Now try incorporating this code into our people counter, right after the background subtraction, to remove the shadows (gray pixels) and clean up the video stream (taking out any noise), going from this:

peoplecount filter2

To this:

peoplecount filter3
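If you want a head start, the incorporation boils down to something like the following, right after fgbg.apply() in the counter’s loop (these same lines show up in the next chapters, so this is only a preview):

kernelOp = np.ones((3,3),np.uint8)   #small kernel for opening
kernelCl = np.ones((11,11),np.uint8) #bigger kernel for closing

fgmask = fgbg.apply(frame)
#threshold at 200 so the gray shadow pixels fall to black
ret,imBin = cv2.threshold(fgmask,200,255,cv2.THRESH_BINARY)
#Opening (erode->dilate) to remove noise
mask = cv2.morphologyEx(imBin, cv2.MORPH_OPEN, kernelOp)
#Closing (dilate->erode) to join white regions
mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernelCl)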

It’s worth trying combinations of the opening and closing operations, even with different kernel sizes. Experiment with them!

You can also use these operations on color images to get cool results, although that’s not useful for the counter.

(Click on the images to view them full size)

Original Opening (9,9) Closing (9,9)
fordGT fordGT_opening fordGT_closing

That’s it for now. If you have any questions leave them in the comments.

Next, we’ll look at finding contours in the clean image to later detect them as moving people.

People Counter 4 – Background Subtraction

This part of the tutorial is also very simple to do, thanks to OpenCV.

A background subtractor, as its name suggests, lets you identify the foreground and background of an image. The background is considered to be anything constant in a series of images, anything that stays static. The foreground is everything that changes (moves).

Doing background subtraction in OpenCV only requires two lines:

import numpy as np
import cv2

cap = cv2.VideoCapture('peopleCounter.avi') #Open video file

fgbg = cv2.createBackgroundSubtractorMOG2(detectShadows = True) #Create the background subtractor

while(cap.isOpened()):
    ret, frame = cap.read() #read a frame
    
    fgmask = fgbg.apply(frame) #Use the subtractor
    
    try:        
        cv2.imshow('Frame',frame)
        cv2.imshow('Background Subtraction',fgmask)
    except:
        #if there are no more frames to show...
        print('EOF')
        break
    
    #Abort and exit with 'Q' or ESC
    k = cv2.waitKey(30) & 0xff
    if k == 27:
        break

cap.release() #release video file
cv2.destroyAllWindows() #close all openCV windows


Running this code:

bsubs

In the new image, black represents the background, white represents objects in the foreground, and gray represents shadows cast by those objects.

The good thing about using the MOG2 subtractor in OpenCV is that the background is constantly being recalculated, meaning that subtle changes in lighting (such as those caused by the Sun) won’t affect your calculations over time.

This is really the first step in making a people counter. Hope you like it.

Next, we’ll clean the image produced by the subtractor to be able to use it in the actual counting.

People Counter 3 – Drawing in the video window

This part of the tutorial is going to be very simple. We’ll just be drawing a simple interface in the video window, to display some information.

Let’s start by using this code from the last chapter of this tutorial with some minor modifications:

import numpy as np
import cv2

cap = cv2.VideoCapture('peopleCounter.avi') #Open video file

w = cap.get(3) #get width
h = cap.get(4) #get height

mx = int(w/2)
my = int(h/2)

count = 0

while(cap.isOpened()):
    ret, frame = cap.read() #read a frame
    try:
        count = count + 1
        text = "Hello World " + str(count)
        cv2.putText(frame, text ,(mx,my),cv2.FONT_HERSHEY_SIMPLEX
                    ,1,(255,255,255),1,cv2.LINE_AA)
        cv2.imshow('Frame',frame)
    except:
        #if there are no more frames to show...
        print('EOF')
        break

    #Abort and exit with 'Q' or ESC
    k = cv2.waitKey(30) & 0xff
    if k == 27:
        break

cap.release() #release video file
cv2.destroyAllWindows() #close all openCV windows


First we use the cap.get() method to calculate the middle coordinates of our video (width/2, height/2).

Then, before we call imshow() we use cv2.putText(). As the name suggests, this method writes text on the video frame. Usage is: cv2.putText(img, text, org, fontFace, fontScale, color), where org is the origin (bottom-left corner) of the text to write.

If you run the code, you’ll see this:

putText
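By the way, a trick used in the final chapter of this series: draw the same text twice, first thicker in white, then thinner in color, to get an outlined label that stays readable over any background:

font = cv2.FONT_HERSHEY_SIMPLEX
text = 'UP: 0' #whatever you want to display
cv2.putText(frame, text ,(10,40),font,0.5,(255,255,255),2,cv2.LINE_AA) #thick white pass first
cv2.putText(frame, text ,(10,40),font,0.5,(0,0,255),1,cv2.LINE_AA)     #thin red pass on top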

We can also draw lines, circles, etc. on the video frame; OpenCV has many methods to draw geometric shapes.

Let’s draw some lines:

import numpy as np
import cv2

cap = cv2.VideoCapture('peopleCounter.avi') #Open video file

while(cap.isOpened()):
    ret, frame = cap.read() #read a frame
    try:        
        cv2.imshow('Frame',frame)
        frame2 = frame
    except:
        #if there are no more frames to show...
        print('EOF')
        break

    line1 = np.array([[100,100],[300,100],[350,200]], np.int32).reshape((-1,1,2))
    line2 = np.array([[400,50],[450,300]], np.int32).reshape((-1,1,2))

    frame2 = cv2.polylines(frame2,[line1],False,(255,0,0),thickness=2)
    frame2 = cv2.polylines(frame2,[line2],False,(0,0,255),thickness=1)
    
    cv2.imshow('Frame 2',frame2)
    
    #Abort and exit with 'Q' or ESC
    k = cv2.waitKey(30) & 0xff
    if k == 27:
        break

cap.release() #release video file
cv2.destroyAllWindows() #close all openCV windows


This time, we’re working outside the try block and using two video windows, one to display the raw video and another one to show the modified one with lines.

In order for polylines to work, it needs to receive a numpy array with the coordinate pairs (x and y) for each point in the line. If you want to specify the points like I did, you also need to call reshape((-1,1,2)) for it to work with polylines().

If you run this code you’ll see:

lines
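If you’re wondering what that reshape actually does, here is a quick check (my own aside, using line2 from the code above):

import numpy as np

line2 = np.array([[400,50],[450,300]], np.int32)
print line2.shape              # (2, 2): two points, (x, y) each
line2 = line2.reshape((-1,1,2))
print line2.shape              # (2, 1, 2): the layout polylines() expects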

Here’s OpenCV’s documentation for drawing functions.

This is the end of this chapter. In the next one we’ll use background subtraction to actually start with the counter.

People Counter 2 – Opening a video stream

In the second part of the tutorial we’ll cover how to open a video stream in OpenCV.

A video stream may be a video recording in a file or video from a webcam. I’ll show you how to work with both.

First, let’s work with a video file. Download this sample video. We’ll be using it throughout the tutorials.

Create a new Python script in the same location as the video and add the following code:

import numpy as np
import cv2

cap = cv2.VideoCapture('peopleCounter.avi') #Open video file

while(cap.isOpened()):
    ret, frame = cap.read() #read a frame
    try:
        cv2.imshow('Frame',frame)
    except:
        #if there are no more frames to show...
        print('EOF')
        break

    #Abort and exit with 'Q' or ESC
    k = cv2.waitKey(30) & 0xff
    if k == 27:
        break

cap.release() #release video file
cv2.destroyAllWindows() #close all openCV windows

As you can see, we begin by importing numpy and OpenCV into our script.

Then we open the video file with the VideoCapture object, giving it the location of the video file as a parameter.

Then we read frames from the video and show them, one by one, until we reach the end of it. At that point, we exit the while loop, close the video file, and close the video window.
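By the way, a slightly more explicit way to detect the end of the stream is to check the boolean that read() returns, instead of letting imshow() fail. This is just an alternative, not what the rest of the series does:

while(cap.isOpened()):
    ret, frame = cap.read()
    if not ret: #read() returns False when there are no frames left
        print('EOF')
        break
    cv2.imshow('Frame',frame)

    k = cv2.waitKey(30) & 0xff
    if k == 27:
        break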

Using a webcam is very similar:

import numpy as np
import cv2

cap = cv2.VideoCapture(0)
    
while(cap.isOpened()):
    ret, frame = cap.read()
    try:
        cv2.imshow('Frame',frame)
    except:
        print('EOF')
        break

    k = cv2.waitKey(30) & 0xff
    if k == 27:
        break

cap.release()
cv2.destroyAllWindows()

The only difference is how we create the VideoCapture object. This time we pass it 0 as a parameter. This indicates that we want to use the webcam with ID 0. If you have multiple webcams on your computer, such as a USB webcam and another one embedded in your screen, you’ll need to pass 0 or 1, depending on which one you want to use.
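If you’re not sure which ID is which, a small probe loop helps (my own helper, not part of the original code):

import cv2

#Try the first few camera IDs and report which ones open
for idx in range(3):
    cap = cv2.VideoCapture(idx)
    if cap.isOpened():
        print 'Camera', idx, 'is available'
    cap.release()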

A VideoCapture object has several properties that you can access and sometimes change:

  • CAP_PROP_POS_MSEC Current position of the video file in milliseconds or video capture timestamp.
  • CAP_PROP_POS_FRAMES 0-based index of the frame to be decoded/captured next.
  • CAP_PROP_POS_AVI_RATIO Relative position of the video file: 0 – start of the film, 1 – end of the film.
  • CAP_PROP_FRAME_WIDTH Width of the frames in the video stream.
  • CAP_PROP_FRAME_HEIGHT Height of the frames in the video stream.
  • CAP_PROP_FPS Frame rate.
  • CAP_PROP_FOURCC 4-character code of codec.
  • CAP_PROP_FRAME_COUNT Number of frames in the video file.
  • CAP_PROP_FORMAT Format of the Mat objects returned by retrieve() .
  • CAP_PROP_MODE Backend-specific value indicating the current capture mode.
  • CAP_PROP_BRIGHTNESS Brightness of the image (only for cameras).
  • CAP_PROP_CONTRAST Contrast of the image (only for cameras).
  • CAP_PROP_SATURATION Saturation of the image (only for cameras).
  • CAP_PROP_HUE Hue of the image (only for cameras).
  • CAP_PROP_GAIN Gain of the image (only for cameras).
  • CAP_PROP_EXPOSURE Exposure (only for cameras).
  • CAP_PROP_CONVERT_RGB Boolean flags indicating whether images should be converted to RGB.
  • CAP_PROP_WHITE_BALANCE Currently not supported
  • CAP_PROP_RECTIFICATION Rectification flag for stereo cameras (note: only supported by DC1394 v 2.x backend currently)

You can use the get method to read them and the set method to give them a new value.

Here’s an example that shows all of the property values and changes the width and height of the video stream from the webcam:

import numpy as np
import cv2

cap = cv2.VideoCapture(0)

#show all video properties
for i in range(19):
    print i, cap.get(i)

cap.set(3,160) #set width
cap.set(4,120) #set height
    
while(cap.isOpened()):
    ret, frame = cap.read()
    try:
        cv2.imshow('Frame',frame)
    except:
        print('EOF')
        break

    k = cv2.waitKey(30) & 0xff
    if k == 27:
        break

cap.release()
cv2.destroyAllWindows()


This is all for this tutorial. Next time we’ll work on the video stream, drawing a GUI on it.

Please leave any comments or questions you have. 🙂


PS: If you have trouble opening the AVI file in OpenCV, try updating your video codecs. Downloading the DIVX codec helped me when I had this issue.