Set up Amazon Web Services – Part 2

Home Run into the Cloud

Article from Issue 197/2017

DIY Python scripts run in container environments on Amazon's Lambda service – this snapshot example deploys an AI program for motion analysis in video surveillance recordings.

After the initial steps in a previous article [1] – setting up an AWS account, an S3 bucket serving a static website, and a first Lambda function – I'll now show you how to set up an API server on Amazon that tracks down interesting scenes in videos from a surveillance camera.

The Lambda function, triggered either by a web request from the browser or by a command-line tool like curl, retrieves a video from the web, runs it through an artificial intelligence (AI) algorithm implemented with the OpenCV library, generates a motion profile, and returns the URL of a contact sheet, generated as a JPEG, with all the interesting movements from the recording (Figures 1 and 2).

Figure 1: The AI program for motion analysis runs on Amazon servers behind a REST API.
Figure 2: The contact sheet produced on AWS displays the seconds in the surveillance video during which something actually moved.

Sandbox Games

Unlike Amazon's EC2 instances with their full-blooded (albeit virtual) Linux servers, the Lambda service [2] provides only a containerized environment. Inside a container, Node.js, Python, or Java programs run in a sandbox, which Amazon pushes around at will between physical servers, even going as far as putting the container to sleep during periods of inactivity – only to conjure it up again on the next access. Leaving data on the container's virtual disk and hoping to find it there next time would thus make for an unstable application. Instead, Lambda functions are "stateless": To keep data safe, they hand it off to AWS offerings such as S3 storage or the DynamoDB database.

Anything an application cannot express in a Python script – precompiled binaries or shared libraries, for example – developers can upload to the (as rumor has it) CentOS-based containers as ZIP files (Figure 3).

Figure 3: Uploading code in a ZIP file to the Lambda server via an Amazon S3 bucket.
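The packaging step itself can be scripted; below is a minimal sketch using Python's standard zipfile module (the file names are hypothetical, not from the article):

```python
import zipfile

def package(zip_path, files):
    """Bundle the handler script and any precompiled binaries
    into a compressed deployment ZIP for the Lambda function."""
    with zipfile.ZipFile(zip_path, "w", zipfile.ZIP_DEFLATED) as z:
        for name in files:
            z.write(name)

# e.g. package("lambda.zip", ["handler.py", "bin/motion-detect"])
```

The resulting archive is what lands in the S3 bucket from which Lambda pulls the function code.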

A Lambda function that, like this example, uses artificial intelligence capabilities from the OpenCV library therefore requires the developer to compile the needed binaries or libraries up front in a Unix environment similar to the Lambda container, package and upload the result, and call it from the Python script at run time. The script either uses existing Python bindings to the shared libraries or launches precompiled binaries as external processes.
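The first route – calling into a shared library from Python – can be sketched with the standard ctypes module. The system's libm serves here merely as a stand-in for an OpenCV-style .so bundled with the deployment ZIP:

```python
import ctypes
import ctypes.util

# Load a shared library at run time; libm is just a stand-in for a
# library that would be shipped inside the Lambda deployment ZIP.
libm = ctypes.CDLL(ctypes.util.find_library("m"))

# Declare the C signature so ctypes converts arguments correctly
libm.cos.restype = ctypes.c_double
libm.cos.argtypes = [ctypes.c_double]

print(libm.cos(0.0))  # → 1.0
```

The second route is even simpler: subprocess.check_output() launches the precompiled binary and captures whatever it prints, as Listing 2 demonstrates later.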

Lean and Mean

To prevent the AI program [3] from using too much compute time after installation in the Amazon cloud – and thus also using up money after exceeding the "free tier" quota – the improved code [4] (Listing 1 updates the version from the previous article) no longer looks for movement in every frame (i.e., 50 times a second) but hops through the movie in increments of half a second in line 99. After a frame with detected motion, line 96 even skips ahead by two seconds. To accomplish this, vid.grab(), called in line 50, no longer painstakingly decodes each frame in a complex process, as the previous version did, but discards it to retrieve the next one.
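To get a feel for the savings, the following back-of-the-envelope simulation (my sketch, not from the article) mimics the skip logic: decode one frame, then discard fps/2 frames without decoding – or 2*fps frames right after detected motion:

```python
def frames_decoded(total_frames, fps, motion_seconds=()):
    """Count how many frames the skip logic actually decodes."""
    i = decoded = 0
    motion = set(motion_seconds)
    while i < total_frames:
        decoded += 1                  # vid.read() decodes this frame
        skip = 2 * fps if i // fps in motion else fps // 2
        i += skip + 1                 # vid.grab() discards the rest
    return decoded

# A one-minute clip at 50 fps holds 3,000 frames, but only a
# fraction of them ever gets decoded:
print(frames_decoded(3000, 50))  # → 116
```

Instead of 3,000 full decodes, the loop gets away with 116 – roughly a 25-fold reduction in decoding work.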

Listing 1


001 #include "opencv2/opencv.hpp"
002 #include <stdio.h>
003 #include <iostream>
004 using namespace std;
005 using namespace cv;
007 const int MAX_FEATURES = 500;
008 const int MAX_MOVEMENT = 100;
010 int move_test(Mat& oframe, Mat& frame) {
011     // Select features for optical flow
012   vector<Point2f> ofeatures;
013   goodFeaturesToTrack(oframe,
014     ofeatures, MAX_FEATURES, 0.1, 0.2 );
016     // Parameters for LK
017   vector<Point2f> new_features;
018   vector<uchar> status;
019   vector<float> err;
020   TermCriteria criteria(TermCriteria::COUNT
021       | TermCriteria::EPS, 20, 0.03);
022   Size window(10,10);
023   int max_level   = 3;
024   int flags       = 0;
025   double min_eigT = 0.004;
027     // Lucas-Kanade method
028   calcOpticalFlowPyrLK(oframe, frame,
029     ofeatures, new_features, status, err,
030     window, max_level, criteria, flags,
031     min_eigT );
033   double max_move = 0;
034   double movement = 0;
035   for(int i=0; i<ofeatures.size(); i++) {
036     Point pointA
037       (ofeatures[i].x, ofeatures[i].y);
038     Point pointB
039       (new_features[i].x, new_features[i].y);
041     movement = norm(pointA-pointB);
042     if(movement > max_move)
043         max_move = movement;
044   }
045   return max_move > MAX_MOVEMENT;
046 }
048 void frames_skip( VideoCapture &vid, int n, int *i ) {
049     for( int c = 0; c < n; c++ ) {
050       if (!vid.grab())
051         break;
052       (*i)++;
053     }
054 }
056 int main(int argc, char *argv[]) {
057   int i = 0;
058   Mat frame;
059   Mat cframe;
060   Mat oframe;
062   if (argc != 2) {
063     cout << "USAGE: <cmd> <file_in>\n";
064     return -1;
065   }
067   VideoCapture vid(argv[1]);
068   if (!vid.isOpened()) {
069     cout << "Video corrupt\n";
070     return -1;
071   }
073   int fps = (int)vid.get(CV_CAP_PROP_FPS);
075   if (!vid.read(oframe))
076     return 1;
077   i++;
078   cvtColor(oframe, oframe, COLOR_BGR2GRAY);
080   while (1) {
081     if (!vid.read(frame))
082       break;
083     i++;
085     int movie_second = i / fps;
087     cframe = frame.clone();
088     cvtColor(frame,frame,COLOR_BGR2GRAY);
089     if(move_test(oframe, frame)) {
090       cout << movie_second << "\n";
092       char filename[80];
093       sprintf( filename, "%04d.jpg", i/fps );
094       imwrite( filename, cframe );
096       frames_skip( vid, 2*fps, &i );
097     } else {
098         // fast-forward to next 1/2 sec
099       frames_skip( vid, fps/2, &i );
100     }
102     oframe = frame;
103   }
105   return 0;
106 }

Whereas the first version [3] only printed the number of seconds into the video at which the algorithm detected motion, leaving it to MPlayer to extract the frames as JPEG files in a second pass, lines 92 to 94 now use the imwrite() image-processing function included with OpenCV to write detected frames immediately as 000x.jpg to the virtual disk. A second pass and the shenanigans of installing MPlayer in the Lambda container are thus no longer required.

Based on these JPEG images, another Python script then produces a contact sheet, also in .jpg format, with the help of the ImageMagick library. The Lambda program puts this file into Amazon's S3 cloud storage and then sends a link to the file to the calling client.
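The grid layout that such a montage step computes boils down to simple arithmetic; the following sketch (column count and thumbnail size are my assumptions) yields the top-left pixel coordinates for pasting each thumbnail:

```python
def tile_positions(n, cols=4, w=160, h=120):
    """Top-left (x, y) coordinates for pasting n thumbnails into
    a contact sheet, with cols thumbnails per row."""
    return [((k % cols) * w, (k // cols) * h) for k in range(n)]

print(tile_positions(5))
# → [(0, 0), (160, 0), (320, 0), (480, 0), (0, 120)]
```

ImageMagick handles this bookkeeping internally, which is why the article delegates the job to it rather than assembling the sheet by hand.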

RAM Is Money

How does a Python programmer now pick up a document from the web? A first approach would be calling read() on the object returned by urlopen() and then sending all the bytes obtained to a local file with write(). But this would mean a potentially large video file being read completely into memory before Python finally starts writing it to disk.

The ample supply of RAM needed for this costs money on Amazon. To avoid this, the urlretrieve() method from the urllib module used in Listing 2 can buffer smaller data chunks – in a hopefully more or less intelligent way.
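In Python 3, where urlretrieve() lives on as urllib.request.urlretrieve(), the same chunked behavior is easy to spell out by hand; the chunk size below is an arbitrary choice:

```python
import shutil
import urllib.request

def fetch(url, dest, chunk=64 * 1024):
    """Stream a download to disk in fixed-size chunks, so RAM
    usage stays flat no matter how large the video file is."""
    with urllib.request.urlopen(url) as src, open(dest, "wb") as out:
        shutil.copyfileobj(src, out, chunk)
```

Because copyfileobj() reads and writes one chunk at a time, memory consumption stays at the chunk size rather than growing with the file.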

Listing 2

01 #!/usr/bin/python
02 import urllib
03 import tempfile
04 import shutil
05 import subprocess
06 import boto3
07 import os
09 def lambda_handler(event, context):
10     tmpd = tempfile.mkdtemp()
12     # fetch movie
13     movie_url  = event['movie_url']
14     movie_file = os.path.join(tmpd,
15         os.path.basename(movie_url))
16     urllib.urlretrieve(movie_url,movie_file)
18     # motion analysis
19     print subprocess.check_output([
20         "bin/",
21         movie_file])
23     # generate montage
24     print subprocess.check_output([
25         "bin/",tmpd])
27     # store montage in s3
28     s3 = boto3.resource('s3')
29     bucket = ""
30     data = open(os.path.join(
31          tmpd,'montage.jpg')).read()
32     s3.Bucket(bucket).put_object(
33         Key="montage.jpg",
34         Body=data,ContentType="image/jpeg")
36     result = { "montage_url":
37       "" +
38       "/" +
39       "montage.jpg"}
41     shutil.rmtree(tmpd)
42     return result

