Motion - Frequently Asked Questions

Frequently Asked Questions about Motion

How do I get Motion to work with two or more cameras at a time?

If I run with one camera Motion works, but when I add a second only the first one works. There are two common reasons for this.

config files

When you have only one camera you only need one config file: motion.conf. The minute you have more than one, you need one thread config file per camera. So if you have two cameras you need three config files: motion.conf and two thread config files. motion.conf will normally contain all the common or default options. Each thread file contains the options unique to each camera. See the config file section in the Motion Guide.
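A sketch of such a layout for two cameras (the paths and option values below are hypothetical examples, not defaults):

```
# /etc/motion/motion.conf -- common/default options for all cameras
framerate 10
target_dir /var/motion

# one thread line per camera, at the end of motion.conf
thread /etc/motion/thread1.conf
thread /etc/motion/thread2.conf

# /etc/motion/thread1.conf -- options unique to camera 1
videodevice /dev/video0

# /etc/motion/thread2.conf -- options unique to camera 2
videodevice /dev/video1
```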

USB cameras

The situation has improved from when USB1.1 was common. Most even low-end motherboards have multiple USB2/3 controllers, which are entirely independent and can run multiple cameras without concern. USB2 can support multiple cameras at low resolution and framerate. High framerate high resolution cameras sending uncompressed images may still saturate the bus. Though the below hints are less relevant than they were, they may still be applicable for very, very old systems, and systems where you need to share one port with high resolution cameras.

A USB camera uses all the bandwidth a USB1.1 controller can give. Even at low framerates the camera reserves more than half of the 11 Mb/s, which means the 2nd camera gets rejected. Few motherboards of that era have more than one controller; often 2 or 4 physical connectors on a motherboard share one and the same USB controller. To add more cameras you need to add USB adapter cards, one per camera. There exist cards with full bandwidth per USB socket; these present themselves to Linux as, for example, 4 USB controllers and work fine with 4 cameras. Also, many (if not most) cheap PCI USB1/2 cards (in the $10 range) have a controller capable of supporting 2 USB1 cameras and an additional USB2 camera per card. With those cards and USB1 extenders (allowing extension of a USB1 device for up to 100 m, typically 50 m) you can build a capable surveillance setup using only USB cameras.

Increasing available USB bandwidth

If you don't need sound, USB bandwidth can be increased by disabling the USB audio drivers. Doing this frees bandwidth allocated for audio traffic and makes it available for camera video. To do this, issue the command $>lsmod to list loaded modules. From the list, and with the help of Google, identify the audio driver. For Ubuntu 12.04 the audio driver is called snd-usb-audio. A loaded module is unloaded with the command $>modprobe -r snd-usb-audio. To prevent the module being loaded after reboot, edit the file /etc/modprobe.d/blacklist.conf and add the entry blacklist snd-usb-audio. As a result of this change, the USB bandwidth available for video traffic will be increased. I have successfully run 2x Logitech C500 webcams at 1280x1024 resolution, 100% quality, on one generic USB ver 2 PCI card. This was on an old Dell Optiplex PC. -- DazzConway - 02 May 2013
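The steps above can be sketched as a small script. The module name is the Ubuntu 12.04 one from the text (find yours with lsmod); the CONF default below is a harmless stand-in so the sketch can be tried without root, and in real use you would point it at /etc/modprobe.d/blacklist.conf:

```shell
#!/bin/sh
# Unload the USB audio driver and blacklist it so it stays unloaded after reboot.
MODULE=snd-usb-audio
# In real use: CONF=/etc/modprobe.d/blacklist.conf, run as root.
CONF=${CONF:-/tmp/blacklist.conf}
modprobe -r "$MODULE" 2>/dev/null || echo "note: could not unload $MODULE"
# Append the blacklist entry, but only once
grep -qxF "blacklist $MODULE" "$CONF" 2>/dev/null || echo "blacklist $MODULE" >> "$CONF"
```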

Ubuntu uvc quirks

Ubuntu is now installed out of the box with the video driver uvc. Bandwidth can be increased with the application of camera specific quirks.

Details can be found here: -- DazzConway - 05 May 2013

Multiple Instances for Multiple Cameras

If you find that running a single instance of Motion with multiple thread<x>.conf files doesn’t work, there is an alternative. The solution is to run one instance of motion for each camera. This explanation applies to Ubuntu 12.04 with two cameras. The steps are to make a copy of the relevant motion files renamed to motion2.
  1. Copy /etc/motion/motion.conf to /etc/motion2/motion2.conf .
  2. Edit motion2.conf to select the right video device.
  3. Change the displayed camera number. Change the location of the PID file to match the location specified in the init.d startup script ( /var/run/motion/). Change the image filename format and/or directory to something different to the motion.conf settings.
  4. Create an alias (symlink) to the motion program in the /usr/bin/ directory using the command sudo ln -s /usr/bin/motion /usr/bin/motion2 .
  5. Make copies of the symlinks in the /etc/rc<x>.d directories. The copies should be changed to point to the motion2 init.d startup script.
  6. Edit the new init.d startup script to use the motion2 command with an argument specifying the /etc/motion2/motion2.conf file. If the configuration file is not specified, it will default to /etc/motion/motion.conf.
  7. Make a copy of motion called motion2 in /etc/default/ . Ensure that the motion and motion2 files contain the line start_motion_daemon=yes .
  8. Ensure all permissions of the new and edited files are correct.
If all is correct, you should be able to run the commands $>motion and $>motion2 . Each command will start an instance of motion, each instance will have its own motion<x>.conf file, and each instance will connect to the specified camera. After reboot, both instances should be displayed by the command $>sudo ps -A | grep motion. These changes make it look like there are two separate programs (motion and motion2).
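The per-instance differences from steps 2-3 amount to only a few lines; a hypothetical /etc/motion2/motion2.conf might differ from motion.conf in just these entries (option names as in Motion 3.2, values illustrative):

```
videodevice /dev/video1
process_id_file /var/run/motion/motion2.pid
target_dir /var/motion2
```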

At the bottom of the page are two attachments. One is the standard startup script for motion located in the /etc/init.d/ directory. The other attachment is a startup script for a second camera. Both files are copied from a working Ubuntu 12.04 system. There are subtle differences between the two files that are there for a reason. -- DazzConway - 02 May 2013

How do I use my Network Camera with motion?

Axis Network Camera

netcam_url returns only ONE single image.

For Motion to get a video stream you should point the netcam_url to
  • or
  • or
  • or

ACTi Network Camera

See ACTi

Canon Network Camera

Tested with the VB-C50i range.

Add this to your motion.conf

netcam_url returns only ONE single image.

For Motion to get a video stream you should point the netcam_url to


netcam_url http://<localIP>/MJPEG.cgi?.mjpeg

Safesky IP Network Camera

In your motion.conf (or thread_n.conf) include:

netcam_url rtsp://<localIP>/user=<usr>&password=<pwd>&channel=1&stream=1.sdp
replacing the "< >" fields with your own values.

Network camera with Motion Jpeg (MJPEG) doesn't work

Try the trunk version, which includes:

How can I get a big movie per day, not all the small ones?

There is no built-in feature in motion to do this, but it's still quite easy:

If you are using MPEG1 format you can cat them directly:

cat *.mpg > daily.mpg or

cat newmpeg.mpg >> daily.mpg
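The cat approach above is easily wrapped in a small script to run once a day, e.g. from cron. The SRC/DEST paths below are hypothetical placeholders; point SRC at your motion target_dir:

```shell
#!/bin/sh
# Roll all small MPEG1 clips into one daily file, then delete the originals.
SRC=${SRC:-/tmp/motion}
DEST=${DEST:-/tmp/motion-daily}
mkdir -p "$SRC" "$DEST"
daily="$DEST/daily-$(date +%Y%m%d).mpg"
# MPEG1 streams concatenate cleanly with plain cat;
# other formats need mencoder or avimerge as described below
if ls "$SRC"/*.mpg >/dev/null 2>&1; then
    cat "$SRC"/*.mpg >> "$daily" && rm -f "$SRC"/*.mpg
fi
```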

If you are using another MPEG format then you need a tool to cat them. Vincenzo Di Massa proposed on 23.01.2005 15:14 at the Motion discussion list a command that could be used on command line as well as inside a simple (bash) script:

cat file1 file2 file3 | mencoder - -noidx -o file.avi -ovc copy -oac null


On newer versions of mencoder use the following command:

mencoder file1 file2 file3 -noidx -o file.avi -ovc copy -oac copy

(verified with mencoder from Ubuntu Edgy repository)

mencoder is an application which is part of mplayer

Another useful tool is avimerge.

avimerge -o daily.avi -i *.avi

Even better, the task may be done "on the fly" as proposed by RodolfoLeibner in the DailyVideoMotionScript .

How do I disable or enable saving jpeg files when motion is detected?

The config option output_normal controls this.

How do I disable or enable making mpeg files when motion is detected?

The config option ffmpeg_cap_new controls this. You need ffmpeg libraries installed for this feature. See MpegFilmsFFmpeg.

How do I delete mpeg files older than x days?

A cron job calling the following would do it

/usr/bin/find /path/to/mpegs -mtime +X -and -type f -and -name "*mpg" | xargs /bin/rm -f

or a similar approach

/usr/bin/find /path/to/mpegs -type f -name '*mpg' -mtime +X -exec rm {} \;
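Either find command can then be driven from cron; a hypothetical /etc/crontab entry purging files older than 7 days every night at 03:00 might look like:

```
0 3 * * * root /usr/bin/find /path/to/mpegs -type f -name '*mpg' -mtime +7 -exec rm {} \;
```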

A slightly more in depth alternative:

If you like, you can create a simple bash script and add it to cron to run on a consistent basis. As an example, my script runs just after midnight every day. The script is as follows:

find /media/storage/motion_feeds/ -name '*.avi' -mtime +30 -exec rm {} \;
find /media/storage/motion_feeds/ -name '*.jpg' -mtime +30 -exec rm {} \;
find /media/storage/motion_feeds/ -name '*.mpg' -mtime +180 -exec rm {} \;

As you can see, I have three lines that are identical aside from the file type. Each find command seeks out any files matching the given type (such as avi files) within the directory (AND subdirectories) you specify in the path. Using the first line as an example, any avi files over 30 days old will be removed, as per the -exec rm section. In my case, all of my cameras have their own target_dir within /media/storage/motion_feeds. The command, as seen above, removes these older files across all camera target_dirs at once since they are all within motion_feeds.

As mentioned, this command does search through sub directories, so if you have multiple cameras and you want to save camera A feeds for 15 days, but camera B feeds for 30 days, you'll need to add another line (one for each camera) along with their full path and their specified age, i.e. find /media/storage/motion_feeds/camera-A, etc.

Note: The differences in the file type used in each line of this script (avi, mpg, jpg) come from the different file types in use by motion activated feeds, timelapse feeds, and jpg images.

How do you control the pan and tilt of a Logitech Sphere camera?

I have made a special page that explains how the camera is controlled via the web. See the LogitechSphereControl topic.

Note that there are several versions of the Logitech Sphere and they use different drivers. Older ones use pwc; newer ones use uvcvideo.

I get errors with Logitech Sphere and pwc driver version 9.0.X/10.0.X when I try to pan and tilt.

Motion is distributed with a pwc_ioctl.h file which matches the 8.X version of pwc because this is the one that most distributions have. In the Motion source directory there is a file called pwc-ioctl.h-pwc9.0.1. Copy this to (overwrite) pwc-ioctl.h and build Motion again. That should cure it. The file should also work with 9.0.2. From Motion 3.1.18 there is an additional pwc-ioctl.h-pwc10.0.5 file.

Logitech Quickcam Sphere MP - Pan & Tilt do not work

The Quickcam Sphere MP uses a different driver than the original Sphere cameras. The older uses the pwc driver. The newer uses the uvcvideo driver. See SupportQuestion2008x01x29x112921 for good info on how to get the newer Quickcam Sphere pan and tilt working with Motion.

Is it possible to get the Motion webserver to just serve one jpeg image/frame?

You should check out the related project that Kenneth made called MJPEG-ProxyGrab. The grab part connects to Motion, grabs ONE frame and outputs it on standard out. Used as an nph-cgi for Apache, it means that the client browser simply gets a jpeg. It is all explained in the package.

There is also a proxy program which uses the nph-cgi method to feed an mjpeg stream through Apache. This means that the client Cambozola applet connects to the webserver IP and gets the stream via a connection on port 80. Good for proxies and firewalls.

You find the MJPEG-Proxygrab on this page:

I installed Motion without ffmpeg or mysql support. I have now installed ffmpeg or mysql and re-run configure, make and make install. No errors were reported, but the new feature does not work at all.

When you run make again, the compiler will only rebuild those object files (.o) whose source files (.c and .h) have changed since the last time you ran make. When you enable a new feature with ./configure you also enable some conditionally included code in the source files (all those #ifdef lines in the sources). The compiler cannot know this, so the end result is that the code does not get rebuilt to include the new feature. This is easy to fix: after you re-run ./configure, run the command make clean. This removes all the object files (.o) so that everything gets built again. Then run make and make install.

How do I use USB cameras when USB cables can only be maximum 5 m?

Use a USB extender. They exist in different flavours. The simple ones are like a 1-port hub powered through the USB cable. Each extender allows an additional 5 m of cable and you can put 2-4 in series, giving a total of 15-25 m. Examples and

A new type has been introduced recently called a CAT5 USB Extender. Using standard CAT5/5e/6 network cable, the CAT5 USB Extender allows the distance between a USB device and a host computer to be extended by up to 50m. It consists of a local unit with a USB Type A Male (host computer side) and a remote unit (device side) with a built-in USB Type A Female connector. A low cost network patch cable is required to connect the two units together.

An example supplier (I have no knowledge about this particular supplier):

-- NoahBeach - 11 Feb 2011 Another supplier of USB-over-Ethernet extenders is Monoprice. I have no affiliation with this company other than that I buy a lot of their products. I am using four of these right now with Logitech webcams; they work great and at a great price.

-- ChrisHeap - 22 May 2005 You could try these; although unbranded, they are approx 35 GBP (I have bought 2 and they work very well for me). I use UTP CAT 5 cable over approx 40 metres with a ToUcam Pro to an outbuilding and it ran with motion for 100+ days. The only problem I have had was a nearby lightning strike causing a lockup which necessitated a restart.

If you are large of bravery and small of wallet you can also try a plain USB extension cable such as:

I (WillCooke) am using a combination of these and the USB extender detailed above, with two 3 m cables (so 6 m) between the computer and the extender, and my cameras are working at a length of 11 m using only 1 proper extender. You might also try putting a USB mini-hub at the end of the plain extension cables in place of the USB extender. They serve the same purpose and are often lower cost. This is not supposed to work, but it does for me. Your mileage may vary.

Make sure you use shielded cable. I (JorgenVanDerVelde) once had a 5 m USB connection consisting of two cables, one 4 m and one 1 m. Every once in a while, when I switched on the TL light, my camera software (Motion, of course) crashed. Another cam using 8 m of cable was not influenced at all. The 1 m segment appeared not to be shielded. You can check: take the cable out and measure the resistance (using a multimeter, or a battery and a light bulb) between the two connectors (the metal casings, that is): if there is no connection between the casings, your cable definitely is not shielded.

Many of the cheaper USB-Ethernet-USB extenders are USB1.1 which is only fast enough for 800x600 MPEG images. Higher resolutions require USB2. -- DazzConway - 03 May 2013

mpegs are playing back too fast.

When an mpeg is created, Motion must tell ffmpeg what framerate to use. If Motion then delivers far fewer frames than that, the mpegs run too fast. So we need to help ffmpeg get it right.

If you set 'low_cpu = on' in motion.conf Motion will use the 'framerate' value for the mpeg. So it is important that you set framerate equal to the possible framerate of your camera.

If you set 'low_cpu = off' Motion uses the ACTUAL framerate from the previous finished second. The actual framerate is the lowest number of 'framerate' and the rate the camera can actually deliver.

If you have a slow computer and/or slow disk, Motion may be able to fetch a quite high framerate from the camera during idle. But once it detects Motion and generates jpegs, mpegs, external scripts etc it may not be able to keep up the pace. So it is important not to set the framerate higher than Motion is actually able to keep up with during Motion detection.

The fast mpegs happen like this. Imagine that you run a camera at framerate 10. This goes well during idle, but the actual framerate during motion detection drops to 5.

When motion is detected, the previous second was an idle second, so the framerate of 10 is used for the mpeg creation. In the following seconds Motion feeds at most 5 frames per second to ffmpeg, so in two seconds you add only 10 frames to the mpeg.

When you play back the mpeg the file tells the player that it is coded for 10 frames per second (fps). So the two seconds are played in one second.

On a busy computer the ACTUAL framerate may be unpredictable. Then you can use this trick. Find out the actual average framerate your computer and camera is able to do during Motion detection. Now set both 'low_cpu' and 'framerate' equal to this value. This will give you a more stable playback speed in your mpegs.
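For the example above, where the camera can only sustain 5 fps during motion detection, the corresponding motion.conf lines would be (values illustrative):

```
# set both to the framerate actually sustainable during motion detection
framerate 5
low_cpu 5
```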

Also note that with CCTV cameras you should run max 30 for NTSC and max 25 for PAL. Running at double frequency will give interlace blur of anything that moves and screw up the mpeg speed no matter how fast your computer is. And even 30/25 is a high number.

I hope this explanation helps many in adjusting their Motion options so that Motion generates natural-speed mpegs. In general, people set the framerate value (and their expectations) higher than what the hardware/software can actually deliver during motion detection.

How do I make time based motion settings?

You use a combination of the Motion remote control and the Linux cron daemon which all Linux systems have. Let us take the example that we want to enable motion detection at 9:00 in the morning and turn it off at 18:00. Motion 3.1 must be built with xmlrpc. If you use 3.2 Motion is remote controlled using a browser.

In the file /etc/crontab we add these lines.

motion 3.1:

0 9 * * * root /usr/local/bin/motion-control detection resume 
0 18 * * * root /usr/local/bin/motion-control detection pause 

motion 3.2

From motion 3.2 there is no longer a motion-control program. Instead you use a program that can fetch a webpage. We simply just throw away the html page that Motion returns. Programs commonly available on Linux machines are wget and lwp-request.

0 9 * * * root /usr/bin/lwp-request http://localhost:8080/0/detection/start > /dev/null 
0 18 * * * root /usr/bin/lwp-request http://localhost:8080/0/detection/pause > /dev/null 

Most Motion config options can be changed while Motion is running, except options related to the size of the captured images. So only your imagination sets the limit to what you can change by combining cron and the remote control interface for Motion.
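As a further hypothetical illustration, the same crontab mechanism could raise and lower the detection threshold for day and night. The /0/config/set URL form of the 3.2 http control interface is assumed here; verify it against your Motion version:

```
0 22 * * * root /usr/bin/lwp-request http://localhost:8080/0/config/set?threshold=1000 > /dev/null
0 7 * * * root /usr/bin/lwp-request http://localhost:8080/0/config/set?threshold=3000 > /dev/null
```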

RedHat keeps losing my permissions for /dev/video

This foxed me for a while, but I think I've tracked it down to the default configuration, which grants read permission on the device /dev/video (or /dev/video0 in my case) only to the user logged in on the console. I tried to change this manually and had chmod 660'ed the device, but found that after a reboot I was stuck with no permissions again.

This seems to be down to the PAM module, and in particular /etc/security/console.perms Changing the appropriate line in there to give +rw to the group, and adding your motion user to that group seems to sort things. For example, presuming that the user running motion is a member of video (you could choose anything) change

<console> 0600 <video> 0600 root

to

<console> 0600 <video> 0660 root.video

This may vary if you're not using a Video4Linux device. If you still have trouble, check in the /etc/security/console.perms file for your device and see which group it belongs to, then change the default for that somewhere below "# permission definitions".

(This on Linux 2.6.9-5.0.3.ELsmp #1 SMP Mon Feb 14 10:03:50 EST 2005 i686 i686 i386 GNU/Linux)

How do I break an MPEG movie into individual images or JPEG files?

One way is to use transcode.

transcode -i <movie-file> -y jpg -F 75 -o <output-filename-prefix>

Keep in mind the "-F 75" is a parameter to the "-y jpg" output plugin: it sets the jpeg compression level. For instance, let's say you have the movie "frontdoor-200505181231.mpg" and you'd like to break it into jpegs with the name "cominghome-XXXXXX.jpg"; you'd use the following.

transcode -i frontdoor-200505181231.mpg -y jpg -F 75 -o cominghome-

If you are having trouble with the transcode import portion it may have to do with the default using ffmpeg to import. The following works in this case, by using the mplayer import plugin instead.

transcode -x mplayer -i <movie-file> -y jpg -F 75 -o <output-filename-prefix>

Another option is to use mplayer's jpeg video output.

In my case I just wanted the first nine frames, so that I could email myself a sample of the video at work from a post-processing script.

mplayer sourcefile.avi -vo jpeg -ao null -frames 9

Also, if capturing at full resolution and you have interlace tearing, you can grab just one field for the jpg frame. Of course there's a million other things you can do with mplayer or transcode which are beyond the scope of this FAQ question.

mplayer sourcefile.avi -vo jpeg -ao null -vf field,scale=352:288 -frames 9 

I got this message when i tried to build motion : " checking size of short int... configure: error: cannot compute sizeof (short int), 77" .

The problem is that some libraries are missing, so you should look at config.log or the output of configure and install the missing library/ies. Note that you have to use a recent version of autoconf (>= 2.59) and gcc (>= 3.x). To find the missing shared library, search for the word "error" in config.log; you will find a line like:

error while loading shared libraries: cannot open shared object file: No such file or directory

Motion detection is working. But when you view the mpeg recording there seems to be several seconds missing, it skips many frames and jumps ahead several seconds.

The reason for this is often that the pre_capture option is used and set way too high. This feature was designed for pre-capturing 2-5 frames. It does not work well with values like 20 or 50. Motion processes all the pre-captured images after the first motion detection and it cannot capture frames while this is going on. Building an mpeg of maybe 50 frames takes so long that Motion misses several frames. Keep pre_capture low! The option post_capture does not have this problem and you can use quite large values for post_capture.

Motion crashes with segmentation fault the first time it detects Motion

This is nearly always an issue with a mismatch between the ffmpeg version and Motion. If you download and install a Motion RPM or deb, it is essential that the version of ffmpeg you have installed is compatible with the one that was used when the RPM or deb was built. It is also essential that an ffmpeg rpm and the corresponding ffmpeg-devel rpm are the exact same version (date). For Motion 3.2.3 there are two RPMs (-1 and -2) that each correspond to a specific ffmpeg rpm. The safest way to ensure that the versions match is to build Motion from sources. Then Motion will be built to match whatever version of ffmpeg you have installed, as long as it is from August 2005 or earlier. Later versions may or may not work. The ffmpeg team keeps changing the API for their libs with no warning or documentation.

How do I create mpegs from jpeg files instead of using ffmpeg?

The idea is that motion generates jpegs, then at the end of the event launches the encoding in .avi (using mencoder)

  • let motion generate jpegs (disable ffmpeg_cap_motion option)
  • add this option: on_event_end (e.g. I use: on_event_end /home/guillo/bin/motion_encode_and_delete_jpgs ) This is the bash script:
#!/bin/sh
FOLDER=/path/to/your/jpeg/folder   # set this to your motion target_dir
DATER=`date +%d-%m-%s` 
cd ${FOLDER} 
ls ${FOLDER}/*.jpg >tmplist_$DATER 
mencoder "mf://${FOLDER}/*.jpg" -ovc lavc -o motion${DATER}.avi 
cat $FOLDER/tmplist_$DATER|xargs rm -- 
rm tmplist_$DATER 

  • modify the FOLDER variable with your motion folder, save the bash script somewhere (e.g. /home/guillo/bin/motion_encode_and_delete_jpgs ) and execute
     chmod a+x <bash script path> 
    That's it. Notes:
  • you need mencoder installed
  • please try the script manually before using it automatically; run it with ./ followed by the script name. It saves the avi in the folder where it finds the images
  • the on_event_end command is performed "gap" seconds after the end of the detected motion. That means that the avi will appear only after "gap" seconds. The gap option is 60 by default. So, if you think that the script is not working but it works when launched manually, you probably need to lower the "gap". I use gap 10. To be extra clear, my config file contains:
     on_event_end /home/guillo/bin/motion_encode_and_delete_jpgs
     gap 10 

I got this message when I tried to build motion : "avformat.h: No such file or directory"

You do not have any or do not have the correct ffmpeg development files installed. Be sure to check the Motion Installation Guide - Preparation for Install section. And also read the very detailed guide to the installation and use of ffmpeg with Motion in the MpegFilmsFFmpeg section.

In general these are the most common reasons.

  • You run a distro such as Fedora, RedHat etc. and you have installed the rpm package ffmpeg but forgotten the package ffmpeg-devel, which is needed when you build Motion from sources.
  • You have an older version of ffmpeg installed as an rpm. You then install it additionally from sources in /usr/local/ffmpeg. When the linker looks for the libraries it finds the old one in /usr and then assumes the header files are in /usr/include instead of /usr/local/include. So uninstall the old rpm version with rpm -e ffmpeg so you know you only have one ffmpeg version. Or have only the rpm version incl. ffmpeg-devel.
  • You installed ffmpeg from sources but forgot to add --enable-shared when you ran configure.
  • You do not have /usr/local/lib in /etc/ld.so.conf (or in a file included by it). Remember to run the command ldconfig after you add the text.
  • You do not have the ffmpeg header files, or their location is not a common location (/usr/include or /usr/local/include), so you have to run configure with --with-ffmpeg=<directory>.

My pgm mask files fail to load

If your pgm files exist and look valid, but motion won't load them, logging:

Mar 6 09:50:14 symons motion: [1] Failed reading size in pgm file: Success 
Mar 6 09:50:14 symons motion: [1] Failed to read mask image. Mask feature disabled. 

Motion cannot handle blank lines at the top of the file. Mine had a blank line under the comment and before the size, which caused this error. Edit it by hand and make the first few lines fit this format. The comment (#) line is optional.

 P5 
 # created by some app 
 352 288 
 255 

Also, some old apps use a different variant with a P2 header. If this happens, pass the file through ImageMagick's convert, which will bump it up to P5 but will not remove blank lines:

 [ant@symons motion]$ convert front_mask.pgm front_mask2.pgm 

Firefox shows a still image, that never stops loading, if I press F5 it starts refreshing.

In Firefox go to about:config (type it in as if it were a URL; there is no menu entry for it) and change the default value of browser.cache.check_doc_frequency to 1 (double-click on the line to change the setting).

Problems with motion-3.2.9 creating new movie files

See the release notes for 3.2.9.

Get the patch and apply to motion-3.2.9 :

tar xfvz motion-3.2.9.tar.gz
cp 3.2.9-ffmpeg-creation-newfile.diff motion-3.2.9/.
cd motion-3.2.9
patch < 3.2.9-ffmpeg-creation-newfile.diff
./configure ; make ; make install

sync error in proc xxxx: No space left on device

That's a common problem with some drivers/USB topologies: the requested bandwidth could not be satisfied. So try to connect that camera to another USB host controller (not a hub), or get a PCI USB host card. Another solution is to patch your webcam module.

- Best solution :

Support question

- Some alternative solutions :

look here

Cannot create mpeg4 videos or motion has segfault, errors , etc

Probably you are using an ffmpeg version not supported by or compatible with motion. Note, however, that you can have an ffmpeg installed without mpeg encoder support; Debian's is:

Problems running motion with kernel 2.6.27 and above

You probably have issues with palette conversion, which is still not supported in motion, but you can use libv4l.

Read this support question :

Resolution and palette issues with some drivers and USB 1.x / 2.0

I have a webcam that supports resolutions up to 640x480 and I set up motion.conf accordingly (width 640 and height 480), but the image is reduced when I plug my camera into USB 1.1, while it works fine when plugged into USB 2.0.

Most usual issues running motion in Debian / Ubuntu

Read this topic for most usual issues running motion in debian / ubuntu

How do I see more than one camera stream at a time?

Method 1 - Presentational HTML

If you have more than one camera being watched by motion, it's quite nice to see them all at once on one web page. A simple way to do this is to put this code into a page and store it on your motion server, or even as a file on your desktop.
<body bgcolor="#000000">
<a href=""> <img src= border="0" width=49%></a>
<a href=""> <img src= border="0" width=49%></a> <br>
<a href=""> <img src= border="0" width=49%></a>
<a href=""> <img src= border="0" width=49%></a>

and I have also four files called ip1.html through ip4.html which all contain this code:

<html> <body bgcolor="#000000"> <img src= border="0"> </body> </html>

This is done because it seems to produce a more stable image. Otherwise if you use a line like this

 <a href=""> <img src="" width="49%"> </a>

in the original file, I found the second image once you clicked on the thumbnail to be quite jerky and tended to not fully draw.

Other ideas
  • You can also combine image streams from several different servers running motion, just reference different addresses
  • Adjust the resolutions and scaling to suit different sized camera outputs, if you have them.

Method 2 - Simplicity at its finest... basic HTML

Below is probably the simplest way to view multiple camera feeds at once. It's a simple HTML file that does one thing and one thing only; view multiple cameras at once. All you have to do is paste these few lines into a text file, alter the IP and port numbers to match your setup, and give it a name. Just make sure it ends with .html, such as my file, which is named "motion.html"

<body bgcolor=000000>
<img src= border="0" width=49%>
<img src= border="0" width=49%>
<img src= border="0" width=49%>
<img src= border="0" width=49%>

The IP is the same in every line because you're referencing the IP of your Motion server via the webcam feature in the config file. The port numbers change to correspond with each camera. In the above example, you would effectively get a 2x2 grid. Why? Because of the 49% width. If you would like a 3x3 grid, adjust your width percentage accordingly (32% would likely be optimal so there's a slim border between each image). Using percentages is nice because the grid auto-scales to fit whatever monitor you are using. This allows me to use the same motion.html file on my desktop with very high resolution monitors, but also on my laptop, whose resolution is quite different.

You can also get fancy with this and bind a hot key to launch your browser with this custom html file. I bound CTRL ALT M to this command: firefox file:///home/jason/Documents/motion.html. All I have to do is hit CTRL ALT M and in an instant I have full view of my Motion feeds.

Note: This example has been revised. In my original example that I have now altered, I had <a href= tags in each <img src line so you could click on the camera feed to pull it up full screen. I've had nothing but problems with the full screen mode since (at the time of this writing) recent versions of Chrome simply don't work and Firefox has what seems to be a memory leak which will happily max out the RAM on your system. As a result, I just stick to only the <img src lines and it works perfectly in my experience with both Firefox and Chrome.

Method 3 - Slider

Here's a similar set, but gives you a slider to dynamically size each image.

Note: if your cameras are 4:3 ratio, use "(ui.value / 4) * 3;" instead of "(ui.value / 5) * 4;".
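The height expression simply scales the slider's width value by the stream's aspect ratio. A quick check of the math, using the slider's initial value of 360:

```shell
# 5:4 streams (e.g. 720x576): height = (width / 5) * 4
width=360
echo $(( width / 5 * 4 ))   # prints 288
# 4:3 streams (e.g. 640x480) would use (width / 4) * 3, giving 270
```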

This example is thanks to Primer on #motion, on the freenode IRC network.

<!DOCTYPE html>
<html>
<head>
<!-- jQuery and jQuery UI loaded from the public CDN; any recent 1.x release should work -->
<link href="https://code.jquery.com/ui/1.11.4/themes/smoothness/jquery-ui.css" rel="stylesheet" type="text/css"/>
<script src="https://code.jquery.com/jquery-1.11.3.min.js"></script>
<script src="https://code.jquery.com/ui/1.11.4/jquery-ui.min.js"></script>
<style type="text/css">
#resize {
        margin-top: 0px;
        margin-bottom: 20px;
        display: inline-block;
}
#slider {
        margin-top: 0px;
        margin-bottom: 0px;
        margin-left: 20px;
        width: 200px;
        display: inline-block;
}
#sliderValue {
        margin-left: 20px;
        display: inline-block;
}
</style>
<script type="text/javascript">
$(document).ready(function() {
        $('#slider').slider ({
                'max': 720,
                'min': 160,
                'step': 1,
                'value': 360,
                slide: function (event, ui) {
                        var height = parseInt (ui.value / 5) * 4;
                        $('#sliderValue').text (ui.value + ' x ' + height);
                },
                change: function (event, ui) {
                        var height = parseInt (ui.value / 5) * 4;
                        $('#cam1').attr ('width', ui.value);
                        $('#cam1').attr ('height', height);
                        $('#cam2').attr ('width', ui.value);
                        $('#cam2').attr ('height', height);
                        $('#cam3').attr ('width', ui.value);
                        $('#cam3').attr ('height', height);
                        $('#cam4').attr ('width', ui.value);
                        $('#cam4').attr ('height', height);
                }
        });
});
</script>
</head>
<body>
<div id="resize">Resize:</div><div id="slider"></div><div id="sliderValue">720 x 576</div><br>

<!-- 192.168.1.2 and ports 8081-8084 are examples; use your own server IP and webcam ports -->
<img id="cam1" src="http://192.168.1.2:8081" width="360" border="0">
<img id="cam2" src="http://192.168.1.2:8082" width="360" border="0">
<img id="cam3" src="http://192.168.1.2:8083" width="360" border="0">
<img id="cam4" src="http://192.168.1.2:8084" width="360" border="0">
</body>
</html>

Compile ffmpeg from sources

The latest trunk Motion supports ffmpeg compiled from source, so you can pull the latest ffmpeg from git and build Motion using your own ffmpeg libraries.

When the configure process can't detect the ffmpeg libraries, you will see this warning:
* libavcodec.a or libavcodec.so or           *
* libavformat.a or libavformat.so not found: *

Download and compile ffmpeg:
cd /home/user/git
git clone https://git.ffmpeg.org/ffmpeg.git ffmpeg
cd ffmpeg
./configure --prefix=/home/user/git/ffmpeg/out
make install

Now you can configure motion and specify path for ffmpeg libraries:
./configure --with-ffmpeg=/home/user/git/ffmpeg/out

If you get linking errors, like this:
/home/user/git/ffmpeg/out/lib/libavcodec.a(opusdec.o): In function `opus_decode_subpacket':
/home/user/ffmpeg/libavcodec/opusdec.c:376: undefined reference to `swr_is_initialized'

Try modifying Motion's Makefile and adding the extra libraries, like:
LIBS         = -lswresample -lrt

-- TosiaraT - 22 Apr 2015

conf_cmdparse: Unknown config option

If you see warnings like this when starting Motion, you are most probably using an old config file with a newer version of Motion: some config parameters have been renamed. For example, "jpeg_filename" became "picture_filename". Just update your config to match the recent changes.
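For a rename like this, a one-line sed can migrate the config in place. A minimal sketch: sample.conf stands in for your real motion.conf, the option value shown is just an example, and jpeg_filename/picture_filename is the only rename demonstrated, so check the changelog of your Motion version for the full list.

```shell
# sample.conf stands in for your real motion.conf;
# the filename pattern is only an example value.
printf 'jpeg_filename %%v-%%Y%%m%%d%%H%%M%%S-%%q\n' > sample.conf
cp sample.conf sample.conf.bak                    # keep a backup first
sed -i 's/^jpeg_filename/picture_filename/' sample.conf
```

(Note: `sed -i` as used here is the GNU sed form common on Linux.)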

-- TosiaraT - 22 Apr 2015

Corrupt JPEG data: 4 extraneous bytes before marker 0xd4

This can happen with some MJPEG web cameras, for example the Logitech C910. As of now, Motion requires a patch to handle these. See details:

-- TosiaraT - 23 Jul 2015