Feature Request: Ability to output_normal the middle frame
I have a camera observing a street, and I would like to be able to distinguish movies of people from those of passing cars in the preview images. However, 'output_normal best' usually gives a blank image, taken just after the vehicle has left the scene. The frame I would like to see, where the vehicle is centered in the image, is probably the frame midway through the video, not counting pre- and post-capture frames. Would it be feasible and useful to add this to the output_normal setting?
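For what it's worth, the middle-frame index could be computed along these lines. This is only a sketch with made-up names (`middle_frame_index`, `precap`, `postcap` are illustrative, not Motion settings or code):

```c
/* Hypothetical sketch, not Motion's actual code: given the total number
 * of frames saved for an event and the pre-/post-capture counts, pick
 * the index of the frame midway through the motion itself. */
static int middle_frame_index(int total, int precap, int postcap)
{
    int motion_frames = total - precap - postcap;
    if (motion_frames <= 0)
        return total / 2;            /* degenerate event: overall middle */
    return precap + motion_frames / 2;
}
```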
- 19 Jan 2007
You are probably getting the frame with the most movement, because the car 'was' in the frame and now 'it's not'. I have seen similar things on my captures.
It sounds like the 'best' frame in your case would not be the one with the most motion, but this:
" the frame with the motion area most centred in the image "
This might be possible to code by testing the x- and y-position of the motion area, but it's not an area of the program I've looked at.
Perhaps also a 'pre-capture' setting of 1 would help, since then there is, usually, a frame with no car in it (or part of a car).
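To illustrate the "most centred" test: one way is to score each frame's binary motion mask by how far the centroid of its motion pixels lies from the image centre, and keep the frame with the lowest score. A rough sketch (the name `center_score` is made up, not anything in Motion's alg.c):

```c
/* Hypothetical sketch, not Motion's actual code: score a binary motion
 * mask by the squared distance between the centroid of its motion
 * pixels and the image centre.  Lower score = more centred. */
static double center_score(const unsigned char *mask, int w, int h)
{
    long sx = 0, sy = 0, n = 0;
    for (int y = 0; y < h; y++)
        for (int x = 0; x < w; x++)
            if (mask[(long)y * w + x]) { sx += x; sy += y; n++; }
    if (n == 0)
        return 1e30;                 /* no motion at all: worst score */
    double dx = (double)sx / n - (w - 1) / 2.0;
    double dy = (double)sy / n - (h - 1) / 2.0;
    return dx * dx + dy * dy;
}
```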
- 02 Feb 2007
Centering the motion might help, but it still wouldn't distinguish the appearance of an object from its disappearance. How about the frame most different from a baseline frame, saved from before motion was detected? That way if a person comes into a room, you would see the frame where the person is closest to the camera and/or most fully in the field. That would be nice.
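A sketch of that "most different from a baseline" test: count the pixels that differ from the saved baseline frame by more than a noise margin, and keep the frame that maximises the count. Names here are hypothetical, not Motion's actual code:

```c
/* Hypothetical sketch, not Motion's actual code: number of pixels in
 * 'frame' that differ from the saved 'baseline' by more than 'noise'
 * grey levels.  The preview would be the frame with the largest count. */
static long baseline_diff(const unsigned char *frame,
                          const unsigned char *baseline,
                          long npixels, int noise)
{
    long changed = 0;
    for (long i = 0; i < npixels; i++) {
        int d = (int)frame[i] - (int)baseline[i];
        if (d < 0)
            d = -d;
        if (d > noise)
            changed++;
    }
    return changed;
}
```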
- 16 May 2007
>> Centering the motion might help, but it still wouldn't distinguish the appearance of an object from its disappearance
True. I have the same problem as you: some of my motion frames are from when an object has disappeared, but I don't know what the object was! I run a low-fps system, so I sometimes miss an object's appearance.
Your suggestion is good; there are already a number of saved frames in the memory space of each Motion instance. Adding another might be OK on a high-memory machine but could cause problems on lesser hardware. There is also the question of maintaining that frame - how would it be done? A reference frame is already maintained, with some rules of persistence etc., against which new frames are compared to check for motion. Somehow that process isn't working well enough in our cases.
I don't know precisely how Motion detects motion at present, but I believe it works in black and white. I will have to look at the code to see. (The routines of interest are "alg_std" and its relations, in alg.c.)
Here's a possible change: Motion could keep one count of motion-pixels where the movement has made them darker, and another count where they have gone lighter. Each of those counts could then be compared against the threshold separately to see whether motion has occurred. (We would have to keep track of both to be able to detect light objects appearing in a dark frame, and dark objects in a light frame. We could not sum the two, or some movements would balance out, resulting in non-detection.) What do you think of that?
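A minimal sketch of that idea, with made-up names (`count_signed_motion` and `motion_detected` are illustrative, not functions in alg.c): keep separate darker/lighter counts and test each against the threshold on its own:

```c
/* Hypothetical sketch, not Motion's actual code. */
struct motion_counts {
    long darker;    /* pixels that got darker than the reference  */
    long lighter;   /* pixels that got lighter than the reference */
};

static struct motion_counts count_signed_motion(const unsigned char *frame,
                                                const unsigned char *ref,
                                                long npixels, int noise)
{
    struct motion_counts c = {0, 0};
    for (long i = 0; i < npixels; i++) {
        int d = (int)frame[i] - (int)ref[i];
        if (d < -noise)
            c.darker++;
        else if (d > noise)
            c.lighter++;
    }
    return c;
}

/* Motion if EITHER count alone exceeds the threshold.  Summing the
 * signed differences instead would let equal amounts of darkening and
 * lightening cancel out, resulting in non-detection. */
static int motion_detected(struct motion_counts c, long threshold)
{
    return c.darker > threshold || c.lighter > threshold;
}
```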
- 29 May 2007
The problem with the 'best' preview image is the way we detect motion today. To be more precise: it's the way we calculate the reference frame. Unfortunately this is THE HEART of Motion and has to be considered carefully before we mess with it. The good news is that I am working on that right now, and in the near future there may be a solution for many of the problems related to improper location of objects.
I suggest being patient for a while and not adding any further workarounds for the location problem. It will definitely be resolved when the implementation of the new reference frame algo is finished.
- 30 Sep 2007