
1- Optical Flow

Optical flow (OF) is the apparent displacement field measured in the image coordinate frame of a video sequence. It results from the independent motion of objects within the scene as well as from sensor ego-motion, but it can also be affected by temporally varying illumination.

Optical flow estimation requires some kind of spatial regularization: the motion of a single pixel, or even of a weakly textured region, is in general not observable (a problem referred to as "the aperture problem" in computer vision). Classical estimation methods can be clustered into two large families, associated with two seminal publications of the 1980s.
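The aperture problem can be made concrete with the standard brightness-constancy derivation (textbook material, independent of any particular method). Writing that a point keeps its intensity along its motion and expanding to first order:

```latex
I(x + u,\ y + v,\ t + 1) = I(x, y, t)
\quad\xrightarrow{\text{1st-order expansion}}\quad
I_x\, u + I_y\, v + I_t = 0
```

This is a single scalar equation for the two unknowns (u, v) at each pixel: only the flow component along the image gradient is observable, which is why some form of spatial regularization or window aggregation is needed.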

Methods of the Horn and Schunck type rely on a global formulation of the estimation as the search for a vector field that aligns the images (registration term) and fulfils some regularity constraints (spatial regularity term).
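As a sketch, the classical Horn-Schunck functional combines these two terms (standard formulation with regularization weight α, not necessarily the exact notation of the original paper):

```latex
\min_{u,v}\ \int_\Omega \left( I_x u + I_y v + I_t \right)^2
\;+\; \alpha \left( \|\nabla u\|^2 + \|\nabla v\|^2 \right) \, dx\, dy
```

The first integrand is the registration term; the second is the spatial regularity term.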

Methods of the Lucas-Kanade type rely on parametric registration over local windows (centered on each pixel in the context of dense flow estimation).
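For a pure-translation motion model, the textbook Lucas-Kanade problem at pixel p and its normal equations read (a generic sketch, not FOLKI's specific formulation):

```latex
\min_{u,v} \sum_{q \in W(p)} \bigl( I_x(q)\, u + I_y(q)\, v + I_t(q) \bigr)^2
\;\Longrightarrow\;
\begin{pmatrix} \sum I_x^2 & \sum I_x I_y \\[2pt] \sum I_x I_y & \sum I_y^2 \end{pmatrix}
\begin{pmatrix} u \\ v \end{pmatrix}
= -\begin{pmatrix} \sum I_x I_t \\ \sum I_y I_t \end{pmatrix}
```

The 2x2 matrix is well conditioned only when the window W(p) contains gradients in more than one direction, which is how windowing alleviates the aperture problem.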

The original techniques, such as Horn and Schunck's, only consider small displacements (typically a pixel or less) and rely on a first-order Taylor expansion of the registration term. Since the early 1990s, iterative multiresolution techniques, which re-linearize the registration term several times and work on image pyramids, have been developed in order to estimate large displacements.
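The coarse-to-fine iterative idea can be sketched in a few lines of NumPy. The code below estimates a single global translation between two frames (not a dense field, and not FOLKI's own scheme); all names, the pyramid depth and the iteration count are illustrative:

```python
import numpy as np

def warp(img, dx, dy):
    """Sample img at (x+dx, y+dy) with bilinear interpolation (clamped borders)."""
    h, w = img.shape
    ys, xs = np.mgrid[0:h, 0:w].astype(float)
    xs = np.clip(xs + dx, 0, w - 1.001)
    ys = np.clip(ys + dy, 0, h - 1.001)
    x0, y0 = np.floor(xs).astype(int), np.floor(ys).astype(int)
    fx, fy = xs - x0, ys - y0
    return (img[y0, x0] * (1 - fx) * (1 - fy) + img[y0, x0 + 1] * fx * (1 - fy) +
            img[y0 + 1, x0] * (1 - fx) * fy + img[y0 + 1, x0 + 1] * fx * fy)

def lk_refine(I0, I1, d, iters=8):
    """Gauss-Newton iterations: re-linearize the registration term around d."""
    dx, dy = d
    m = 4  # border margin excluded from the sums (warp is invalid near edges)
    for _ in range(iters):
        Iw = warp(I1, dx, dy)
        gy, gx = np.gradient(Iw)          # axis 0 is y, axis 1 is x
        r = (I0 - Iw)[m:-m, m:-m]
        gxc, gyc = gx[m:-m, m:-m], gy[m:-m, m:-m]
        A = np.array([[np.sum(gxc * gxc), np.sum(gxc * gyc)],
                      [np.sum(gxc * gyc), np.sum(gyc * gyc)]])
        b = np.array([np.sum(gxc * r), np.sum(gyc * r)])
        step = np.linalg.solve(A, b)      # 2x2 normal equations
        dx, dy = dx + step[0], dy + step[1]
    return dx, dy

def halve(img):
    """2x2 box-filter downsampling (one pyramid level)."""
    return 0.25 * (img[0::2, 0::2] + img[1::2, 0::2] +
                   img[0::2, 1::2] + img[1::2, 1::2])

def pyramid_flow(I0, I1, levels=3):
    """Coarse-to-fine estimate of a translation d such that I1(x + d) ~ I0(x)."""
    p0, p1 = [I0], [I1]
    for _ in range(levels - 1):
        p0.append(halve(p0[-1]))
        p1.append(halve(p1[-1]))
    dx = dy = 0.0
    for lvl in range(levels - 1, -1, -1):  # coarsest level first
        dx, dy = lk_refine(p0[lvl], p1[lvl], (dx, dy))
        if lvl > 0:
            dx, dy = 2 * dx, 2 * dy        # rescale for the next, finer level
    return dx, dy

# synthetic pair: the scene in I1 is I0 shifted by (u, v) = (3.0, -2.0) pixels,
# too large for a single first-order linearization at full resolution
ys, xs = np.mgrid[0:64, 0:64].astype(float)
f = lambda x, y: np.sin(0.2 * x) + np.cos(0.17 * y) + 0.5 * np.sin(0.1 * x + 0.13 * y)
u, v = 3.0, -2.0
I0 = f(xs, ys)
I1 = f(xs - u, ys - v)
du, dv = pyramid_flow(I0, I1)   # recovers approximately (3.0, -2.0)
```

Without the pyramid, a 3-pixel shift would sit outside the validity domain of the first-order expansion on the finest frequencies; the coarse levels bring the residual displacement down to a fraction of a pixel before the fine levels refine it.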

2- FOLKI Algorithm

FOLKI is an algorithm developed by G. Le Besnerais and F. Champagnat in 2005 [1] to provide fast, dense optical flow estimates using window-based, iterative, multiresolution registration of the Lucas-Kanade type. FOLKI relies on a specific Taylor expansion of the window registration term, which leads to a consistent iterative scheme while saving many image interpolation operations. This makes FOLKI much faster than other iterative Lucas-Kanade algorithms, such as the pyramidal Lucas-Kanade algorithm of OpenCV, and also more accurate, as shown in the comparative evaluations published on the website of the University of Middlebury. An example of a result obtained on a dataset from this site is presented in Figure 1.

Figure 1. Result of FOLKI on the army data set of the University of Middlebury

Another key feature of FOLKI is its highly parallel structure that fully benefits from modern hardware architectures.

3- CUDA Implementation

As shown in refs. [1,2], an iteration of FOLKI uses pixelwise scalar operations, separable convolutions and image interpolations. Pixelwise operations and convolutions are handled using a one-to-one mapping between pixels and threads. Convolution is based on the separable convolution code of the CUDA SDK. For image interpolation, we rely on the bilinear interpolation offered by the CUDA API. It uses the texture memory of the GPU, which stores data defined on 1D, 2D or 3D integer indexes and returns values at non-integer indexes through a very fast interpolation of limited accuracy. This accuracy - essentially 1/256 of the index step - is sufficient for optical flow computation. The resulting code is denoted FOLKI-GPU in the sequel.
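The 1/256 figure comes from the texture unit representing the fractional part of each coordinate in 8-bit fixed point. A small NumPy sketch (a model of this behavior, not ONERA's code) compares exact bilinear interpolation with the quantized variant:

```python
import numpy as np

rng = np.random.default_rng(0)
img = rng.random((32, 32))  # image values in [0, 1)

def bilinear(img, x, y, frac_bits=None):
    """Bilinear interpolation; if frac_bits is set, the fractional part of the
    coordinates is snapped to a 2**-frac_bits grid, as GPU texture units do."""
    x0, y0 = int(np.floor(x)), int(np.floor(y))
    fx, fy = x - x0, y - y0
    if frac_bits is not None:
        q = 2.0 ** frac_bits
        fx, fy = np.round(fx * q) / q, np.round(fy * q) / q
    return (img[y0, x0] * (1 - fx) * (1 - fy) + img[y0, x0 + 1] * fx * (1 - fy) +
            img[y0 + 1, x0] * (1 - fx) * fy + img[y0 + 1, x0 + 1] * fx * fy)

# worst-case deviation of the quantized lookup over many random positions
pts = rng.random((1000, 2)) * 30  # stay inside [0, 30) so x0+1, y0+1 are valid
err = max(abs(bilinear(img, x, y) - bilinear(img, x, y, frac_bits=8))
          for x, y in pts)
# err stays on the order of 1/256: each fraction moves by at most 1/512, and
# the interpolant's sensitivity to each fraction is bounded by the value range
```

For flow increments refined iteratively to sub-pixel precision, an interpolation error of a few thousandths of the intensity range is negligible compared to image noise.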

4- Performance

While running in real time on a standard laptop, FOLKI-GPU provides results very close to the original results of FOLKI - published for instance on the Middlebury University website - in terms of flow accuracy. FOLKI-GPU therefore allows real-time, dense and accurate estimation of optical flow even for large displacements, in contrast with many previously published real-time optical flow codes, which can only detect motion in highly textured areas. Computation time depends on many parameters, most notably on the radius of the local registration window and on the number of iterations at each resolution level, see Figure 3. With parameters set so as to give good results, the typical processing time of FOLKI-GPU is 32 ms for a full HD video frame (1920x1080 pixels), which means that flow computation at video rate (24 fps) is possible for this format. To our knowledge, such a rate had never been reported before for accurate flow computation on full HD video. Figure 2 shows the gain, expressed in computation time per pixel, between a C implementation of FOLKI (blue curve) and FOLKI-GPU (red curve). For (relatively) small images (512x512) the gain is close to 10, but it reaches 100 for bigger images (2048x4096).
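A quick sanity check of the video-rate claim, using only the figures quoted above:

```python
# figures quoted in the text: full HD frame, 32 ms per frame on the GPU
width, height = 1920, 1080
t_frame = 0.032                          # seconds per frame

fps = 1.0 / t_frame                      # 31.25 frames per second
throughput = width * height / t_frame    # 64.8 million pixels per second
# 31.25 fps exceeds the 24 fps video rate, so real-time full HD flow holds
```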

Figure 2. Per pixel computation time (in log scale) as a function of the total number of pixels per frame for a C implementation of FOLKI (blue curve) and FOLKI-GPU (red curve).

Figure 3. Blue curve: impact of the number of iterations per resolution level; red curve: impact of the window radius.

5- Applications

FOLKI and its CUDA implementation FOLKI-GPU have already been used on several image sequences from various application fields, such as video surveillance, robotic vision - especially in an aerial robotics context - and metrology. These applications are illustrated in the following sequences.

Video 1. Pedestrian motion estimation on a PETS sequence (dataset #3, PETS 2007). Optical flow norm is represented by red color level (from red to white) and a subsampled vector field is also displayed so as to show the estimated motion orientation.

Video 2. Height estimation from a downward-looking aerial video recorded by ONERA's drone Ressac. As the motion of the drone is essentially a translation at constant altitude and the scene is rigid, height is related to the norm of the estimated optical flow (structure-from-motion context).
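The height-flow relation at work here is the standard pinhole one for a camera translating parallel to a rigid scene (a generic sketch, not necessarily ONERA's exact model): with focal length f, inter-frame translation T, and distance Z from the camera to a scene point,

```latex
\|\mathbf{w}\| \approx \frac{f\,\|\mathbf{T}\|}{Z}
\quad\Longrightarrow\quad
Z \approx \frac{f\,\|\mathbf{T}\|}{\|\mathbf{w}\|}
```

so a larger flow norm indicates a smaller camera-to-ground distance, i.e. taller structures under a downward-looking camera at constant altitude.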

Video 3. Metrology based on imagery and optical flow estimation in fluid mechanics. The displacement field of a turbulent fluid is imaged in a PIV (particle image velocimetry) experiment (FLUIDS dataset, Package 3, B. Wieneke, LaVision). The color scale shows the vorticity of the OF field estimated with FOLKI-GPU. A subsampled version of the estimated vector field is also displayed.
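Vorticity is the scalar curl of the 2-D velocity field, ω = ∂v/∂x − ∂u/∂y. A minimal NumPy computation on a synthetic rigid-rotation flow (illustrative data, not the LaVision sequence):

```python
import numpy as np

# synthetic rigid-body rotation about the origin: u = -omega * y, v = omega * x
omega = 0.5
ys, xs = np.mgrid[-10.0:11.0, -10.0:11.0]
u, v = -omega * ys, omega * xs

# finite-difference curl of the flow field: vorticity = dv/dx - du/dy
du_dy, du_dx = np.gradient(u)   # np.gradient returns axis-0 (y) then axis-1 (x)
dv_dy, dv_dx = np.gradient(v)
vorticity = dv_dx - du_dy       # uniform field equal to 2 * omega
```

Applied to a FOLKI-GPU flow field, the same two `np.gradient` calls turn the displacement map into the vorticity map shown in the video.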

[1] G. Le Besnerais and F. Champagnat, "Dense optical flow estimation by iterative local window registration," in Proceedings of IEEE ICIP'05, vol. 1, Genova, Italy, Sept. 2005.

[2] F. Champagnat, A. Plyer, G. Le Besnerais, B. Leclaire and Y. Le Sant, "How to calculate dense PIV vector fields at video rate," 8th Int. Symp. on Particle Image Velocimetry, PIV'09, Melbourne, Victoria (Australia), August 25-28, 2009.