Persistics: A Revolution in Motion Imagery Processing

Jul 9, 2011

By Roger Gant, a contract employee supporting the InnoVision communication team for the National Geospatial-Intelligence Agency (NGA), Bethesda, Md.


As technology continues to steadily multiply the amount of information available from current and future intelligence sensors, how can analysts expect to keep up? More importantly, how can they make the leap from analyzing discrete events to fully understanding the larger patterns of activity that battlefield commanders need to conduct asymmetric warfare?

The InnoVision Directorate at the National Geospatial-Intelligence Agency (NGA) is quietly enabling that leap today in the realm of motion imagery, an increasingly valuable source of information for users, whether they're confronting the Taliban in Afghanistan or tackling natural disasters in the United States or elsewhere. Through an advanced data processing initiative called persistics, analysts will be able to apply automation to exploit large swaths of wide-area motion imagery and improve their ability to perform activity-based analysis. Automated processing techniques within persistics will significantly reduce the amount of motion imagery that analysts need to search, retrieve and display on their workstations.

How Persistics Works

Persistics is a partnership success story between NGA and the Department of Energy's Lawrence Livermore National Laboratory (LLNL). According to John Rush, InnoVision liaison to the NGA Office of Defense, the initiative represents a fundamental change in processing that has the potential to significantly advance geospatial intelligence (GEOINT) analytical tradecraft.

"With motion imagery, it's the motion that's important, not the imagery," says Rush. "Handling wide-area motion imagery like traditional imagery quickly overwhelms communications and storage."

The persistics effort is aimed at reducing the data-handling burden on networks, servers and analysts by separating the movement in a video scene from the unchanging background. The process reduces the time needed to analyze motion imagery from hours or days to just minutes by relieving analysts of the task of manually creating tracks of thousands of moving objects.
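As a rough illustration of the kind of automation involved, the sketch below links per-frame mover detections into tracks with a simple nearest-neighbor rule. It is a conceptual stand-in only, not the persistics tracker; the input format, function name and 15-pixel linking threshold are assumptions made for this example.

    # Hypothetical sketch: turn per-frame mover detections into tracks by
    # nearest-neighbor association. Illustrative only; thresholds are assumed.
    import numpy as np

    MAX_LINK_DISTANCE = 15.0  # pixels a mover may travel between frames (assumed)

    def link_detections(frames):
        """frames: list of (N, 2) arrays of mover centroids, one per frame.
        Returns tracks as lists of (frame_index, x, y) points."""
        tracks, active = [], []
        for t, detections in enumerate(frames):
            unmatched = list(range(len(detections)))
            still_active = []
            for ti in active:
                _, lx, ly = tracks[ti][-1]
                # find the closest unmatched detection to this track's last point
                best, best_d = None, MAX_LINK_DISTANCE
                for j in unmatched:
                    d = np.hypot(detections[j][0] - lx, detections[j][1] - ly)
                    if d < best_d:
                        best, best_d = j, d
                if best is not None:  # extend the existing track
                    tracks[ti].append((t, detections[best][0], detections[best][1]))
                    unmatched.remove(best)
                    still_active.append(ti)
            for j in unmatched:  # any leftover detection starts a new track
                tracks.append([(t, detections[j][0], detections[j][1])])
                still_active.append(len(tracks) - 1)
            active = still_active
        return tracks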

The persistics initiative is particularly timely now. Today, motion imagery analysts primarily exploit narrow-field-of-view systems like the full-motion video data from the MQ-1 Predator, which covers a small area and small number of movers at a time. Current wide-area motion imagery systems, such as Constant Hawk, already have the ability to image thousands of movers, and future wide-area systems will potentially track millions of movers as areas of coverage and mission durations increase. Today's legacy processing can't begin to exploit the volume of data and provide analysts with the right information. As a result, analysts spend many hours manually extracting individual mover tracks from wide-area motion imagery.

Persistics uses a process called dense correspondence, which results in stable imagery. As a wide-area surveillance platform flies a typical orbit, data are collected through a system of multiple cameras, creating large data sets: one terabyte an hour for existing sensors like Constant Hawk. Dense correspondence processing algorithms separate the movers in the data stream from the stable or unchanging background. Then movement data are streamed back to users, along with periodic updates of the unchanging background. This radical processing enables persistics to reduce the amount of data needing to pass through communication paths (to the analyst and eventually into storage) by more than 1,000 times.
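A toy version of that idea, assuming a single stabilized greyscale stream, might look like the sketch below: keep a running background estimate, transmit only the pixels that change, and send a full background keyframe periodically. The threshold, adaptation rate and keyframe interval are invented for illustration and do not reflect the LLNL algorithms.

    # Toy model of mover/background separation for transmission. Conceptual
    # only; not the dense-correspondence processing used by persistics.
    import numpy as np

    DIFF_THRESHOLD = 25      # grey-level change treated as movement (assumed)
    BACKGROUND_ALPHA = 0.05  # background adaptation rate (assumed)
    KEYFRAME_INTERVAL = 300  # full background update every N frames (assumed)

    def encode_stream(frames):
        """frames: iterable of 2-D uint8 arrays (stabilized greyscale imagery).
        Yields ('keyframe', image) or ('movers', pixel_indices, pixel_values)."""
        background = None
        for i, frame in enumerate(frames):
            f = frame.astype(np.float32)
            if background is None or i % KEYFRAME_INTERVAL == 0:
                background = f.copy()
                yield ('keyframe', frame)  # periodic background update
                continue
            moving = np.abs(f - background) > DIFF_THRESHOLD
            idx = np.flatnonzero(moving)
            yield ('movers', idx, frame.ravel()[idx])  # only the changed pixels
            # fold non-moving pixels slowly into the background estimate
            background[~moving] += BACKGROUND_ALPHA * (f[~moving] - background[~moving])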

Persistics' features include an ability to stabilize imagery and eliminate scene jitter, parallax and other image imperfections common in existing wide-area motion imagery. In addition, it facilitates 3-D extraction, which literally adds a new dimension to analysis. These features represent a significant leap forward in the concept of activity-based GEOINT.
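For the stabilization step specifically, a rough single-frame-pair analogue using off-the-shelf OpenCV primitives (ORB features plus a RANSAC homography) is sketched below. Persistics' actual processing handles jitter and parallax across multi-camera, wide-area mosaics; this only shows the general frame-to-reference alignment idea, and the function name and parameter values are assumptions.

    # Sketch of frame-to-reference stabilization with standard OpenCV tools.
    # Illustrative only; parameter values are assumed.
    import cv2
    import numpy as np

    def stabilize(reference, frame):
        """Warp a greyscale frame into the pixel grid of a reference frame."""
        orb = cv2.ORB_create(2000)
        k1, d1 = orb.detectAndCompute(reference, None)
        k2, d2 = orb.detectAndCompute(frame, None)
        matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
        matches = sorted(matcher.match(d2, d1), key=lambda m: m.distance)[:500]
        src = np.float32([k2[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
        dst = np.float32([k1[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
        H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)  # robust fit
        h, w = reference.shape
        return cv2.warpPerspective(frame, H, (w, h))  # align frame to reference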

"Traditionally, every pixel was transmitted, analyzed and stored," says Rush. "Persistics allows an analyst to see the pixels that change to observe the activity of actors and entities (the movers) over long periods of time. Activity reveals relationships between movers and specific locations and leads to greater context and knowledge about specific objects and their environment."

Widespread Use

Persistics is on track for transition to the user community during the coming year. NGA's InnoVision Directorate is collaborating with the Army Research Laboratory on the deployment of a persistics ground processing capability for the Constant Hawk airborne collection platform. InnoVision is also leading an effort to support the Department of Defense Intelligence, Surveillance and Reconnaissance Task Force and the U.S. Air Force to modify the processing chain to accommodate the data from the Defense Advanced Research Projects Agency's Autonomous Real-Time Ground Ubiquitous Surveillance Imaging System sensor, which is expected to be fielded on the MQ-9 Reaper as part of the Gorgon Stare II system.

According to Rush, the technology inherent in persistics offers an opportunity to industry. "Once the prototype is available, we plan to make the technology available to sensor system and exploitation tool developers for broader application across the community," he says. "This is a great way for NGA to exercise functional management through our partnership with industry."

Persistics adds relevance to motion imagery at a time when more data are available than ever before. Motion imagery is made more intuitive by automating activity detection, enhancing standards, facilitating fusion and advancing tradecraft. InnoVision is proving that, with the right processing techniques, the richness of large motion imagery data sets can be fully exploited by automatically capturing the essential elements of movement in a greatly reduced format.
