By Jay D. Krasnow, Office of Integrated Analytic Services, National Geospatial-Intelligence Agency (www.nga.mil), Springfield, Va.
From Osama bin Laden’s fortress to the damage caused by the tsunami at Japan’s Fukushima Nuclear Power Plant, scene visualization helps National Geospatial-Intelligence Agency (NGA) partners obtain situational awareness of unfamiliar terrain and develop threat analyses in 3-D and even 4-D perspectives.
This all happens at NGA’s Office of Integrated Analytic Services, where a team of visualization specialists uses a variety of modeling and geospatial software tools to craft interactive scene visualization products. But it took forward thinking and a lot of experimentation to bring the program to where it is today.
In the late 1980s, modeling and geospatial software were still in their infancy, and building even a 15-second video of a fly-through—a geospatial product that creates the illusion of the viewer flying over a target—was a painstaking process, according to current and former NGA scene visualization specialists.
A Steady Evolution
Through the mid-1990s, analysts at the Defense Mapping Agency and other NGA predecessor agencies typically reviewed imagery flats on analog light tables. To create three dimensions from the imagery, scene visualization specialists had to squint through the eyepiece of a stereo lens to measure structures and topography point by point using a room-sized stereo comparator, a piece of equipment that generated 3-D coordinates.
The stereo comparator enabled simultaneous viewing of two satellite images of the same area taken from slightly different viewpoints, giving the illusion of a 3-D perspective. To achieve the illusion, scene visualization specialists placed each film flat of the different shots on its own glass stage. Then they moved the stages remotely with track balls similar to the modern computer mouse.
The comparator projected a focused beam of light onto the images. To pinpoint a ground location, the specialist maneuvered each stage independently until this dot of light, called a reticule, fused onto the same spot in both images. The specialist then recorded the 3-D coordinate and saved the manually measured image points to bulky reel-to-reel magnetic tapes, which were later loaded into a computer-aided design application that built the digital elevation model and 3-D building structures.
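The geometry behind this point-by-point measurement is the classic parallax-difference relation from stereo photogrammetry: the apparent shift of a feature between the two images grows with its height above the datum. As a minimal sketch, assuming the textbook formula rather than NGA's actual comparator software, and with entirely hypothetical values:

```python
# Illustrative only: the standard parallax-difference formula from stereo
# photogrammetry, not the comparator's actual computation. All inputs are
# hypothetical.

def height_from_parallax(flying_height_m, base_parallax_mm, parallax_diff_mm):
    """Estimate an object's height above the datum from the x-parallax
    difference measured between two overlapping images.

    dh = H * dp / (b + dp)
    where H  = flying height above the datum,
          b  = absolute parallax at the datum (the photo base),
          dp = measured parallax difference for the object.
    """
    return flying_height_m * parallax_diff_mm / (base_parallax_mm + parallax_diff_mm)

# Hypothetical measurement: 3,000 m flying height, 90 mm photo base,
# 1.5 mm parallax difference for the top of a structure.
height = height_from_parallax(3000.0, 90.0, 1.5)
print(round(height, 1))  # roughly 49.2 m
```

Repeating this for thousands of points, by hand, is what made building a terrain model so painstaking before digital workflows.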
In the late 1990s and early 2000s, the stereo comparator gave way to digital imagery and stereo workstations. Scene visualization specialists now could scan film flats using high-resolution scanners and pipe the scanned images to stereo-capable UNIX workstations, powerful computers NGA analysts used to review images in 3-D.
Despite technological advances, the specialists still obtained ephemeris data, the technical information on how an image was taken, through a painstaking manual process. Ephemeris data are important because they enable precise geopositioning and mensuration, the scientific term for measuring the lengths or positions of objects observed in imagery.
In the early 2000s, digital point positioning database files, highly refined stereo imagery used for precise geopositioning, were available only on 8mm tapes, and digital terrain elevation data were available only on CD. Analysts had to check out these files from agency libraries in Bethesda, Md., or St. Louis.
Today, analysts no longer need to scan images, copy ephemeris data or obtain georeferencing data because all geospatial intelligence (GEOINT) analyst workstations automatically integrate that information into their digital light tables.
“What used to take days, now takes minutes,” said the branch chief for GEOINT Scene Visualization at the Office of Integrated Analytic Services.
Extracting the measurements and entering the data into a computer weren’t the only time-intensive steps involved in building scene visualizations. Rendering the data—i.e., building an executable file—could take hours. In fact, in the mid-1990s, it took two to three minutes to render just one frame of a fly-through, and it took 24 frames to compose one second of video. The process took so long, explained a former chief photogrammetrist at NGA, that visualization specialists would “press a button and let the file render overnight.”
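The arithmetic behind that overnight wait is easy to check. A back-of-envelope sketch, using only the figures quoted above:

```python
# Back-of-envelope check of the mid-1990s render times described above.
FRAMES_PER_SECOND = 24        # frames needed per second of video
CLIP_SECONDS = 15             # the 15-second fly-through mentioned earlier
MINUTES_PER_FRAME = (2, 3)    # two to three minutes to render each frame

frames = FRAMES_PER_SECOND * CLIP_SECONDS            # 360 frames total
low, high = (m * frames / 60 for m in MINUTES_PER_FRAME)
print(f"{frames} frames -> {low:.0f} to {high:.0f} hours")
# 360 frames -> 12 to 18 hours
```

Twelve to 18 hours for a 15-second clip: hence the press-a-button-and-go-home workflow.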
Additionally, because few computer networks existed in the mid-1990s, a project scientist in the Industry Outreach Division of InnoVision, an NGA directorate, recalls the obstacles they faced in terms of product delivery.
“Due to feeble or no network connectivity, we had to build in time to deliver products on external media by hand or by mail,” he said. “We also had to be concerned with troubleshooting hardware and software issues on the customer’s end.”
“Nowadays, thanks to modern network communications and the proliferation of visualization software, these issues are pretty much gone,” added the branch chief.
Google Earth’s Impact
The genesis of network communications in scene visualization dates to the early 2000s, when NGA invested $667,500 in Keyhole, Inc., an investment that eventually grew to $2.15 million. Keyhole was a Mountain View, Calif.-based software development firm that specialized in geospatial data visualization applications. Google bought Keyhole in 2004 and re-released its flagship product one year later as Google Earth, a striking example of how funding for government research in new technologies spurred innovation in the private sector.
As the new technology, including Google Earth, improved the NGA scene visualization team's ability to create products quickly and efficiently, the demand for scene visualization products grew, especially for fly-throughs. A steady stream of military, intelligence and policymaker customers solicited the team to help them prepare for military operations and major events like the 19th International Federation of Association Football World Cup in South Africa.
“It’s amazing how far scene visualization has come in the last five years,” said the team lead for scene visualization. “Nowadays, specially built high-end computers aren’t necessary to view detailed 3-D visualization products, and soon this technology will be commonplace on many mobile platforms.”
Thanks to the NGA Pathfinder staff for their assistance with this column.