Moving from Pixels to GEOINT Information

Jun 9, 2015

By Jarlath O'Neil-Dunne, director, University of Vermont Spatial Analysis Laboratory, and instructor, Penn State GEOINT program.

From new platforms, such as unmanned aircraft systems (UASs) and cubesats, to new sensors, such as light detection and ranging (LiDAR) technology, the amount of remotely sensed data is increasing exponentially. How do we make sense of all these pieces of data and turn them into geospatial intelligence (GEOINT)? Do we need armies of analysts combing through it all? Can we develop the equivalent of a Staples easy button to automatically extract what we need in one click? The challenges of dealing with today's data deluge are significant, but technology advances are enabling us to develop robust, automated solutions, making skilled GEOINT analysts more important than ever.

Extraction's Evolution

Extracting information from remotely sensed data hasn't always posed challenges for the intelligence community. During the Cold War the need to collect imagery without detection was paramount, leading to the development of cutting-edge technologies such as supersonic spy planes and reconnaissance satellite programs.

During these early years the amount of data was the limiting factor, not the ability to exploit the data. The advent of multispectral digital remote sensing technologies in the 1970s ushered in a new era in which, for the first time, information could be automatically extracted from remotely sensed data by using the digital values contained within the pixels. Throughout the 1980s and 1990s new sensors and new approaches to automating information extraction from those sensors emerged, paving the way for national-scale land cover datasets, the automated detection of moving targets and change detection tip-off analysis.

The last decade has ushered in a second revolution in digital remote sensing. Long gone are the days in which individual governments controlled geospatial systems and data. Now there's a complex mix of data stewards, ranging from online communities to private industry. New initiatives, such as the crowd-sourced OpenStreetMap project, have resulted in the development of geospatial datasets on a global scale.

Extraction Limitations

The ability to collect the data is no longer the issue; the challenge is dealing with the data deluge. In this respect, the tools used to automatically extract information from geospatial data, particularly remotely sensed data, have failed to keep pace. Crowd-sourcing (e.g. OpenStreetMap) is a partial solution, but it can't solve all the problems, particularly those that involve secrecy or security.

The limitations of automated feature extraction tools can be traced to two main issues. The first is that much of today's digital image processing technology, which was developed for moderate-resolution multispectral sensors, focuses on the values of individual pixels, ignoring the geometric properties and spatial patterns that exist in the data. Although pixel-based approaches perform reasonably well when applied to moderate-resolution data, they fall apart when applied to higher spatial resolution data in which the spatial information trumps the spectral information.

The second problem is that solutions tend to be data specific. Extracting buildings from LiDAR data is an excellent example of this. Dozens of building-extraction algorithms have been published in the past three years, and all of them perform relatively well. However, they fail to take into account contextual information, such as the presence of driveways, roads and parking lots, which a human analyst would use to identify a building. This limitation results in missed buildings and false positives.

Today, GEOINT remote sensing analysts have access to a virtual treasure trove of data. The greatest insights won't come from exploiting these datasets separately, but through true data fusion.

Meeting the Challenge

About a year ago I was asked to redesign Penn State's upper-level GEOINT remote sensing course, GEOG 883. This provided me with a unique opportunity to think critically about the tools, techniques and foundational knowledge a modern GEOINT remote sensing analyst should have. The challenge was that I'd only have 10 weeks to get the information across, and the course would be entirely online.

Narrowing down the subject matter was a daunting task. Thermal, hyperspectral, multispectral, UAS, LiDAR, change detection, spectral unmixing: the list of potential topics was long. A shotgun approach, in which students were exposed to small bits of each topic, would have them finishing the course without learning anything of substance. I decided the course's key learning objective was for students to be able to build remote sensing workflows with which they could automatically extract information from remotely sensed data, and they'd be able to do so using data from a variety of sources.

I decided to include many foundational elements (e.g., the electromagnetic spectrum) that haven't changed since I took my first remote sensing class two decades ago. I also felt it was important that students, many of whom have strong GEOINT backgrounds, have a firm understanding of the Elements of Image Interpretation (EIIs), first outlined by Chuck Olson in 1960. The EIIs describe how humans use a variety of information to recognize features in imagery, such as tone, texture, shadow, pattern, association, shape, size and site.

Most remote sensing courses still teach some variant of the EIIs but then move into pixel-based approaches in which the EIIs no longer play a role. This has led to what I consider the greatest error in remote sensing education of the past two decades: emphasizing statistical approaches to data analysis at the expense of tradecraft. The GEOINT community, particularly those who serve as imagery analysts, have never lost the tradecraft, but the general trend, especially in the literature, has been toward increasingly mathematical, complex approaches to image analysis.

Most of these approaches don't come close, in accuracy or cartographic representation, to what a human analyst can achieve. A massive disconnect exists between the gold standard (the human analyst) and the automated approach.

I decided to depart from the vast majority of similar courses in my approach to teaching automated feature extraction, largely abandoning pixel-based and sensor-specific methods. I wanted students to develop a strong sense of remote sensing tradecraft that would allow them to automate approaches to effectively replicate what a human analyst could derive from complex, multisensor datasets.

Figure 1. A segmentation algorithm was applied to a Landsat scene, acquired following the 2013 Rim Fire near Yosemite National Park, to generate objects.

An Object-Based Approach

The most promising approach, which combines tradecraft with automation, is geographical object-based image analysis (GEOBIA). As the name implies, the scale of analysis in GEOBIA is the object (Figure 1). An object comprises a collection of pixels, so it inherently has more information than a single pixel. A single object contains a broad range of spectral information, such as the mean, min, max and standard deviation of all the image data. Objects are also spatially rich sources of information, as they have geometric properties, such as shape and size. Finally, objects are spatially aware in that they contain information about their neighbor objects (e.g. relative border to) and all the objects in a scene.
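The per-object statistics described above can be sketched in a few lines. This is a minimal, hypothetical illustration using plain Python, not any particular GEOBIA package: given a single-band image and a label map from a segmentation step, it aggregates each object's mean, min, max and standard deviation from its member pixels. The pixel values and labels are invented for the example.

```python
import statistics

# Hypothetical 4x4 single-band image and a matching segmentation label map.
# Labels 1 and 2 represent two objects produced by a prior segmentation step.
pixels = [
    [10, 12, 50, 52],
    [11, 13, 51, 53],
    [10, 12, 50, 54],
    [11, 14, 49, 55],
]
labels = [
    [1, 1, 2, 2],
    [1, 1, 2, 2],
    [1, 1, 2, 2],
    [1, 1, 2, 2],
]

def object_stats(pixels, labels):
    """Aggregate per-object spectral statistics from member pixels."""
    values = {}
    for pixel_row, label_row in zip(pixels, labels):
        for value, label in zip(pixel_row, label_row):
            values.setdefault(label, []).append(value)
    return {
        label: {
            "mean": statistics.mean(v),
            "min": min(v),
            "max": max(v),
            "stdev": statistics.pstdev(v),
        }
        for label, v in values.items()
    }

stats = object_stats(pixels, labels)
```

Each object now carries richer information than any single pixel in it, which is exactly what downstream classification rules exploit.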

Objects have another advantage in that they serve as a virtual data model that breaks down barriers between data types. A single object knows all the imagery pixels, LiDAR points and attributes of the vector datasets it intersects. Thus, an object-based approach offers a distinct advantage over traditional geospatial data analysis techniques in that separate raster, vector and point cloud algorithms aren't required, eliminating the need for data type conversion and allowing users to apply common feature-extraction algorithms. The advantage increases in cases involving multitemporal data, as issues of data completeness and spatial disagreement arise. GEOBIA provides a framework for overcoming these problems, much like a human analyst would.
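One way to picture the fused data model is as a single record per object that spans raster, point-cloud and vector sources. The sketch below is a hypothetical data structure of my own construction (the field names and values are invented, and this is not any vendor's actual API); the point is that one rule can query all three sources at once.

```python
from dataclasses import dataclass, field

@dataclass
class ImageObject:
    """Hypothetical fused object: one record draws on imagery (raster),
    LiDAR (point cloud) and parcel polygons (vector) at the same time."""
    object_id: int
    mean_ndvi: float       # derived from multispectral imagery pixels
    mean_height_m: float   # from LiDAR points falling inside the object
    zoning: str            # attribute of the parcel polygon it intersects
    neighbors: list = field(default_factory=list)  # ids of adjacent objects

obj = ImageObject(7, mean_ndvi=0.12, mean_height_m=6.5, zoning="commercial")

# A single classification rule can now mix all three data types
# without any raster/vector/point-cloud conversion:
is_building = obj.mean_ndvi < 0.3 and obj.mean_height_m > 2.0
```

The 0.3 NDVI and 2-meter thresholds are illustrative, not calibrated values.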

Figure 2. A rule-based expert system can be used to identify forest loss from the imagery shown in Figure 1.

Real-World Applications

After working through modules on the development of remote sensing workflows, data preparation and manual image interpretation, the students move on to automated feature extraction using GEOBIA techniques. They develop automated approaches using Trimble's eCognition software, which, as the first commercial GEOBIA package, established GEOBIA as a remote sensing discipline.

Rather than use more traditional unsupervised clustering techniques, or sample-based supervised techniques, I have the students develop a rule-based expert system (Figure 2). Within the system, they combine segmentation, classification and morphological algorithms to extract land cover from a combination of imagery, LiDAR and vector datasets. This forces them to translate their understanding of the data into an automated workflow, applying EIIs in the process.
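The flavor of such a rule-based expert system can be conveyed with a toy rule set. This is a simplified sketch, not the students' actual eCognition rule sets: each segmented object is classified from two of its attributes, a spectral one (mean NDVI) and a structural one (mean normalized LiDAR height), and all thresholds are illustrative assumptions.

```python
def classify_object(ndvi, height_m):
    """Toy expert-system rule set for land cover.

    ndvi: mean NDVI of the object (from imagery, spectral information).
    height_m: mean normalized height of the object (from LiDAR).
    Thresholds (0.3 NDVI, 2.0 m) are illustrative, not calibrated.
    """
    vegetated = ndvi >= 0.3
    tall = height_m >= 2.0
    if vegetated:
        return "tree" if tall else "grass"
    return "building" if tall else "pavement"

# Applying the rules to a few hypothetical objects:
scene = [(0.6, 10.0), (0.5, 0.2), (0.1, 8.0), (0.05, 0.0)]
classes = [classify_object(ndvi, h) for ndvi, h in scene]
```

Unlike a sample-based supervised classifier, every decision here is an explicit, inspectable rule, which is what lets students encode their interpretation tradecraft directly.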

Figure 3. A GEOBIA approach was used to separate water features. Rivers (purple) are separated from ponds (cyan) using a combination of spectral and spatial information.

An expert system approach allows students to develop a process that mimics a human analyst, using iterative processing to build the amount of information available in each object during the course of the workflow. Figure 3 shows an example of how one student, in his first week of developing GEOBIA workflows, was able to separate rivers from ponds. Rivers and ponds have overlapping spectral properties with each other and with shadows. It's only when an iterative processing approach is used in which objects are created, assigned to a class and merged that the spatial information can be used to distinguish features among the three.

One of the greatest and most rewarding surprises of teaching GEOBIA to GEOINT students is seeing how quickly they leverage the technology to devise solutions to complex remote sensing problems. Their final projects, completed with only a few weeks of experience using the technology, often surpass work published in peer-reviewed journals.

Figure 4 shows an example of a project in which a student developed an automated approach for runway feature extraction, combining imagery and LiDAR. Mapping runways is a highly cognitive task, requiring the software to recognize macro geometric properties, such as overall shape, along with micro indicators, such as line markings. The task also requires the software to ignore features that aren't part of the runway, such as an airplane preparing for take-off.

Figure 5 shows the output of a workflow a student developed to identify coastal change in New York following Hurricane Sandy, using multitemporal LiDAR and imagery. In this case, the challenge was dealing with datasets acquired with different specifications at different times. Each dataset contained a piece of the puzzle, but a comprehensive quantification of coastal change was only possible when the datasets were integrated.

Figure 4. A GEOBIA approach was used to automatically extract runways from high-resolution imagery.

A Practical Solution

It's unrealistic to desire a Staples easy button for extracting GEOINT information. For the foreseeable future we'll need humans in the loop, and these humans will need to be highly trained GEOINT professionals.

New approaches, such as GEOBIA, allow us to essentially supercharge a GEOINT analyst, enabling a single person to develop scalable routines that either substantially reduce the effort of, or almost entirely replace, what would previously take a large team of analysts to accomplish. Technology won't replace human analysts, but combining a GEOBIA approach with a highly skilled analyst offers a practical way to deal with today's remote sensing data deluge.

Figure 5. A GEOBIA approach was used to automatically identify coastal change.
