For its annual poll of the Editorial Advisory Board and industry experts, Earth Imaging Journal typically asks everyone the same overarching question. That works when one dominant theme or trend is obvious, but Earth imaging is seeing a remarkable period of innovation and disruption, including the rise of unmanned aircraft systems, smallsats, phenomenal growth in the number of commercial imagery providers, cloud computing and Big Data analytics, and more.
To learn more about these varied and specific themes of disruption, Earth Imaging Journal decided to ask our available “specialists on the front lines” questions tailored to their areas of expertise. We also asked them to discuss the current climate as well as where things will be in five years’ time.
Q: With the proliferation of UASs and smallsats, how will the “traditional” satellite remote-sensing industry evolve in the next five years?
Belton: The proliferation of Unmanned Aerial Systems (UASs) and smallsat missions highlights an exciting evolution in remote-sensing technology. These systems serve a growing user base that’s increasingly reliant on timely, fresh, accurate information to monitor dynamic environmental processes and human activity.
Emerging UAS and smallsat capabilities are complementary to existing remote-sensing systems. UAS and smallsats provide more-frequent revisit and more-persistent monitoring over areas and targets of interest, enabling powerful new information services and increasing the operational relevance and value of existing applications. These new capabilities will unlock an entirely new user community previously not served by traditional systems, resulting in an overall increase in the value-added segment of the remote-sensing market.
As new systems go live and become operational, the market for the established satellite remote-sensing industry will respond and evolve in several different ways.
First, as the supply of new sources of remote-sensing data increases, data providers will face increased pricing pressure. However, increased data availability will allow value-added service providers, such as MDA Geospatial Services, to see substantial opportunities for growth. Market dynamics will reward those in the industry who adapt their business models to succeed in the new environment.
Second, each element of the remote-sensing industry will have to focus on their specific area of differentiation and specialization to establish and maintain value to customers. Traditional operators will thrive in markets that require premium or on-demand services not addressed by new industry entrants. Investment and improvements in value-added services, superior image resolution and quality, flexible on-demand image tasking, and very rapid product delivery, will drive continued success for the traditional remote-sensing industry.
The third, and perhaps most important element, is the tremendous market opportunity for the industry as a whole that’s presented by the range of new capabilities of UAS and smallsat missions. As UAS and smallsat constellations unlock new nontraditional opportunities, the solutions offered by existing remote-sensing companies will undoubtedly be included as a key foundational component of the solutions provided to new customers. MDA Geospatial Services will be at the forefront of this transformation—building and delivering value-added services that incorporate imagery from the full suite of available traditional and nontraditional sensors.
Matt Bethel, director of technology, Merrick & Company
Q: What are some of the most-used and useful sensors for data collection? What are some upcoming sensors that will have a larger impact in 2020?
Bethel: Currently, the following are the most widely used sensors by data collectors:
Other useful (but not as commonly used) sensors are hyperspectral, multispectral, thermal and ultraviolet corona cameras.
From the traditional data collector’s perspective, camera pixel arrays will continue to grow, improving collection efficiency. In addition, flash LiDAR systems will become more mainstream due to their high data-collection rate, scalability, processing parallelization and rigid calibration.
At the sensor, real-time processing and data dissemination will take off, allowing data-fusion products such as color encoding of LiDAR point clouds and synthetically derived 3D images to be collected, processed and streamed in real time to ground bases and users for military, disaster management and other applications.
Miniaturization of sensor technology—more technically referred to as the decrease in SWaP (Size, Weight and Power)—is fully underway and will continue. Cell phones have ever-increasing sensor specs, including cameras, GPS and IMUs. Compact mapping-grade cameras, hyperspectral sensors, thermal cameras and even LiDARs now are commonplace, and they’re used on miniaturized ground and airborne vehicle systems—many unmanned. The driverless car industry also is demanding rapid technical inventiveness as well as decreased SWaP and price.
The consumer world will have new offerings that rely on sensor fusion, including integrated 3D depth sensors. This will allow for the increased growth of augmented reality, which is the ultimate integration of data collection and dissemination.
Jackson Cothren, director, Center for Advanced Spatial Technologies (CAST), University of Arkansas
Q: Describe the current forms of data acquisition “automation.” How might they be improved in five years?
Cothren: Our ability to produce useful “decision-making aids” (or in our case intelligent and dynamic maps) from a single, dynamic platform depends largely on the ability to co-register multiple overlapping images to form a single 3D model of the ground (including man-made structures and vegetation). This currently is accomplished through a variety of affine-invariant image features and descriptors (the most successful, and arguably the first of its kind, is David Lowe’s Scale-Invariant Feature Transform, or SIFT) that robustly match image to image.
These techniques have become so powerful that since approximately 2005, it hasn’t been necessary to specify which images overlap: algorithms can reliably “find” overlapping images without any a priori orientation information. Photogrammetrists who have long worked in automatic feature detection and matching especially appreciate how far we’ve come in the last decade.
As long as images are of comparable scale, time of day or year, and are acquired with the same instrument, the “structure-from-motion” approach (matching and orienting images and then creating a dense point cloud) often is embarrassingly easy. Although it’s true that manual editing of the point cloud still is required, our ability to automatically generate 3D surface models and high-quality orthophotos from images acquired from relatively simple platforms and instruments is remarkable and improving.
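The matching step Cothren describes can be illustrated with a small sketch: given SIFT-style descriptors extracted from two overlapping images, each descriptor in one image is matched to its nearest neighbor in the other, and Lowe’s ratio test discards ambiguous matches. This is a minimal illustration using synthetic descriptors, not any specific production pipeline; the function name and parameters are hypothetical.

```python
import numpy as np

def ratio_test_match(desc_a, desc_b, ratio=0.75):
    """Match descriptors from image A to image B using Lowe's ratio test.

    For each descriptor in A, find its two nearest neighbors in B (Euclidean
    distance) and keep the match only if the closest neighbor is clearly
    better than the second closest.
    """
    matches = []
    for i, d in enumerate(desc_a):
        dists = np.linalg.norm(desc_b - d, axis=1)   # distance to every B descriptor
        nearest = np.argsort(dists)[:2]              # indices of the two closest
        if dists[nearest[0]] < ratio * dists[nearest[1]]:
            matches.append((i, int(nearest[0])))     # (index in A, index in B)
    return matches

# Synthetic 8-D "descriptors": B contains noisy copies of A's first three rows,
# plus unrelated descriptors that should not survive the ratio test.
rng = np.random.default_rng(42)
desc_a = rng.normal(size=(5, 8))
desc_b = np.vstack([desc_a[:3] + 0.01 * rng.normal(size=(3, 8)),
                    rng.normal(size=(4, 8))])
print(ratio_test_match(desc_a, desc_b))
```

In a real structure-from-motion pipeline the surviving matches feed a geometric verification step (e.g., RANSAC on the fundamental matrix) before bundle adjustment and dense point-cloud generation.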
However, decision makers most directly benefit from multiple temporal, spatial and spectral views of an area of interest. For example, a search-and-rescue effort needs an up-to-date geodetically referenced optical base map of a natural or human-caused disaster area (likely acquired by a UAS flown after the disaster and more-accurately georeferenced using aerial or satellite imagery acquired before) as well as a thermal or multispectral image layer to search for survivors, gas leaks, cavities in building rubble, etc. These images also would likely be acquired by UASs, but from a different platform and at different times.
If timely and accurate information is to be gleaned from these “multi-modal” sensors, the images they acquire must be quickly (i.e., autonomously) georeferenced against new and existing optical imagery of the disaster area. This is where feature matching currently fails us; or is at least not yet robust enough to trust.
Matching a thermal image against a differently scaled optical image, or even matching optical images from a smallsat and a low-flying UAS, still is a complex and largely unsolved problem. We’re getting closer, but there’s much work left to be done by photogrammetrists, image scientists and computer-vision specialists. Many research centers, including CAST and the Arkansas High-Performance Computing Center at the University of Arkansas, are working in this area where traditional photogrammetry and computer vision meet high-performance computing.
There are other areas where further work is required. With respect to UASs, realizing truly autonomous navigation is critical to more-effective and safe use. This will require improvements in cost-effective, real-time kinematic GPS capability on smaller systems as well as computer-vision-based avoidance systems. Both are currently being researched and are under development at various research centers worldwide.
Q: Can you describe how Wide-Area Motion Imagery (WAMI) works, what’s special about this technology, and where it might be in five years?
Delay: WAMI sensors provide persistent high-resolution surveillance of large areas. These sensors typically utilize multiple cameras, each producing individual images that are stabilized, georegistered and image-processed before being combined into a single image.
This class of sensor is unique in its ability to image entire cities or very long corridors (miles) from aerial platforms (manned or unmanned) or from towers with fields of view up to 180 degrees. Aerial WAMI sensors provide collection coverage of up to 64 square kilometers at 0.5-meter resolution, and fixed-tower WAMI sensors provide coverage areas up to 20 square kilometers, also at 0.5-meter resolution.
The persistent view provided by WAMI geospatial/temporal datasets makes it possible to observe many phenomena during a mission that would not be visible using alternate sensors—think of Google Earth, only updated in near-real time over a city-sized area. The challenge with WAMI sensors is the massive volumes of data they produce—tens of terabytes every day. This requires ultra-high I/O read-write performance for storage systems just to record and store images.
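The data volumes follow directly from the coverage and resolution figures above. A back-of-the-envelope calculation shows why storage I/O becomes the bottleneck; note that the bytes-per-pixel and frame-rate values are illustrative assumptions, not specifications of any particular sensor.

```python
# Back-of-the-envelope WAMI data-volume estimate.
# Coverage and GSD come from the article; bytes/pixel and frame rate
# are assumed values for illustration only.
coverage_km2 = 64        # aerial WAMI coverage (from the article)
gsd_m = 0.5              # ground sample distance (from the article)
bytes_per_pixel = 2      # assumed: 12-16 bit imagery
frames_per_second = 2    # assumed frame rate

pixels = coverage_km2 * 1e6 / (gsd_m ** 2)   # pixels per frame
bytes_per_day = pixels * bytes_per_pixel * frames_per_second * 86400

print(f"{pixels / 1e6:.0f} Mpx per frame")
print(f"{bytes_per_day / 1e12:.0f} TB per day (uncompressed)")
```

Even with onboard compression, the uncompressed figure of roughly 88 TB per day under these assumptions lands squarely in the tens-of-terabytes range the sensors actually produce.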
Because data volumes often exceed downlink capacities, WAMI sensors typically provide narrower region of interest (ROI) views—similar to output from traditional full-motion video (FMV) systems—that focus on specific areas of interest. The advantage WAMI sensors have is the ability to stream up to 10 different ROI views, enabling users to monitor or track multiple targets simultaneously. The full dataset collected during a mission typically is only available after it has concluded, making real-time monitoring and exploitation challenging unless using automated tools such as watch-boxes over targets of interest.
The forensic analysis that’s possible with WAMI is one of its major advantages. The high-resolution footage and wide-area coverage enable users and automated tools to track medium- and large-sized objects moving across a scene to establish interconnected patterns of life, including social interactions, destinations and origins of travel.
Analysts can go back in time to watch the location of a crime and track the movements of actors, both before and after a crime occurred. If multiple targets disperse in different directions, WAMI systems can track all targets, unlike traditional video sensors that have a narrower field of view. Such forensic analysis extracts and processes data, which can be further correlated with other sources of information.
With advances in solid-state devices, high-resolution charge-coupled devices, processors and storage, commercially available gigapixel WAMI will become more prevalent during the next 5-10 years. As more WAMI sensors are deployed and become Internet accessible, many opportunities will arise to correlate additional intelligence sources.
There’s a vast amount of commercially available GEOINT data today, including imagery, photos, facility systems, location-aware devices, etc. Linking such information with WAMI datasets will provide persistent intelligence information and offer the opportunity to unlock intelligence insights currently unachievable. In the future, as data are increasingly collected and correlated, creating a “wide-area persistent data cube,” the application of data and Big Data analytics will help solve crimes, capture bad actors and improve national security.
Q: Are we making progress in our ability to derive value from Earth-observation data, and where might advancements take us in five years?
Dick: We are making great progress in our ability to derive value from Earth-observation data. When people can access the information they need, more value can be derived from such data. With our own Data Management Solutions, such as DataDoors and GetGeo, and other technologies provided by many of our online and software partners, satellite imagery and other types of geospatial Big Data are easier to access and utilize than ever before.
Cloud and mobile technologies are allowing users to access their data 24/7, no matter where they’re located, while APIs and other technologies allow Earth-observation data providers to team with analytic companies to provide exciting new business-intelligence services for traditional and nontraditional geoinformation users.
During the next five years, we will see more geospatial data and services moving to the cloud, and data increasingly will be sold on a subscription basis as opposed to ad hoc. There will be more emphasis on geospatial analytics utilizing multiple types of data, crowdsourcing techniques and advanced technologies such as deep learning for detailed change-detection applications. It’s an exciting time in the industry, because you no longer have to be a remote-sensing expert to benefit from the power of geoinformation.
Q: What are some key remote-sensing disruptors, and what may be disrupting the industry in five years?
The UAS and smallsat sensor market is growing, and many companies and entrepreneurs are entering the field with unique, “smaller and cheaper hardware,” hoping to provide “software or products on-demand or as a service at a low, low price.” To date, they’re attracting investment capital, which adds to the technology frenzy as newcomers seem to enter the market monthly.
This is good for the remote-sensing industry initially, but many of these early entrants already are suffering, facing unhappy investors as they miss quarterly earnings. They’re realizing that the world isn’t beating a path to their door for their products or services. In five years, the winners to emerge will be those who targeted a client market and delivered a product that solves that client’s problem, and maybe, just maybe, those winners will be the current well-known global providers who have industry-proven experience. They will not emerge exactly as they are today, but they will still be standing in one form or another.
For example, on the client side, GMI works with a large number of agriculturally focused companies as well as the U.S. government, and they’re confident that not every UAS and smallsat entrant is going to be successful, due to a lack of specific knowledge of “precision-agriculture needs.” So five years from now, there will have been a lot of company entrants and exits in these industries by firms that should have done a more-thorough job of understanding potential target clients’ needs.
The technologists and inventors who develop software “algorithms” for imagery analytics, and whose algorithm-development cycles are well funded (to meet ever-changing imagery inputs), will have a solid market position. The next five years also will see many of these software organizations and providers dissolve.
The investment-capital industry as well as specific industries such as insurance and commodities investors are combing through universities and software developers, searching for software analytics and algorithms providing automated, one-click, easy imagery solutions that also are highly accurate and valid, and there’s the sticking point. A great algorithm for remote-sensing data analytics that’s well vetted and validated for one type of sensor and/or camera needs more development for each new type of imagery applied. The organizations and institutions whose software can quickly and accurately manage multiple types of sensor imagery and camera inputs on the fly will be the big winners.
Cybersecurity and Hyperscale Architectures
We can’t leave this topic without discussing the biggest technology disruptors and winners in remote sensing, which we believe will be focused on cyber-secure imagery, data analytics and data-management techniques as well as developing hyperscale architectures for scalable analytics and real-time data streaming. Such organizations may not be at the forefront of conferences and industry articles, but they will come out as big winners in the next five years.
Such organizations, which are turning their focus to remote-sensing imagery and analytics, typically aim to improve software development and user productivity and to minimize the obstacles new users typically encounter with such complex systems. The goal is to provide everything needed to build enterprise-class, real-time analytic applications for Big Data while also allowing for the introduction of new sensor technologies and datasets that impact remote sensing.
Walter Scott, founder, executive vice president and chief technical officer, DigitalGlobe
Q: Are data providers also becoming data repositories and cloud-based hosts, more than previously? What’s the five-year forecast for providers of “Big Data”? Will one-way data delivery be a thing of the past?
Scott: The commercial Earth-observation (EO) industry has made tremendous progress during the last 20 years, going from an idea to a dependable dataset used by governments and companies worldwide. I remember the early days, brainstorming business models in our original DigitalGlobe office and coming up with per-square-kilometer pricing for our imagery. This was before we even had satellites on orbit!
Surprisingly, this model stuck, and, to this day, nearly the entire industry has adopted this pricing scheme. This makes a lot of sense for the initial commercial EO value proposition of “Show Me There,” where customers want to see a specific spot on Earth to understand everything there is to know about it.
One hundred petabytes later, with orbiting satellites nearly always on, collecting data at a furious pace, there’s a huge opportunity to solve the “Show Me Where” problem, where users need to look through vast areas or across large time spans to find, count and measure features on the ground.
A key benefit of remote sensing is its ability to fill in the “white space” in traditional maps: capturing where there has been change as well as extracting detail that has heretofore been impractical (e.g., vehicles, outdoor crowds, trees, shipping containers, oil-storage tanks, outbuildings, etc.) and to do so objectively and at far lower cost than traditional methods.
Three innovations need to align to enable this sophisticated Big Data analysis: 1) computer vision, 2) cloud computing and 3) new business models. At DigitalGlobe, we collect more than 1 billion square kilometers of imagery each year, or nearly seven times the land surface area of the globe, so sorting through each pixel manually isn’t feasible. Recent advancements in computer vision and deep learning are making object detection at scale possible with increasing accuracy and reliability.
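Detection over scenes this large typically proceeds by tiling each scene into fixed-size chips that a detector can process in parallel. A minimal sketch follows; the chip size and stride are arbitrary illustrative values, and the function names are hypothetical rather than any DigitalGlobe API.

```python
import numpy as np

def chip_starts(length, chip, stride):
    """Chip start offsets along one axis, with the final chip flush to the edge."""
    starts = list(range(0, max(length - chip, 0) + 1, stride))
    if starts[-1] != max(length - chip, 0):   # ensure the edge is covered
        starts.append(max(length - chip, 0))
    return starts

def tile_scene(scene, chip=512, stride=448):
    """Split a large image into overlapping chips a detector can consume.

    The overlap (chip - stride pixels) keeps objects that straddle one chip
    boundary fully visible in a neighboring chip.
    """
    h, w = scene.shape[:2]
    return [((y, x), scene[y:y + chip, x:x + chip])
            for y in chip_starts(h, chip, stride)
            for x in chip_starts(w, chip, stride)]

scene = np.zeros((1024, 1024), dtype=np.uint8)   # stand-in for a large scene
chips = tile_scene(scene)
print(len(chips), chips[0][1].shape)             # 9 chips of 512x512
```

Each chip can then be scored independently, which is exactly the kind of workload that parallelizes well when the compute is moved to where the data lives.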
As we continue to improve the spatial and spectral resolution of our sensors, the datasets they produce become larger, where today our satellites alone are producing more than 10 petabytes of imagery products each year! To enable computation at scale against massive datasets, you must move the compute to the data rather than the data to the compute; the availability of public cloud infrastructure enables this.
Finally, for these powerful “Show Me Where” applications in which a very large quantity of imagery needs to be processed (and often combined with other datasets), per-square-kilometer pricing often is cost prohibitive. By opening up the data for processing, we’re able to rent access to it rather than sell it for ownership.
This is a huge opportunity to enable an entirely new set of analytics never before available. To do so, we’re making our remotely sensed data more accessible without requiring users to have huge storage, bandwidth and compute infrastructure. This is a trend that’s going to continue in the industry, where simple APIs are used to attract a larger user base than the traditional GIS or remote-sensing user. Our remote-sensing industry is smart, but we can’t do it alone if we’re to realize the true potential of our data.