Earth Imaging Journal: Remote Sensing, Satellite Images, Satellite Imagery

June 29, 2016
Visual Cloud Computing Methods Could Help First Responders in Disaster Scenarios

COLUMBIA, Mo. – In natural or man-made disasters, the ability to process massive amounts of visual electronic data quickly and efficiently could mean the difference between life and death for survivors. Visual data from numerous security cameras, personal mobile devices and aerial video can be critical for first responders and law enforcement, helping them know where to send emergency personnel and resources, track suspects in man-made disasters, or detect hazardous materials. Recently, a group of computer science researchers from the University of Missouri developed a visual cloud computing architecture that streamlines the process.

“In disaster scenarios, the amount of visual data generated can create a bottleneck in the network,” said Prasad Calyam, assistant professor of computer science in the MU College of Engineering. “This abundance of visual data, especially high-resolution video streams, is difficult to process even under normal circumstances. In a disaster situation, the computing and networking resources needed to process it may be scarce or even unavailable. We are working to develop the most efficient way to process data and study how to quickly present visual information to first responders and law enforcement.”

The research team, including Kannappan Palaniappan and Ye Duan, associate professors in the Department of Computer Science, developed a framework for disaster incident data computation that links the system to mobile devices in a mobile cloud. Algorithms designed by the team help determine what information needs to be processed by the cloud and what information can be processed on local devices, such as laptops and smartphones. This spreads the processing over multiple devices and helps responders receive the information faster.
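The core decision described above, whether a given piece of visual data is processed in the cloud or on a nearby device, can be illustrated with a toy placement rule. This is a hypothetical sketch, not the team's published algorithm: the function name, the throughput constants and the cost model are all invented for illustration, and the real framework weighs many more factors (battery, latency deadlines, network topology).

```python
# Toy sketch of edge-vs-cloud task placement. All names and constants are
# hypothetical; the MU framework's actual policy is more sophisticated.

def place_task(frame_bytes: int, edge_cpu_share: float, uplink_bps: float) -> str:
    """Decide where one video-frame processing task should run.

    frame_bytes    -- size of the visual data to be processed
    edge_cpu_share -- fraction of a local device's CPU that is free (0..1)
    uplink_bps     -- currently available upload bandwidth to the cloud
    """
    # Time to ship the frame to the cloud over the (possibly congested) uplink.
    upload_cost_s = frame_bytes * 8 / uplink_bps
    # Rough local processing time, assuming an illustrative 50 MB/s of
    # throughput per fully free CPU.
    edge_cost_s = frame_bytes / (edge_cpu_share * 5e7)
    # Keep the work local when crunching it on the edge device is faster
    # than merely transmitting it -- the situation Calyam describes when
    # the network is the bottleneck.
    return "edge" if edge_cost_s < upload_cost_s else "cloud"
```

With a congested 1 Mbit/s uplink, even a half-busy laptop keeps a 2 MB frame local (`place_task(2_000_000, 0.5, 1e6)` returns `"edge"`), while a fast link with an overloaded device pushes the work to the cloud.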

“Often, we see many of the same images from overlapping cameras,” Palaniappan said. “Responders generally do not need to see two separate pictures but rather the distinctive parts. That mosaic stitching that we helped define happens in the periphery of the network to limit the amount of data that needs to be sent to the cloud. This is a natural way of compressing visual data without losing information. Clever algorithms help determine what types of visual processing to perform in the edge or fog of the network, and what data and computation should be done in the core cloud.”
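The deduplication Palaniappan describes can be sketched in miniature. Real mosaic stitching registers images with feature matching and blending; the toy below instead treats each camera's view as a 1-D scanline and merges two views by finding their exact overlap, which is enough to show how stitching at the network edge shrinks what must be sent to the cloud. The function name and the exact-match approach are illustrative assumptions, not the paper's method.

```python
# Toy model of edge-side mosaic stitching: merge two overlapping camera
# "scanlines" so only the distinctive parts travel onward. Real systems
# use feature-based image registration, not exact sequence matching.

def stitch_overlap(left: list, right: list) -> tuple:
    """Merge two views that overlap at left's tail / right's head.

    Returns (mosaic, duplicates_removed). With no overlap the views are
    simply concatenated and duplicates_removed is 0.
    """
    best = 0
    # Try the largest possible overlap first, shrinking until a match.
    for k in range(min(len(left), len(right)), 0, -1):
        if left[-k:] == right[:k]:
            best = k
            break
    # Drop the duplicated prefix of the right view -- compression without
    # information loss, as the quote puts it.
    return left + right[best:], best
```

For example, two cameras whose views share the samples `[3, 4]` yield a six-sample mosaic instead of eight samples: `stitch_overlap([1, 2, 3, 4], [3, 4, 5, 6])` returns `([1, 2, 3, 4, 5, 6], 2)`.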

“Incident-supporting visual cloud computing utilizing software-defined networking” was recently published in the journal IEEE Transactions on Circuits and Systems for Video Technology in a special issue on cloud computing for mobile devices. Guna Seetharaman of the U.S. Naval Research Laboratory also contributed to the study. Funding for the project came from a combination of ongoing grants from the National Science Foundation, the Air Force Research Laboratory and the U.S. National Academies Jefferson Science Fellowship. The content is solely the responsibility of the authors and does not necessarily represent the official views of the funding agencies.
