Researchers have developed a visual cloud computing system to aid response teams in the event of a natural or man-made disaster. The model aims to process data quickly and efficiently to support first responders and law enforcement – providing critical information needed to coordinate emergency personnel, track suspects and identify hazards.
Current challenges in processing visual data in emergencies include duplication of information, poor networking resources in disaster zones and bottlenecks created by streams of high-resolution video footage. Now, a team of computer scientists at the University of Missouri has built a visual cloud method that streamlines this process.
The team published its findings in a paper titled Incident-Supporting Visual Cloud Computing Utilizing Software-Defined Networking, in which it argues that balancing fog computing at the network edge with core cloud computing can reduce latency and congestion and increase throughput when managing visual analytics in a disaster.
Prasad Calyam, assistant professor at the MU College of Engineering, noted: ‘In disaster scenarios, the amount of visual data generated can create a bottleneck in the network. We are working to develop the most efficient way to process data and study how to quickly present visual information to first responders and law enforcement.’
The new framework links mobile devices across a mobile cloud using algorithms that determine which data should be sent for processing in the cloud and which can be handled on local devices, including smartphones and laptops.
The algorithms, explained Kannappan Palaniappan, associate professor in the Department of Computer Science, decide what types of ‘visual processing to perform in the edge or fog network, and what data and computation should be done in the core cloud using resources from multiple service providers in a seamless way.’ He added that this software-defined sorting and distribution of data means that the right information can reach responders faster.
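The paper does not publish its placement logic in this article, but the edge-versus-cloud decision it describes can be illustrated with a toy heuristic. Everything below is a hypothetical sketch: the `VisualTask` fields, the five-second transfer threshold and the `place_task` rules are illustrative assumptions, not the researchers' actual algorithm.

```python
from dataclasses import dataclass

@dataclass
class VisualTask:
    """A unit of visual work captured at the incident scene."""
    name: str
    data_mb: float        # size of the raw imagery in megabytes
    cpu_cost: float       # relative compute demand (1.0 = what an edge node can handle)

def place_task(task: VisualTask, uplink_mbps: float,
               edge_capacity: float) -> str:
    """Decide where a task runs: 'edge' (fog), 'cloud' (core),
    or 'edge-then-cloud' (reduce locally, ship the residue).

    Heuristic: keep jobs near the cameras when the disaster-zone
    uplink is the bottleneck and the edge can afford the compute;
    otherwise send them to the core cloud.
    """
    transfer_s = task.data_mb * 8 / uplink_mbps  # seconds to upload
    if task.cpu_cost <= edge_capacity and transfer_s > 5:
        return "edge"            # cheap enough locally, costly to ship
    if transfer_s <= 5:
        return "cloud"           # the uplink can absorb it
    return "edge-then-cloud"     # too heavy for the edge, too big to ship raw

# A small thumbnail job on a weak 2 Mbps uplink stays at the edge:
print(place_task(VisualTask("thumbnail", 50, 0.5),
                 uplink_mbps=2, edge_capacity=1.0))  # → edge
```

In the software-defined setup the article describes, a controller would apply a policy like this per data stream, steering traffic accordingly.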
‘What you’re seeing is often from overlapping cameras,’ he said. ‘I don’t need to send two separate pictures; I send the distinctive parts. That mosaic stitching happens in the periphery or edge of the network to limit the amount of data that needs to be sent to the cloud.’
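The deduplication Palaniappan describes, sending only the distinctive parts of overlapping views, can be sketched in miniature. In this toy version, strings stand in for pixel columns and the overlap is found by exact matching; a real stitcher would use feature detection and homography estimation, so treat this purely as an illustration of the bandwidth saving.

```python
def stitch(left, right):
    """Merge two overlapping camera views into one mosaic,
    keeping the shared region only once.  Each view is a list
    of pixel columns; the overlap is the longest suffix of
    `left` that equals a prefix of `right`."""
    best = 0
    for k in range(1, min(len(left), len(right)) + 1):
        if left[-k:] == right[:k]:
            best = k
    return left + right[best:]

# Two views sharing the columns 'c' and 'd':
a = ["a", "b", "c", "d"]
b = ["c", "d", "e", "f"]
mosaic = stitch(a, b)
print(mosaic)                            # ['a', 'b', 'c', 'd', 'e', 'f']
saved = len(a) + len(b) - len(mosaic)    # 2 columns never re-sent
```

Performing this merge at the network edge means only the six-column mosaic, not eight columns from two cameras, travels over the constrained uplink to the cloud.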