
MIT researchers are fine-tuning algorithms to consider arm positions and hand positions separately, which drastically cuts down on the computational complexity of their task.
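
To see why treating arms and hands separately helps, consider a back-of-the-envelope comparison. The sketch below is a hypothetical illustration of the combinatorics, not the researchers' implementation, and the configuration counts are made-up placeholders: estimating the two jointly means searching over every (arm, hand) combination, while estimating them separately means the candidate sets add rather than multiply.

```python
# Hypothetical illustration of why factoring arm and hand pose reduces complexity.
# The configuration counts below are assumed placeholders, not figures from the MIT work.

NUM_ARM_POSES = 24    # e.g. discretized elbow/shoulder configurations (assumed)
NUM_HAND_POSES = 12   # e.g. discretized hand shapes: open, closed, thumb up, ... (assumed)

# Searching over every (arm, hand) combination jointly:
joint_search_space = NUM_ARM_POSES * NUM_HAND_POSES      # 288 candidates per frame

# Searching over arms and hands separately, then combining the two estimates:
factored_search_space = NUM_ARM_POSES + NUM_HAND_POSES   # 36 candidates per frame

print(f"joint:    {joint_search_space} candidates per frame")
print(f"factored: {factored_search_space} candidates per frame")
```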

The problem of interpreting hand signals has two distinct parts. The first is simply inferring the body pose of the signaler from a digital image: Are the hands up or down, the elbows in or out? The second is determining which specific gesture is depicted in a series of images. The MIT researchers are chiefly concerned with the second problem; they present their solution in the March issue of the journal ACM Transactions on Interactive Intelligent Systems. But to test their approach, they also had to address the first problem, which they did in work presented at last year’s IEEE International Conference on Automatic Face and Gesture Recognition.
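
The two sub-problems map naturally onto a two-stage pipeline: infer a body pose from each frame, then decide which gesture a sequence of those poses depicts. The sketch below is a minimal, generic illustration of that structure only; the function names, the pose-estimator interface, and the naive template-matching step are assumptions made for illustration, not the MIT system's actual models.

```python
from typing import Callable, Dict, Sequence

import numpy as np

# Stage 1 (assumed interface): infer a body-pose feature vector from one image.
# In practice this would be a trained pose estimator; here it is only a type signature.
PoseEstimator = Callable[[np.ndarray], np.ndarray]   # image -> pose feature vector


def classify_gesture(
    frames: Sequence[np.ndarray],
    estimate_pose: PoseEstimator,
    gesture_templates: Dict[str, np.ndarray],
) -> str:
    """Two-stage sketch: per-frame pose inference, then sequence-level matching.

    `gesture_templates` maps a gesture label to a reference sequence of pose
    vectors (shape: [num_frames, pose_dim]). Matching is a naive mean squared
    distance after resampling, standing in for a proper sequence model.
    """
    # Stage 1: reduce each image to a compact pose representation.
    poses = np.stack([estimate_pose(frame) for frame in frames])

    # Stage 2: decide which known gesture the pose sequence depicts.
    def resample(seq: np.ndarray, length: int) -> np.ndarray:
        idx = np.linspace(0, len(seq) - 1, length).round().astype(int)
        return seq[idx]

    best_label, best_dist = "", float("inf")
    for label, template in gesture_templates.items():
        aligned = resample(poses, len(template))
        dist = float(np.mean((aligned - template) ** 2))
        if dist < best_dist:
            best_label, best_dist = label, dist
    return best_label
```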

