MIT researchers are fine-tuning algorithms to consider arm positions and hand positions separately, which drastically cuts down on the computational complexity of their task.
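
To see why treating arm and hand positions separately pays off, consider a back-of-the-envelope sketch. The label sets and counts below are hypothetical stand-ins, not the MIT system's vocabulary; the point is only that a joint classifier must distinguish every arm/hand combination, while two factored classifiers each handle a much smaller set.

```python
# Hypothetical label sets -- illustrative only, not the MIT vocabulary.
ARM_POSES = ["raised", "lowered", "extended", "crossed"]
HAND_POSES = ["open", "fist", "thumb_up", "thumb_down"]

# A single joint classifier must separate every arm/hand combination...
joint_classes = len(ARM_POSES) * len(HAND_POSES)     # 4 * 4 = 16 classes

# ...while factored classifiers handle the two sets independently.
factored_classes = len(ARM_POSES) + len(HAND_POSES)  # 4 + 4 = 8 classes

print(f"joint: {joint_classes} classes, factored: {factored_classes} classes")
```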

The problem of interpreting hand signals has two distinct parts. The first is simply inferring the body pose of the signaler from a digital image: Are the hands up or down, the elbows in or out? The second is determining which specific gesture is depicted in a series of images. The MIT researchers are chiefly concerned with the second problem; they present their solution in the March issue of the journal ACM Transactions on Interactive Intelligent Systems. But to test their approach, they also had to address the first problem, which they did in work presented at last year’s IEEE International Conference on Automatic Face and Gesture Recognition.
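
The two subproblems compose naturally into a pipeline: a per-frame pose estimator feeds a sequence-level gesture classifier. The sketch below illustrates that structure only; the mock frames, pose labels, gesture names, and exact-match template lookup are assumptions standing in for the actual models described in the paper.

```python
from typing import Dict, List, Tuple

Pose = Tuple[str, str]  # (arm_pose, hand_pose) inferred for one video frame

def infer_pose(frame: Dict[str, str]) -> Pose:
    """Problem 1: infer body pose from a single image.

    A real system would run pose estimators on pixels; here we read
    labels off a mock frame so the sketch stays self-contained."""
    return (frame["arm"], frame["hand"])

# Problem 2: decide which gesture a *series* of poses depicts.
# Hypothetical templates: each gesture is a characteristic pose sequence.
TEMPLATES: Dict[Tuple[Pose, ...], str] = {
    (("raised", "open"), ("extended", "open"), ("lowered", "open")): "move_down",
    (("raised", "fist"), ("raised", "fist"), ("raised", "fist")): "stop",
}

def classify_gesture(frames: List[Dict[str, str]]) -> str:
    """Map a clip (sequence of frames) to a named gesture, or 'unknown'."""
    poses = tuple(infer_pose(f) for f in frames)
    return TEMPLATES.get(poses, "unknown")

if __name__ == "__main__":
    clip = [{"arm": "raised", "hand": "open"},
            {"arm": "extended", "hand": "open"},
            {"arm": "lowered", "hand": "open"}]
    print(classify_gesture(clip))  # -> move_down
```

A real sequence classifier would tolerate timing variation and noisy pose estimates (for example, with a hidden Markov model or dynamic time warping) rather than requiring an exact template match, but the division of labor between the two stages is the same.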
