MIT researchers are fine-tuning algorithms that consider arm positions and hand positions separately, which sharply reduces the computational complexity of the recognition task.
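
The saving from that separation can be shown with simple arithmetic. Below is a minimal Python sketch using purely hypothetical pose vocabularies (the categories the MIT team actually uses are not given here): a classifier over joint arm-and-hand configurations must tell apart every combination, while separate per-part classifiers only cover each vocabulary on its own.

# Hypothetical pose vocabularies; the real system's categories differ.
ARM_POSES = ["raised", "lowered", "extended-left", "extended-right"]
HAND_POSES = ["open", "closed", "thumb-up", "thumb-down"]

# Joint approach: one classifier distinguishes every arm-hand combination.
joint_classes = len(ARM_POSES) * len(HAND_POSES)      # 4 * 4 = 16

# Factored approach: two small classifiers, one per body part.
factored_classes = len(ARM_POSES) + len(HAND_POSES)   # 4 + 4 = 8

print(f"joint: {joint_classes} classes vs. factored: {factored_classes}")

The gap widens quickly as either vocabulary grows: the joint count scales multiplicatively, while the factored count scales only additively.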

The problem of interpreting hand signals has two distinct parts. The first is simply inferring the body pose of the signaler from a digital image: Are the hands up or down, the elbows in or out? The second is determining which specific gesture is depicted in a series of images. The MIT researchers are chiefly concerned with the second problem; they present their solution in the March issue of the journal ACM Transactions on Interactive Intelligent Systems. But to test their approach, they also had to address the first problem, which they did in work presented at last year’s IEEE International Conference on Automatic Face and Gesture Recognition.
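
To make that two-part structure concrete, here is a self-contained Python sketch in which a per-frame pose estimator (the first problem) feeds a sequence classifier (the second). Everything in it, including the nearest-neighbor template matcher and all function and variable names, is an illustrative assumption rather than the method reported in the paper.

from typing import Callable, List, Sequence, Tuple

Pose = Tuple[float, ...]  # e.g., estimated joint angles for arms and hands

def classify_gesture(
    frames: Sequence[object],
    estimate_pose: Callable[[object], Pose],   # stage 1: image -> body pose
    templates: List[Tuple[str, List[Pose]]],   # labeled reference sequences
) -> str:
    """Stage 2: match the observed pose sequence to the nearest template."""
    observed = [estimate_pose(frame) for frame in frames]

    def distance(a: List[Pose], b: List[Pose]) -> float:
        # Naive frame-by-frame squared distance over the shorter sequence;
        # a production system would align sequences (e.g., dynamic time
        # warping) or model them probabilistically (e.g., an HMM).
        n = min(len(a), len(b))
        return sum(
            sum((x - y) ** 2 for x, y in zip(a[i], b[i])) for i in range(n)
        )

    label, _ = min(
        ((name, distance(observed, seq)) for name, seq in templates),
        key=lambda pair: pair[1],
    )
    return label

# Toy usage: frames here are already pose tuples, so stage 1 is the identity.
templates = [
    ("wave", [(0.0, 1.0), (1.0, 0.0)]),
    ("halt", [(1.0, 1.0), (1.0, 1.0)]),
]
print(classify_gesture([(0.1, 0.9), (0.9, 0.1)], lambda f: f, templates))  # wave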

Read the full story.
