June 14, 2016

Natural language communication with robots

Yonatan Bisk, Deniz Yuret, and Daniel Marcu. 2016. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL HLT 2016), pp. 751--761, San Diego, California. (PDF, Slides)

Abstract

We propose a framework for devising empirically testable algorithms for bridging the communication gap between humans and robots. We instantiate our framework in the context of a problem setting in which humans give instructions to robots using unrestricted natural language commands, with instruction sequences being subservient to building complex goal configurations in a blocks world. We show how one can collect meaningful training data and we propose three neural architectures for interpreting contextually grounded natural language commands. The proposed architectures allow us to correctly understand/ground the blocks that the robot should move when instructed by a human who uses unrestricted language. The architectures have more difficulty in correctly understanding/grounding the spatial relations required to place blocks correctly, especially when the blocks are not easily identifiable.
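
To make the grounding task concrete, here is a minimal sketch of the kind of model involved: an instruction is encoded with a recurrent network and, together with the current world state, used to predict which block to move and where to place it. This is an illustrative PyTorch example, not one of the paper's three architectures; the vocabulary size, block count, and layer dimensions are assumptions.

    # Minimal sketch (not the paper's architecture): ground a command to a
    # source block (classification) and a target position (regression).
    import torch
    import torch.nn as nn

    class CommandGrounder(nn.Module):
        def __init__(self, vocab_size=1000, embed_dim=64, hidden_dim=128,
                     num_blocks=20, world_dim=60):  # world: 20 blocks x (x, y, z)
            super().__init__()
            self.embed = nn.Embedding(vocab_size, embed_dim)
            self.encoder = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
            self.source_head = nn.Linear(hidden_dim + world_dim, num_blocks)
            self.target_head = nn.Linear(hidden_dim + world_dim, 3)

        def forward(self, command_ids, world_state):
            # command_ids: (batch, seq_len) token ids of the instruction
            # world_state: (batch, world_dim) flattened block coordinates
            _, (h, _) = self.encoder(self.embed(command_ids))
            features = torch.cat([h[-1], world_state], dim=1)
            return self.source_head(features), self.target_head(features)

    # Toy usage: a 12-token instruction over a 20-block world.
    model = CommandGrounder()
    source_logits, target_xyz = model(torch.randint(0, 1000, (1, 12)),
                                      torch.randn(1, 60))
    print(source_logits.shape, target_xyz.shape)  # (1, 20), (1, 3)

Framing the task this way splits it into identifying which block to move and deciding where it goes, which mirrors the abstract's observation that grounding blocks is easier for these models than grounding spatial relations.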


Full post...

June 06, 2016

Saman Zia, M.S. 2016

Current position: Software Engineer at Microsoft, Seattle (Email, LinkedIn)
M.S. Thesis: RGB-D Object Recognition using Deep Convolutional Neural Networks. Koç University, Department of Computer Engineering. June 2016. (PDF, Presentation, Code)

Abstract

Recent availability of low-cost RGB-D sensors has led to increased interest in object recognition that combines color and depth modalities. Object recognition from RGB-D images is particularly important in robotic tasks, and the inclusion of depth has been shown to improve performance. How best to combine depth and color information remains an active research question. This thesis addresses the problem by initializing a 2-D Convolutional Neural Network (CNN) for RGB information via transfer learning and training a 3-D CNN for encoding depth information. The resulting feature representations are fused, and performance is reported on the RGB-D object recognition task. The transferred weights come from CNNs trained on the large ImageNet classification challenge dataset and produce meaningful features. The depth information is encoded along with the color information in a 3-D voxel grid, and joint features are learned from scratch using a 3-D CNN. The approach is evaluated on the Washington RGB-D dataset: the RGB category recognition performance exceeds the state-of-the-art, while the RGB-D category recognition performance is on par with it. Motivated by the good features learned by the 3-D CNN, the thesis also addresses the potential of transfer learning from a pre-trained 2-D CNN to a 3-D CNN as a way to include depth information.
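
As an illustration of the two-stream design described above, the sketch below pairs an ImageNet-pretrained 2-D CNN for the RGB image with a small 3-D CNN trained from scratch on a voxelized depth grid, then classifies the concatenated features. This is a hedged reconstruction, not the thesis code: the ResNet-18 backbone, the 32x32x32 voxel resolution, and the layer sizes are assumptions (51 classes matches the Washington RGB-D category set).

    # Minimal sketch (assumed details, not the thesis code): fuse features
    # from a pre-trained 2-D CNN and a from-scratch 3-D CNN for RGB-D
    # object recognition.
    import torch
    import torch.nn as nn
    from torchvision import models

    class RGBDNet(nn.Module):
        def __init__(self, num_classes=51):
            super().__init__()
            # 2-D stream: ImageNet-pretrained backbone as a fixed feature extractor.
            backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
            self.rgb_stream = nn.Sequential(*list(backbone.children())[:-1])
            for p in self.rgb_stream.parameters():
                p.requires_grad = False
            # 3-D stream: learned from scratch on the voxelized depth input.
            self.depth_stream = nn.Sequential(
                nn.Conv3d(1, 16, kernel_size=5, stride=2), nn.ReLU(),
                nn.Conv3d(16, 32, kernel_size=3, stride=2), nn.ReLU(),
                nn.AdaptiveAvgPool3d(1), nn.Flatten())
            self.classifier = nn.Linear(512 + 32, num_classes)

        def forward(self, rgb, voxels):
            f_rgb = self.rgb_stream(rgb).flatten(1)  # (batch, 512)
            f_depth = self.depth_stream(voxels)      # (batch, 32)
            return self.classifier(torch.cat([f_rgb, f_depth], dim=1))

    # Toy usage: one 224x224 RGB image plus its 32x32x32 depth voxel grid.
    model = RGBDNet()
    logits = model(torch.randn(1, 3, 224, 224), torch.randn(1, 1, 32, 32, 32))
    print(logits.shape)  # (1, 51)

Freezing the 2-D stream is one way to realize the transfer learning step; fine-tuning it, or transferring its weights into the 3-D stream as the last sentence of the abstract suggests, are variations on the same fusion scheme.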


Full post...