December 07, 2018

Grounded language learning datasets

Touchdown 201811 (arXiv, github, streetview, Cornell, navi)

An agent must first follow navigation instructions in a real-life visual urban environment to a goal position, and then identify in the observed image a location described in natural language to find a hidden object. The data contains 9,326 examples of English instructions and spatial descriptions paired with demonstrations.

VCR 201811 (home, arXiv, UW/AI2)

Visual commonsense reasoning dataset. Visual Commonsense Reasoning (VCR) is a new task and large-scale dataset for cognition-level visual understanding. With one glance at an image, we can effortlessly imagine the world beyond the pixels (e.g. that [person1] ordered pancakes). While this task is easy for humans, it is tremendously difficult for today's vision systems, requiring higher-order cognition and commonsense reasoning about the world. We formalize this task as Visual Commonsense Reasoning. In addition to answering challenging visual questions expressed in natural language, a model must provide a rationale explaining why its answer is true.

  • 290k multiple choice questions
  • 290k correct answers and rationales: one per question
  • 110k images
  • Counterfactual choices obtained with minimal bias, via our new Adversarial Matching approach
  • Answers are 7.5 words on average; rationales are 16 words.
  • High human agreement (>90%)
  • Scaffolded on top of 80 object categories from COCO
  • Is now (as of Dec 3, 2018) available for download!

NLVR2 201811 (home, arXiv, github, Cornell, qa)

The data contains 107,296 examples of English sentences paired with web photographs. The task is to determine whether a natural language caption is true about a photograph. The data was collected through crowdsourcing, and solving the task requires reasoning about sets of objects, comparisons, and spatial relations. There are two related corpora: NLVR, with synthetically generated images, and NLVR2, which includes natural photographs.

HOW2 201811 (arXiv (2 cit), github, CMU)

In this paper, we introduce How2, a multimodal collection of instructional videos with English subtitles and crowdsourced Portuguese translations. We also present integrated sequence-to-sequence baselines for machine translation, automatic speech recognition, spoken language translation, and multimodal summarization. By making available data and code for several multimodal natural language tasks, we hope to stimulate more research on these and similar challenges, to obtain a deeper understanding of multimodality in language processing. The corpus consists of around 80,000 instructional videos (about 2,000 hours) with associated English subtitles and summaries. About 300 hours have also been translated into Portuguese using crowdsourcing, and were used during the JSALT 2018 Workshop.

TVQA 201809 (home, arXiv (2 cit), videoqa, UNC)

TVQA: Localized, Compositional Video Question Answering. TVQA is a large-scale video QA dataset based on 6 popular TV shows (Friends, The Big Bang Theory, How I Met Your Mother, House M.D., Grey's Anatomy, Castle). It consists of 152.5K QA pairs from 21.8K video clips, spanning over 460 hours of video. The questions are designed to be compositional, requiring systems to jointly localize relevant moments within a clip, comprehend subtitles-based dialogue, and recognize relevant visual concepts.

TEMPO 201809 (home, arXiv (1 cit), github, data, video, Berkeley)

TEMPOral reasoning in video and language (TEMPO) dataset. Localizing moments in a longer video via natural language queries is a new, challenging task at the intersection of language and video understanding. Though moment localization with natural language is similar to other language and vision tasks like natural language object retrieval in images, moment localization offers an interesting opportunity to model temporal dependencies and reasoning in text. Our dataset consists of two parts: a dataset with real videos and template sentences (TEMPO - Template Language) which allows for controlled studies on temporal language, and a human language dataset which consists of temporal sentences annotated by humans (TEMPO - Human Language).

RecipeQA 201809 (home, arXiv, slides, Hacettepe, qa)

RecipeQA is a dataset for multimodal comprehension of cooking recipes. It consists of over 36K question-answer pairs automatically generated from approximately 20K unique recipes with step-by-step instructions and images. Each question in RecipeQA involves multiple modalities such as titles, descriptions or images, and working towards an answer requires (i) joint understanding of images and text, (ii) capturing the temporal flow of events, and (iii) making sense of procedural knowledge.

CIFF (LANI and CHAI) 201809 (arXiv (3 cit), github, data and simulators, Cornell, navi+exec)

We propose to decompose instruction execution to goal prediction and action generation. We design a model that maps raw visual observations to goals using LINGUNET, a language-conditioned image generation network, and then generates the actions required to complete them. Our model is trained from demonstration only without external resources. To evaluate our approach, we introduce two benchmarks for instruction following: LANI, a navigation task; and CHAI, where an agent executes household instructions.

Talk the Walk 201808 (arXiv (3 cit), github, FAIR, navi+dial)

Two agents, a "tourist" and a "guide", interact with each other via natural language in order to have the tourist navigate towards the correct location. The guide has access to a map and knows the target location but not the tourist location, while the tourist does not know the way but can navigate in a 360-degree street view environment. The task involves "perception" for the tourist observing the world, "action" for the tourist to navigate through the environment, and "interactive dialogue" for the tourist and guide to work towards their common goal.

DRIF 201806 (github, paper1 (3 cit), paper2 (3 cit), Cornell, navi)

Following natural language navigation instructions on a realistic simulated quadcopter.

ADS 201806 (home, paper (16 cit), data, challenge2018, Pittsburgh)

Automatic Understanding of Visual Advertisements: a large annotated dataset of image and video ads. In this dataset, we provide over 64,000 ad images annotated with the topic of the ad (e.g. the product, or the topic in the case of public service announcements), the sentiment that the ad provokes, any symbolic references that the ad makes (e.g. an owl symbolizes wakefulness, ice symbolizes freshness), including bounding boxes containing the physical content that alludes symbolically to concepts outside the ad, and questions and answers about the meaning of the ad ("What should I do according to the ad? Why should I do it, according to the ad?").

VizWiz 201802 (home, arXiv (17 cit), challenge2018, UTAustin, qa)

VizWiz is proposed to empower a blind person to directly request in a natural manner what (s)he would like to know about the surrounding physical world.

CHALET 201801 (arXiv (10 cit), github, Cornell, navi+exec)

We present CHALET, a 3D house simulator with support for navigation and manipulation. CHALET includes 58 rooms and 10 house configurations, and allows users to easily create new house and room layouts. CHALET supports a range of common household activities, including moving objects, toggling appliances, and placing objects inside closeable containers. The environment and actions available are designed to create a challenging domain to train and evaluate autonomous agents, including for tasks that combine language, vision, and planning in a dynamic environment.

CoDraw 201712 (arXiv (6 cit), github, data, FAIR, draw)

CoDraw: Visual dialog for collaborative drawing. In this work, we propose a goal-driven collaborative task that contains vision, language, and action in a virtual environment as its core components. Specifically, we develop a collaborative `Image Drawing' game between two agents, called CoDraw. Our game is grounded in a virtual world that contains movable clip art objects. Two players, Teller and Drawer, are involved. The Teller sees an abstract scene containing multiple clip arts in a semantically meaningful configuration, while the Drawer tries to reconstruct the scene on an empty canvas using available clip arts. The two players communicate via two-way communication using natural language. We collect the CoDraw dataset of ~10K dialogs consisting of 138K messages exchanged between a Teller and a Drawer from Amazon Mechanical Turk (AMT). We analyze our dataset and present three models to model the players' behaviors, including an attention model to describe and draw multiple clip arts at each round. The attention models are quantitatively compared to the other models to show how the conventional approaches work for this new task. We also present qualitative visualizations.

IQA 201712 (arXiv (26 cit), github, youtube, UW/AI2, qa+navi)

We introduce Interactive Question Answering (IQA), the task of answering questions that require an autonomous agent to interact with a dynamic visual environment. IQA presents the agent with a scene and a question, like: "Are there any apples in the fridge?" The agent must navigate around the scene, acquire visual understanding of scene elements, interact with objects (e.g. open refrigerators) and plan for a series of actions conditioned on the question.

EmbodiedQA 201711 (home, arXiv (35 cit), FAIR/Gatech, qa+navi)

We present a new AI task -- Embodied Question Answering (EmbodiedQA) -- where an agent is spawned at a random location in a 3D environment and asked a question ("What color is the car?"). In order to answer, the agent must first intelligently navigate to explore the environment, gather information through first-person (egocentric) vision, and then answer the question ("orange"). This challenging task requires a range of AI skills -- active perception, language understanding, goal-driven navigation, commonsense reasoning, and grounding of language into actions. In this work, we develop the environments, end-to-end-trained reinforcement learning agents, and evaluation protocols for EmbodiedQA.

R2R 201711 (home, arXiv (38 cit), challenge2018, speaker-follower model: paper, code; Matterport3D: home, simulator, arXiv (56 cit), Australia, navi)

R2R is the first benchmark dataset for visually-grounded natural language navigation in real buildings. The dataset requires autonomous agents to follow human-generated navigation instructions in previously unseen buildings. For training, each instruction is associated with a Matterport3D Simulator trajectory. 22k instructions are available, with an average length of 29 words. A test evaluation server for this dataset is available at EvalAI.

FigureQA 201710 (home, arXiv (8 cit), Microsoft, qa)

We introduce FigureQA, a visual reasoning corpus of over one million question-answer pairs grounded in over 100,000 images. The images are synthetic, scientific-style figures from five classes: line plots, dot-line plots, vertical and horizontal bar graphs, and pie charts. We formulate our reasoning task by generating questions from 15 templates; questions concern various relationships between plot elements and examine characteristics like the maximum, the minimum, area-under-the-curve, smoothness, and intersection. Resolving such questions often requires reference to multiple plot elements and synthesis of information distributed spatially throughout a figure. To facilitate the training of machine learning systems, the corpus also includes side data that can be used to formulate auxiliary objectives. In particular, we provide the numerical data used to generate each figure as well as bounding-box annotations for all plot elements. We study the proposed visual reasoning task by training several models, including the recently proposed Relation Network as a strong baseline. Preliminary results indicate that the task poses a significant machine learning challenge. We envision FigureQA as a first step towards developing models that can intuitively recognize patterns from visual representations of data.
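
Template-based generation from the underlying numeric data can be sketched as follows; the template wording and data format here are illustrative stand-ins, not the official 15 templates:

```python
# Underlying data for a toy bar graph: label -> bar height.
figure = {"red": 3.0, "blue": 7.0, "green": 5.0}

def generate_questions(figure):
    """Instantiate yes/no templates from the plotted values."""
    lo = min(figure, key=figure.get)
    hi = max(figure, key=figure.get)
    qa = []
    for label in figure:
        qa.append((f"Is {label} the minimum?", label == lo))
        qa.append((f"Is {label} the maximum?", label == hi))
    for a in figure:
        for b in figure:
            if a != b:
                qa.append((f"Is {a} greater than {b}?", figure[a] > figure[b]))
    return qa

qa = generate_questions(figure)
print(len(qa))  # 12 question-answer pairs from 3 bars
```

Because the generator knows the numeric data, every answer comes for free; this is what makes a million-pair corpus feasible without human annotation.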

GuessWhich 201708 (arXiv (10 cit), github, Gatech)

In this work, we design a cooperative game - GuessWhich - to measure human-AI team performance in the specific context of the AI being a visual conversational agent. GuessWhich involves live interaction between the human and the AI. The AI, which we call ALICE, is provided an image which is unseen by the human. Following a brief description of the image, the human questions ALICE about this secret image to identify it from a fixed pool of images. We measure performance of the human-ALICE team by the number of guesses it takes the human to correctly identify the secret image after a fixed number of dialog rounds with ALICE.

NLVR 201707 (home, paper (20 cit), github, Cornell, qa)

A visual reasoning language dataset containing 92,244 examples of natural language statements grounded in synthetic images, with 3,962 unique sentences.

GuessWhat 201707 (home, arXiv (38 cit), data, github, Google/Montreal)

GuessWhat?! is a cooperative two-player guessing game. The goal of the game is to locate an unknown object in a rich image scene by asking a sequence of questions. The aim of this project is to facilitate research in combining visual understanding, natural language processing and cooperative agent interaction.

  • 155,280 played games
  • 821,889 questions+answers
  • 66,537 images
  • 134,073 objects

TQA 201707 (home, paper (19 cit), ai2)

TQA: Textbook Question Answering. The TQA dataset encourages work on the task of Multi-Modal Machine Comprehension (M3C). The M3C task builds on the popular Visual Question Answering (VQA) and Machine Comprehension (MC) paradigms by framing question answering as a machine comprehension task, where the context needed to answer questions is provided and composed of both text and images. The dataset constructed to showcase this task has been built from a middle school science curriculum that pairs a given question with a limited span of knowledge needed to answer it.

Visual Genome 201705 (home, paper (444 cit), Stanford)

Visual Genome is a dataset, a knowledge base, an ongoing effort to connect structured image concepts to language.

  • 108,077 Images
  • 5.4 Million Region Descriptions
  • 1.7 Million Visual Question Answers
  • 3.8 Million Object Instances
  • 2.8 Million Attributes
  • 2.3 Million Relationships
  • Everything Mapped to Wordnet Synsets

CLEVR 201612 (home, arXiv (184 cit), Stanford/FAIR, qa)

An artificially generated visual question answering dataset:

  • A training set of 70,000 images and 699,989 questions
  • A validation set of 15,000 images and 149,991 questions
  • A test set of 15,000 images and 14,988 questions
  • Answers for all train and val questions
  • Scene graph annotations for train and val images giving ground-truth locations, attributes, and relationships for objects
  • Functional program representations for all training and validation questions
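
The functional program annotations represent each question as a chain of elementary operations over the scene graph. A toy Python sketch (operation names and scene format are illustrative, not the official CLEVR schema):

```python
# A toy scene graph: each object is a dict of attributes.
scene = [
    {"shape": "cube", "color": "red", "size": "large"},
    {"shape": "sphere", "color": "red", "size": "small"},
    {"shape": "cube", "color": "blue", "size": "small"},
]

def filter_color(objs, color):
    return [o for o in objs if o["color"] == color]

def filter_shape(objs, shape):
    return [o for o in objs if o["shape"] == shape]

def count(objs):
    return len(objs)

# "How many red cubes are there?" as a chain of elementary operations:
program = [("filter_color", "red"), ("filter_shape", "cube"), ("count",)]

funcs = {"filter_color": filter_color, "filter_shape": filter_shape, "count": count}
state = scene
for op, *args in program:
    state = funcs[op](state, *args)
print(state)  # 1
```

Having this executable form alongside each question is what makes CLEVR useful for studying compositional reasoning in isolation from perception.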

MarioQA (home, arXiv (11 cit), github, videoqa, postech/korea)

MarioQA: Answering Questions by Watching Gameplay Videos. From a total of 13 hours of gameplay, we collect 187,757 examples with automatically generated QA pairs. There are 92,874 unique QA pairs, and each video clip contains 11.3 events on average. There are 78,297, 64,619 and 44,841 examples in NT, ET and HT, respectively. Note that only 3.5K examples (less than 2%) can be answered using a single frame of video. The remaining examples are event-centric: 98K examples require focusing on a single event out of multiple ones, while 86K need to recognize multiple events for counting (55K) or identifying their temporal relationships (44K). Note that some instances belong to both cases.

VisDial 201611 (home, paper (119 cit), challenge2018, Gatech, qa)

Visual Dialog is a novel task that requires an AI agent to hold a meaningful dialog with humans in natural, conversational language about visual content. Specifically, given an image, a dialog history, and a follow-up question about the image, the agent has to answer the question.

  • 120k images from COCO
  • 1 dialog / image
  • 10 rounds of question-answers / dialog
  • Total 1.2M dialog question-answers

Comics 201611 (arXiv (19 cit), github, umd)

We construct a dataset, COMICS, that consists of over 1.2 million panels (120 GB) paired with automatic textbox transcriptions. An in-depth analysis of COMICS demonstrates that neither text nor image alone can tell a comic book story, so a computer must understand both modalities to keep up with the plot. We introduce three cloze-style tasks that ask models to predict narrative and character-centric aspects of a panel given n preceding panels as context.

SCONE 201606 (home, paper1 (16 cit), paper2 (37 cit), CodaLab, github/clic-lab, Stanford, exec)

Sequential CONtext-dependent Execution dataset: The task in the SCONE dataset is to execute a sequence of actions according to the instructions. Each scenario contains a world with several objects (e.g., beakers), each with different properties (e.g., chemical colors and amounts). Given 5 sequential instructions in human language (e.g., "Pour from the first beaker into the yellow beaker" or "Mix it"), the system has to predict the final world state.
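
The execution task can be pictured as a tiny interpreter over world states. A hedged Python sketch of the beakers domain (the action names, color codes, and world encoding are made up for illustration, not the SCONE format):

```python
# Each beaker is a list of chemical color units, bottom to top.
world = {"1": ["g", "g"], "2": ["y"], "3": []}

def pour(world, src, dst, amount):
    """Move `amount` units from beaker src into beaker dst."""
    moved, world[src] = world[src][-amount:], world[src][:-amount]
    world[dst].extend(moved)
    return world

def mix(world, beaker):
    """Mixing turns the beaker's contents a single color ('b' for brown)."""
    world[beaker] = ["b"] * len(world[beaker])
    return world

# "Pour from the first beaker into the yellow beaker. Mix it."
world = pour(world, "1", "2", 2)
world = mix(world, "2")
print(world)  # {'1': [], '2': ['b', 'b', 'b'], '3': []}
```

The learning problem is to map each instruction to such an action given the current state; only the final world state is checked, so intermediate actions are latent.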

Blocks 201606 (home, paper1 (28 cit.), paper2 (41 cit.), github, ciff-models, ISI, exec)

Dataset where humans give instructions to robots using unrestricted natural language commands to build complex goal configurations in a blocks world. Example instruction from a sequence: "move the nvidia block to the right of the hp block".

VIST 201604 (home, arXiv (64 cit), github, challenge2017, challenge2018, Microsoft, desc)

Visual Storytelling Challenge (NAACL 2018). We introduce the first dataset for sequential vision-to-language, and explore how this data may be used for the task of visual storytelling. The dataset includes 81,743 unique photos in 20,211 sequences, aligned to descriptive and story language. VIST was previously known as "SIND", the Sequential Image Narrative Dataset.

AI2D 201603 (home, arXiv (30 cit), data, ai2, qa)

AI2D is a dataset of illustrative diagrams for research on diagram understanding and associated question answering. Each diagram has been densely annotated with object segmentations, diagrammatic and text elements. Each diagram has a corresponding set of questions and answers.

MovieQA 201512 (home, arXiv (129 cit), examples, videoqa, toronto)

We introduce the MovieQA dataset, which aims to evaluate automatic story comprehension from both video and text. The dataset consists of almost 15,000 multiple-choice questions obtained from over 400 movies and features high semantic diversity. Each question comes with a set of five highly plausible answers, only one of which is correct. The questions can be answered using multiple sources of information: movie clips, plots, subtitles, and, for a subset, scripts and DVS.

VQA 201505 (home, challenge2017, challenge2018, paper1 (878 cit), paper2 (67 cit), paper3 (160 cit), Gatech/Vtech, qa)

VQA is a new dataset containing open-ended questions about images. These questions require an understanding of vision, language and commonsense knowledge to answer.

  • 265,016 images (COCO and abstract scenes)
  • At least 3 questions (5.4 questions on average) per image
  • 10 ground truth answers per question
  • 3 plausible (but likely incorrect) answers per question
  • Automatic evaluation metric
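
The automatic evaluation metric is the consensus-based VQA accuracy, which scores a predicted answer against the 10 human answers. A minimal Python sketch of the core rule (the official implementation additionally normalizes answer strings and averages over subsets of annotators):

```python
def vqa_accuracy(predicted, human_answers):
    """VQA accuracy: min(#humans who gave this answer / 3, 1).

    An answer counts as fully correct if at least 3 of the 10
    annotators gave it; it receives partial credit otherwise.
    """
    matches = sum(1 for a in human_answers if a == predicted)
    return min(matches / 3.0, 1.0)

humans = ["orange"] * 6 + ["red"] * 3 + ["yellow"]
print(vqa_accuracy("orange", humans))  # full credit: 6 matches
print(vqa_accuracy("red", humans))     # full credit: exactly 3 matches
print(vqa_accuracy("yellow", humans))  # partial credit: 1 match
```

The min(n/3, 1) rule is what makes the 10 ground-truth answers per question useful: it rewards answers that any plurality of humans would give rather than a single reference string.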

Refer 201410 (paper1 (176 cit), paper2 (70 cit), github, UNC)

ReferItGame: Referring to Objects in Photographs of Natural Scenes. In this paper we introduce a new game to crowd-source natural language referring expressions. By designing a two player game, we can both collect and verify referring expressions directly within the game. To date, the game has produced a dataset containing 130,525 expressions, referring to 96,654 distinct objects, in 19,894 photographs of natural scenes.

SAIL 200607 (home, data, data2, data3, paper1 (226 cit), paper2 (280 cit), UTAustin, navi)

A corpus of 786 route instructions gathered from six people in three large-scale virtual indoor environments. Thirty-six other people followed these instructions and rated them for quality. These human participants finished at the intended destination on 69% of the trials; the state of the art is 35.4%. The instructions were later split into sentences and corresponding segments; the state of the art at the sentence level is 73%. Sample sentences: "With the wall on your left, walk forward", "Go two intersections down the pink hallway".


Full post...

December 03, 2018

Building a Language and Compiler for Machine Learning

Post at Julia Blog by Mike Innes et al.
Mention at techrepublic.

Since we originally proposed the need for a first-class language, compiler and ecosystem for machine learning (ML), there have been plenty of interesting developments in the field. Not only have the tradeoffs in existing systems, such as TensorFlow and PyTorch, not been resolved, but they are clearer than ever now that both frameworks contain distinct “static graph” and “eager execution” interfaces. Meanwhile, the idea of ML models fundamentally being differentiable algorithms – often called differentiable programming – has caught on...

Full post... Related link

November 27, 2018

Lost in Math by Sabine Hossenfelder

The book makes great points, especially regarding the faulty incentive structure of scientists and the charade of “research proposals” for funding. My only qualm is that the reader may come away thinking there are no objective measures of goodness for models, which is not true. Statistical learning theory, Bayesian Occam factors, algorithmic complexity etc. each grapple with this problem, yet not much of this is mentioned.
Full post... Related link

September 27, 2018

Ömer Kırnap, M.S. 2018

Current Position: PhD student at UCL (personal website)
M.S. Thesis: Transition Based Dependency Parsing with Deep Learning, Koç University, Department of Computer Engineering. September 2018. (PDF, Presentation).
Publications: bibtex.php
Code: CoNLL18 and CoNLL17

Thesis Abstract:
I introduce word and context embeddings derived from a language model representing the left/right context of a word instance, and demonstrate that context embeddings significantly improve the accuracy of a transition-based parser. Our multi-layer perceptron (MLP) parser making use of these embeddings was ranked 7th out of 33 participants (1st among transition-based parsers) in the CoNLL 2017 UD Shared Task. However, the MLP parser relies on additional hand-crafted features that summarize sequential information. I exploit recurrent neural networks to remove these features by implementing a tree-stack LSTM, and develop a new set of continuous embeddings called morphological feature embeddings. According to the official results of the CoNLL 2018 UD Shared Task, our tree-stack LSTM outperforms the MLP in transition-based dependency parsing.

Full post...

June 04, 2018

Erenay Dayanık, M.S. 2018

Current position: PhD student at University of Stuttgart, Germany (Linkedin)
M.S. Thesis: Morphological Tagging and Lemmatization with Neural Components. Koç University, Department of Computer Engineering. June, 2018. (PDF, Presentation, Code, Data)
Publications: bibtex.php


I describe and evaluate MorphNet, a language-independent, end-to-end model that is designed to combine morphological analysis and disambiguation. Traditionally, analysis of morphologically complex languages has been performed in two stages: (i) A morphological analyzer based on finite-state transducers produces all possible morphological analyses of a word, (ii) A statistical disambiguation model picks the correct analysis based on the context for each word. MorphNet uses a sequence-to-sequence recurrent neural network to combine analysis and disambiguation. The model consists of three LSTM encoders to create embeddings of various input features and a two-layer LSTM decoder to predict the correct morphological analysis. When MorphNet is trained with text labeled with correct morphological analyses, the model is able to achieve state-of-the-art or comparable results in twenty-six different languages.

Full post...

May 30, 2018

Knet-the-Julia-dope: An interactive book on deep learning.

Written by Manuel Antonio Morales (@moralesq). This repo is the Julia translation of the mxnet-the-straight-dope repo, a collection of notebooks designed to teach deep learning, MXNet, and the gluon interface. This project grew out of the MIT course 6.338 Modern Numerical Computing with Julia taught by Professor Alan Edelman. Our main objectives are:
  • Introduce the Julia language and its main packages in the context of deep learning
  • Introduce Julia's package Knet: an alternative/complementary option to MXNet
  • Leverage the strengths of Jupyter notebooks to present prose, graphics, equations, and code together in one place

Full post...

May 29, 2018

Wasserstein GAN: a Julia/Knet implementation

Written by Cem Eteke (@ceteke). This repository contains implementation of WGAN and DCGAN in Julia using Knet. Here is a detailed report about WGAN.
Full post...

May 28, 2018

Relational networks: a Julia/Knet implementation

Written by Erenay Dayanık (@ereday). Knet implementation of "A simple neural network module for relational reasoning" by Santoro et al. (2017). (Relational Networks, arXiv:1706.01427, blog post)
Full post...

May 27, 2018

Fast multidimensional reduction and broadcast operations on GPU for machine learning

Doğa Dikbayır, Enis Berk Çoban, İlker Kesen, Deniz Yuret, Didem Unat. Concurrency and Computation: Practice and Experience. 2018. (PDF). Abstract: Reduction and broadcast operations are commonly used in machine learning algorithms for different purposes. They appear widely in the calculation of the gradients of a loss function, one of the core computations in neural networks. Both operations are implemented naively in many libraries, usually for scalar reduction or broadcast; however, to our knowledge, there are no optimized multidimensional implementations available. This fact limits the performance of machine learning models that require these operations to be performed on tensors. In this work, we address the problem and propose two new strategies that extend the existing implementations to perform on tensors. We introduce formal definitions of both operations using tensor notation, investigate their mathematical properties, and exploit these properties to provide an efficient solution for each. We implement our parallel strategies and test them on a CUDA-enabled Tesla K40m GPU accelerator. Our implementations achieve up to 75% of the peak device memory bandwidth on different tensor sizes and dimensions. Significant speedups over the implementations available in the Knet deep learning framework are also achieved for both operations.
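
The multidimensional reduction and broadcast semantics the paper optimizes can be illustrated with NumPy; the paper's contribution is efficient CUDA kernels for these access patterns, not this reference semantics:

```python
import numpy as np

x = np.random.rand(32, 100, 10)  # e.g. (batch, features, time)

# Multidimensional reduction: sum out the batch and time dimensions,
# keeping them as size 1 so the result can broadcast back against x.
s = x.sum(axis=(0, 2), keepdims=True)  # shape (1, 100, 1)

# Multidimensional broadcast: the (1, 100, 1) tensor expands across
# its size-1 dimensions to match x elementwise.
y = x * s                               # shape (32, 100, 10)

print(s.shape, y.shape)
```

Exactly this reduce-then-broadcast pattern appears in gradient computations (e.g. summing a gradient over a batch and scaling activations), which is why a naive scalar implementation becomes a bottleneck on tensors.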
Full post...

Neural Style Transfer: a Julia notebook

Written by Cemil Cengiz (@cemilcengiz).

This notebook implements the deep CNN-based image style transfer algorithm from "Image Style Transfer Using Convolutional Neural Networks" (Gatys et al., CVPR 2016). The proposed technique takes two images as input: a content image (generally a photograph) and a style image (generally an artwork painting). It then produces an output image whose content (the objects in the image) resembles the "content image" while its style, i.e. the texture, is similar to the "style image". In other words, it re-draws the "content image" using the artistic style of the "style image".
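
The algorithm's two losses can be sketched in NumPy: content loss compares raw CNN activations, while style loss compares Gram matrices of the feature maps. The feature maps below are random stand-ins for real CNN activations, and the style weight is an arbitrary illustrative value:

```python
import numpy as np

def gram(features):
    """Gram matrix of a (channels, height*width) feature map:
    channel-wise correlations that capture texture, not spatial layout."""
    c, n = features.shape
    return features @ features.T / n

rng = np.random.default_rng(0)
content_feat = rng.standard_normal((64, 32 * 32))
style_feat = rng.standard_normal((64, 32 * 32))
output_feat = rng.standard_normal((64, 32 * 32))

# Content loss: match the raw activations of the content image.
content_loss = np.mean((output_feat - content_feat) ** 2)
# Style loss: match the Gram matrix of the style image.
style_loss = np.mean((gram(output_feat) - gram(style_feat)) ** 2)

total = content_loss + 1e3 * style_loss  # style weight is a free knob
```

In the full algorithm the output image itself (not a model) is optimized by gradient descent on this total loss, with the losses computed at several layers of a pretrained CNN.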

The images below show an original photograph followed by two different styles applied by the network.

Full post...

May 26, 2018

MorphNet: A sequence-to-sequence model that combines morphological analysis and disambiguation

Erenay Dayanık, Ekin Akyürek, Deniz Yuret (2018). arXiv:1805.07946. (PDF)

Abstract: We introduce MorphNet, a single model that combines morphological analysis and disambiguation. Traditionally, analysis of morphologically complex languages has been performed in two stages: (i) A morphological analyzer based on finite-state transducers produces all possible morphological analyses of a word, (ii) A statistical disambiguation model picks the correct analysis based on the context for each word. MorphNet uses a sequence-to-sequence recurrent neural network to combine analysis and disambiguation. We show that when trained with text labeled with correct morphological analyses, MorphNet obtains state-of-the-art or comparable results for nine different datasets in seven different languages.
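
The traditional two-stage pipeline that MorphNet replaces can be sketched as follows; the analyzer output, tag strings, and context scorer are toy stand-ins, not a real finite-state analyzer or statistical model:

```python
# Stage (i): a finite-state analyzer enumerates candidate analyses.
def analyze(word):
    return {"walks": ["walk+Verb+3sg", "walk+Noun+Plural"]}.get(word, [word])

# Stage (ii): a statistical model scores each analysis in context;
# this toy scorer simply prefers a verb reading after a pronoun.
def disambiguate(context, analyses):
    def score(a):
        return 1.0 if context[-1] == "she" and "+Verb" in a else 0.0
    return max(analyses, key=score)

best = disambiguate(["she"], analyze("walks"))
print(best)  # walk+Verb+3sg
```

MorphNet collapses both stages into one sequence-to-sequence network that maps the word in its context directly to the analysis string, so no hand-built analyzer is needed at inference time.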

Full post...

May 25, 2018

Happy birthday Raymond Smullyan

A mathematician friend of mine recently told me of a mathematician friend of his who everyday "takes a nap". Now, I never take naps. But I often fall asleep while reading -- which is very different from deliberately taking a nap! I am far more like my dogs Peekaboo, Peekatoo and Trixie than like my mathematician friend once removed. These dogs never take naps; they merely fall asleep. They fall asleep wherever and whenever they choose (which, incidentally is most of the time!). Thus these dogs are true sages.

I think this is all that Chinese philosophy is really about; the rest is mere elaboration!

Raymond Smullyan, The Tao is Silent (1977)

Full post...

May 24, 2018

A new dataset and model for learning to understand navigational instructions

Ozan Arkan Can, Deniz Yuret (2018). arXiv:1805.07952. (PDF).

Abstract: In this paper, we present a state-of-the-art model and introduce a new dataset for grounded language learning. Our goal is to develop a model that can learn to follow new instructions given prior instruction-perception-action examples. We based our work on the SAIL dataset which consists of navigational instructions and actions in a maze-like environment. The new model we propose achieves the best results to date on the SAIL dataset by using an improved perceptual component that can represent relative positions of objects. We also analyze the problems with the SAIL dataset regarding its size and balance. We argue that performance on a small, fixed-size dataset is no longer a good measure to differentiate state-of-the-art models. We introduce SAILx, a synthetic dataset generator, and perform experiments where the size and balance of the dataset are controlled.

Full post...

May 13, 2018

Tutorial: Deep Learning with Julia/Knet

Tutorial at Qatar Computing Research Institute, May 13, 2018. Thanks to Dr. Sanjay Chawla for the invitation.
Full post...

May 07, 2018

Deep Learning in NLP: A Brief History

Panel presentation at the International Symposium on Brain and Cognitive Science (ISBCS 2018)

Full post...

April 10, 2018