June 04, 2018

Erenay Dayanık, M.S. 2018

Current position: PhD student at University of Stuttgart, Germany (LinkedIn)
M.S. Thesis: Morphological Tagging and Lemmatization with Neural Components. Koç University, Department of Computer Engineering. June 2018. (PDF, Presentation, Code, Data)
Publications: bibtex.php

Abstract

I describe and evaluate MorphNet, a language-independent, end-to-end model that is designed to combine morphological analysis and disambiguation. Traditionally, analysis of morphologically complex languages has been performed in two stages: (i) a morphological analyzer based on finite-state transducers produces all possible morphological analyses of a word; (ii) a statistical disambiguation model picks the correct analysis for each word based on its context. MorphNet uses a sequence-to-sequence recurrent neural network to combine analysis and disambiguation. The model consists of three LSTM encoders that create embeddings of various input features and a two-layer LSTM decoder that predicts the correct morphological analysis. When MorphNet is trained with text labeled with correct morphological analyses, the model achieves state-of-the-art or comparable results in twenty-six different languages.
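
To make the architecture concrete, here is a minimal, hypothetical Knet sketch of the encoder/decoder wiring described above. It is not the thesis code (that is linked above); the type name, field names, and sizes are all made up for illustration.

    using Knet

    # Three LSTM encoders over input features feed a two-layer LSTM decoder.
    struct MorphNetSketch
        charenc; wordenc; ctxenc   # LSTM encoders for character, word, and context features
        dec                        # two-layer LSTM decoder
        embed; proj                # output-tag embeddings and score projection
    end

    function MorphNetSketch(ntags; emb=64, hid=128)
        MorphNetSketch(
            RNN(emb, hid, rnnType=:lstm),                    # character encoder
            RNN(hid, hid, rnnType=:lstm),                    # word encoder
            RNN(hid, hid, rnnType=:lstm),                    # context encoder
            RNN(emb + hid, hid, rnnType=:lstm, numLayers=2), # decoder
            param(emb, ntags),                               # tag embedding matrix
            param(ntags, hid))                               # hidden state -> tag scores
    end

    # One greedy decoding step: condition on an encoder summary vector and the
    # embedding of the previously emitted tag, then pick the best next tag.
    function decodestep(m::MorphNetSketch, summary, prevtag)
        x = vcat(m.embed[:, prevtag], summary)
        h = m.dec(reshape(x, :, 1, 1))        # (features,1,1) -> (hidden,1,1)
        scores = m.proj * reshape(h, :, 1)
        return argmax(vec(Array(scores)))
    end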


Full post...

May 30, 2018

Knet-the-Julia-dope: An interactive book on deep learning.

Written by Manuel Antonio Morales (@moralesq). This repo is the Julia translation of the mxnet-the-straight-dope repo, a collection of notebooks designed to teach deep learning, MXNet, and the gluon interface. This project grew out of the MIT course 6.338 Modern Numerical Computing with Julia taught by Professor Alan Edelman. Our main objectives are:
  • Introduce the Julia language and its main packages in the context of deep learning
  • Introduce Julia's package Knet: an alternative/complementary option to MXNet (see the short example after this list)
  • Leverage the strengths of Jupyter notebooks to present prose, graphics, equations, and code together in one place
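
To give a taste of the Knet style the notebooks build on, here is a minimal, self-contained example of our own (not taken from the book): linear regression on toy data, using Knet's grad and a hand-written SGD loop.

    using Knet

    predict(w, x) = w[1] * x .+ w[2]                       # simple linear model
    loss(w, x, y) = sum(abs2, predict(w, x) .- y) / size(y, 2)
    lossgradient = grad(loss)                              # gradient w.r.t. w

    w = Any[0.1 * randn(1, 10), 0.0]                       # weights and bias
    x = randn(10, 100)                                     # toy inputs
    y = randn(1, 10) * x .+ 0.1 .* randn(1, 100)           # toy targets
    for epoch in 1:100
        dw = lossgradient(w, x, y)
        for i in 1:length(w)
            w[i] -= 0.1 * dw[i]                            # plain SGD update
        end
    end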

Full post...

May 29, 2018

Wasserstein GAN: a Julia/Knet implementation

Written by Cem Eteke (@ceteke). This repository contains implementations of WGAN and DCGAN in Julia using Knet. Here is a detailed report about WGAN.
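
For context, the heart of WGAN (Arjovsky et al., 2017) is a critic trained on a mean-difference loss with weight clipping to enforce a Lipschitz constraint. The following hypothetical Knet sketch shows just that update; it is not code from the repository, and the tiny MLPs stand in for the DCGAN-style networks it actually uses.

    using Knet, Statistics

    # Tiny placeholder networks; the repository uses DCGAN-style conv nets.
    critic(w, x)    = w[2] * max.(0, w[1] * x)       # images -> scalar scores
    generator(w, z) = w[2] * max.(0, w[1] * z)       # noise -> fake images

    # Critic objective: maximize E[f(x_real)] - E[f(G(z))], i.e. minimize its negation.
    criticloss(wc, wg, x, z) = mean(critic(wc, generator(wg, z))) - mean(critic(wc, x))
    criticgrad = grad(criticloss)                    # gradient w.r.t. wc

    function criticstep!(wc, wg, x, z; lr=5e-5, c=0.01)
        g = criticgrad(wc, wg, x, z)
        for i in 1:length(wc)
            wc[i] -= lr * g[i]                       # RMSProp in the paper; plain SGD here
            wc[i] = clamp.(wc[i], -c, c)             # weight clipping (Lipschitz constraint)
        end
    end

    # Example shapes: 784-d flattened images, 100-d noise, batch of 32.
    wc = Any[0.1 * randn(64, 784), 0.1 * randn(1, 64)]
    wg = Any[0.1 * randn(64, 100), 0.1 * randn(784, 64)]
    criticstep!(wc, wg, randn(784, 32), randn(100, 32))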
Full post...

May 28, 2018

Relational networks: a Julia/Knet implementation

Written by Erenay Dayanık (@ereday). Knet implementation of "A simple neural network module for relational reasoning" by Santoro et al. (2017). (Relational Networks, arXiv:1706.01427, blog post)
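
The module itself fits in a few lines. Here is a hypothetical sketch of the RN core, RN(O) = f(sum over all pairs (i,j) of g(o_i, o_j)), with g and f as small placeholder MLPs; it is an illustration, not code from the linked repository.

    using Knet

    mlp(w, x) = w[3] * max.(0, w[1] * x .+ w[2]) .+ w[4]   # placeholder 2-layer MLP

    # Apply g to every ordered pair of object representations, sum the results,
    # then apply f to the aggregate.
    function relnet(wf, wg, objects)
        s = sum(mlp(wg, vcat(oi, oj)) for oi in objects, oj in objects)
        return mlp(wf, s)
    end

    # Example: four random 8-d "objects", 32-d pairwise relations, 10-d output.
    objects = [randn(8) for _ in 1:4]
    wg = Any[0.1 * randn(32, 16), zeros(32), 0.1 * randn(32, 32), zeros(32)]
    wf = Any[0.1 * randn(32, 32), zeros(32), 0.1 * randn(10, 32), zeros(10)]
    y = relnet(wf, wg, objects)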
Full post...

May 27, 2018

Fast multidimensional reduction and broadcast operations on GPU for machine learning

Doğa Dikbayır, Enis Berk Çoban, İlker Kesen, Deniz Yuret, Didem Unat. Concurrency and Computation: Practice and Experience. 2018. (PDF). Abstract: Reduction and broadcast operations are commonly used in machine learning algorithms for different purposes. They appear widely in the calculation of the gradients of a loss function, one of the core computations in neural networks. Both operations are implemented naively in many libraries, usually only for scalar reduction or broadcast; however, to our knowledge, there are no optimized multidimensional implementations available. This limits the performance of machine learning models that require these operations to be performed on tensors. In this work, we address the problem and propose two new strategies that extend the existing implementations to operate on tensors. We introduce formal definitions of both operations using tensor notation, investigate their mathematical properties, and exploit these properties to provide an efficient solution for each. We implement our parallel strategies and test them on a CUDA-enabled Tesla K40m GPU accelerator. Our implementations achieve up to 75% of the peak device memory bandwidth on different tensor sizes and dimensions. Significant speedups over the implementations available in the Knet deep learning framework are also achieved for both operations.
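
To make the two operations concrete: a multidimensional reduction collapses chosen dimensions of a tensor, and a broadcast expands a smaller tensor across the missing dimensions of a larger one. A small CPU illustration in Julia (our sketch, not the paper's CUDA code):

    A = randn(100, 50)          # a 2-D tensor
    s = sum(A, dims=2)          # reduction over dim 2: result is 100x1
    m = s ./ size(A, 2)         # per-row means, still 100x1
    B = A .- m                  # broadcast: the 100x1 column expands over 50 columns
    # With A = KnetArray(randn(100, 50)), the same expressions run on the GPU,
    # which is where the optimized kernels described above apply.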
Full post...

Neural Style Transfer: a Julia notebook

Written by Cemil Cengiz (@cemilcengiz).

This notebook implements the deep-CNN-based image style transfer algorithm from "Image Style Transfer Using Convolutional Neural Networks" (Gatys et al., CVPR 2016). The technique takes two images as input: a content image (generally a photograph) and a style image (generally a painting). It then produces an output image whose content (the objects in the image) resembles the "content image", while its style, i.e. the texture, is similar to the "style image". In other words, it re-draws the "content image" using the artistic style of the "style image".
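
Both objectives have compact forms. A hypothetical Julia sketch of the per-layer losses from Gatys et al. (not the notebook's code), with feature maps reshaped to channels x positions matrices:

    # F: feature map of the generated image at one CNN layer (channels x positions);
    # P and S: the corresponding maps of the content and style images.
    gram(F) = F * F'                                 # Gram matrix captures style/texture
    contentloss(F, P) = sum(abs2, F - P) / 2
    styleloss(F, S) = sum(abs2, gram(F) - gram(S)) / (4 * size(F, 1)^2 * size(F, 2)^2)
    # Total objective: contentloss + alpha * styleloss summed over selected layers,
    # minimized with respect to the pixels of the output image.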

The images below show an original photograph followed by two different styles applied by the network.


Full post...

May 26, 2018

MorphNet: A sequence-to-sequence model that combines morphological analysis and disambiguation

Erenay Dayanık, Ekin Akyürek, Deniz Yuret (2018). arXiv:1805.07946. (PDF)

Abstract: We introduce MorphNet, a single model that combines morphological analysis and disambiguation. Traditionally, analysis of morphologically complex languages has been performed in two stages: (i) a morphological analyzer based on finite-state transducers produces all possible morphological analyses of a word; (ii) a statistical disambiguation model picks the correct analysis for each word based on its context. MorphNet uses a sequence-to-sequence recurrent neural network to combine analysis and disambiguation. We show that when trained with text labeled with correct morphological analyses, MorphNet obtains state-of-the-art or comparable results for nine different datasets in seven different languages.


Full post...

May 25, 2018

Happy birthday Raymond Smullyan

A mathematician friend of mine recently told me of a mathematician friend of his who every day "takes a nap". Now, I never take naps. But I often fall asleep while reading -- which is very different from deliberately taking a nap! I am far more like my dogs Peekaboo, Peekatoo and Trixie than like my mathematician friend once removed. These dogs never take naps; they merely fall asleep. They fall asleep wherever and whenever they choose (which, incidentally, is most of the time!). Thus these dogs are true sages.

I think this is all that Chinese philosophy is really about; the rest is mere elaboration!

Raymond Smullyan, The Tao is Silent (1977)


Full post...

May 24, 2018

A new dataset and model for learning to understand navigational instructions

Ozan Arkan Can, Deniz Yuret (2018). arXiv:1805.07952. (PDF).

Abstract: In this paper, we present a state-of-the-art model and introduce a new dataset for grounded language learning. Our goal is to develop a model that can learn to follow new instructions given prior instruction-perception-action examples. We base our work on the SAIL dataset, which consists of navigational instructions and actions in a maze-like environment. The new model we propose achieves the best results to date on the SAIL dataset by using an improved perceptual component that can represent relative positions of objects. We also analyze the problems with the SAIL dataset regarding its size and balance. We argue that performance on a small, fixed-size dataset is no longer a good measure to differentiate state-of-the-art models. We introduce SAILx, a synthetic dataset generator, and perform experiments where the size and balance of the dataset are controlled.


Full post...

May 13, 2018

Tutorial: Deep Learning with Julia/Knet

Tutorial at Qatar Computing Research Institute, May 13, 2018. Thanks to Dr. Sanjay Chawla for the invitation.
Full post...

May 07, 2018

Deep Learning in NLP: A Brief History

Panel presentation at the International Symposium on Brain and Cognitive Science (ISBCS 2018)


Full post...

April 10, 2018