I am an associate professor of Computer Engineering at Koç University in Istanbul working at the Artificial Intelligence Laboratory. Previously I was at the MIT AI Lab and later co-founded Inquira, Inc. My research is in natural language processing and machine learning. For prospective students here are some research topics, papers, classes, blog posts and past students.
I am a faculty member in the Computer Engineering Department at Koç University and work at the Artificial Intelligence Laboratory. Before this I worked at the MIT Artificial Intelligence Laboratory and founded Inquira, Inc. My research areas are natural language processing and machine learning. For interested students: research topics, papers, the courses I teach, my Turkish posts, and our graduates.

December 19, 2016

Julia ve Knet ile Derin Öğrenmeye Giriş (Introduction to Deep Learning with Julia and Knet)

On Wednesday, December 21, 2016 at 19:30 I will give the talk "Julia ve Knet ile Derin Öğrenmeye Giriş" (Introduction to Deep Learning with Julia and Knet) at Data İstanbul.

Previous talk: METU Summer School on New Techniques in Machine Learning and Information Processing (ODTÜ Yapay Öğrenme ve Bilgi İşlemede Yeni Teknikler Yaz Okulu), September 6-9, 2016, METU, Ankara. (URL, Slides, Video)

Full post...

December 14, 2016

CharNER: Character-Level Named Entity Recognition

Onur Kuru, Ozan Arkan Can and Deniz Yuret. 2016. COLING. Osaka. (PDF, Presentation)


We describe and evaluate a character-level tagger for language-independent Named Entity Recognition (NER). Instead of words, a sentence is represented as a sequence of characters. The model consists of stacked bidirectional LSTMs that take characters as input and output tag probabilities for each character. These probabilities are then converted to consistent word-level named entity tags using a Viterbi decoder. We are able to achieve close to state-of-the-art NER performance in seven languages with the same basic model, using only labeled NER data and no hand-engineered features or other external resources such as syntactic taggers or gazetteers.
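As a concrete illustration of the decoding step: if consistency within words were the only constraint, the word-level decode would reduce to a per-word argmax over summed character log-probabilities (the paper's Viterbi decoder can additionally enforce cross-word tag-sequence constraints). The sketch below uses toy probabilities invented for illustration; it is not the paper's code or model.

```python
import math

def word_tags(chars, char_probs, tagset):
    """Collapse character-level tag probabilities into one tag per word.

    chars: the sentence as a string (words separated by spaces).
    char_probs: one dict per character, mapping tag -> probability.
    With only a within-word consistency constraint, decoding
    decomposes into a per-word argmax over summed log-probabilities.
    """
    tags, start = [], 0
    for end in [i for i, c in enumerate(chars) if c == " "] + [len(chars)]:
        span = char_probs[start:end]
        best = max(tagset, key=lambda t: sum(math.log(p[t]) for p in span))
        tags.append(best)
        start = end + 1  # skip the space character
    return tags

sent = "John runs"
# toy per-character probabilities over a tiny tagset {PER, O}
probs = ([{"PER": 0.9, "O": 0.1}] * 4 + [{"PER": 0.5, "O": 0.5}]
         + [{"PER": 0.2, "O": 0.8}] * 4)
print(word_tags(sent, probs, ["PER", "O"]))  # → ['PER', 'O']
```

In the real model the per-character probabilities come from the stacked bidirectional LSTMs rather than being given by hand.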

Full post...

December 13, 2016

Learning grammatical categories using paradigmatic representations: Substitute words for language acquisition

Mehmet Ali Yatbaz, Volkan Cirik, Aylin Küntay and Deniz Yuret. 2016. COLING. Osaka. (PDF, Poster)


Learning word categories is a fundamental task in language acquisition. Previous studies show that co-occurrence patterns of preceding and following words are essential for grouping words into categories. However, the neighboring words, or frames, are rarely repeated exactly in the data. This creates data sparsity and hampers learning for frame-based models. In this work, we propose a paradigmatic representation of word context which uses probable substitutes instead of frames. Our experiments on child-directed speech show that models based on probable substitutes learn more accurate categories with fewer examples compared to models based on frames.
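The substitute distribution in the abstract can be illustrated with a toy bigram model: the probability of a word filling the slot "left ___ right" is proportional to how well it fits both neighbors. The counts, vocabulary, and function below are invented for illustration; the paper uses a large n-gram language model.

```python
from collections import Counter

# Toy bigram counts (assumed corpus statistics, for illustration only)
bigrams = Counter({
    ("the", "dog"): 4, ("the", "cat"): 3, ("the", "car"): 1,
    ("dog", "barks"): 3, ("cat", "barks"): 1, ("cat", "purrs"): 2,
})
unigrams = Counter()
for (a, b), n in bigrams.items():
    unigrams[a] += n

def substitutes(left, right, vocab):
    """Distribution over words that could fill the slot 'left ___ right'.

    Under a bigram model, P(w | left, right) is proportional to
    P(w | left) * P(right | w).
    """
    scores = {}
    for w in vocab:
        p_w = bigrams[(left, w)] / unigrams[left] if unigrams[left] else 0.0
        p_r = bigrams[(w, right)] / unigrams[w] if unigrams[w] else 0.0
        scores[w] = p_w * p_r
    z = sum(scores.values()) or 1.0
    return {w: s / z for w, s in scores.items()}

dist = substitutes("the", "barks", ["dog", "cat", "car"])
# "dog" dominates: it both follows "the" often and precedes "barks"
```

Two contexts then count as similar when their substitute distributions are similar, even if the exact frames never co-occur, which is what mitigates the sparsity problem described above.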

Full post...

December 10, 2016

Knet: beginning deep learning with 100 lines of Julia (NIPS workshop)

Deniz Yuret. 2016. Machine Learning Systems Workshop at NIPS 2016. Barcelona. (PDF, Slides, Poster)


Knet (pronounced "kay-net") is the Koç University machine learning framework implemented in Julia, a high-level, high-performance, dynamic programming language. Unlike gradient-generating compilers such as Theano and TensorFlow, which restrict users to a modeling mini-language, Knet allows models to be defined by simply describing their forward computation in plain Julia, allowing the use of loops, conditionals, recursion, closures, tuples, dictionaries, array indexing, concatenation and other high-level language features. High performance is achieved by combining automatic differentiation of most of Julia with efficient GPU kernels and memory management. Several examples and benchmarks are provided to demonstrate that GPU support and automatic differentiation of a high-level language are sufficient for concise definition and efficient training of sophisticated models.
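The Knet workflow — write the loss as an ordinary forward function, obtain its gradient automatically, and iterate SGD — can be mimicked in Python. In the sketch below, central finite differences stand in for Knet's automatic differentiation; this is an analogy of the programming style, not Knet's actual Julia API.

```python
def loss(w, x, y):
    # plain forward computation: linear model, squared error
    pred = [w[0] * xi + w[1] for xi in x]
    return sum((p - yi) ** 2 for p, yi in zip(pred, y))

def grad(f):
    # numeric stand-in for automatic differentiation:
    # returns a function computing d f / d w by central differences
    eps = 1e-6
    def g(w, *args):
        out = []
        for i in range(len(w)):
            wp, wm = list(w), list(w)
            wp[i] += eps
            wm[i] -= eps
            out.append((f(wp, *args) - f(wm, *args)) / (2 * eps))
        return out
    return g

x, y = [1.0, 2.0, 3.0], [3.0, 5.0, 7.0]  # target: y = 2x + 1
w = [0.0, 0.0]
gloss = grad(loss)
for _ in range(2000):                    # SGD training loop
    g = gloss(w, x, y)
    w = [wi - 0.02 * gi for wi, gi in zip(w, g)]
# w converges to approximately [2.0, 1.0]
```

The point of the analogy is that `loss` is arbitrary host-language code (it could contain loops, conditionals, or recursion), and the gradient is obtained by wrapping it, rather than by expressing the model in a separate graph-building mini-language.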

Full post...

November 01, 2016

Transfer Learning for Low-Resource Neural Machine Translation

Barret Zoph, Deniz Yuret, Jonathan May and Kevin Knight. 2016. Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pp. 1568-1575, Austin, Texas. (PDF)


The encoder-decoder framework for neural machine translation (NMT) has been shown effective in large data scenarios, but is much less effective for low-resource languages. We present a transfer learning method that significantly improves BLEU scores across a range of low-resource languages. Our key idea is to first train a high-resource language pair (the parent model), then transfer some of the learned parameters to the low-resource pair (the child model) to initialize and constrain training. Using our transfer learning method we improve baseline NMT models by an average of 5.6 BLEU on four low-resource language pairs. Ensembling and unknown word replacement add another 2 BLEU, which brings the NMT performance on low-resource machine translation close to a strong syntax-based machine translation (SBMT) system, exceeding its performance on one language pair. Additionally, using the transfer learning model for re-scoring, we can improve the SBMT system by an average of 1.3 BLEU, improving the state-of-the-art on low-resource machine translation.
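The parent-to-child transfer can be sketched as a parameter-copying step: everything except the parameters tied to the parent's source language is reused, and the source-side embeddings are re-initialized for the child's vocabulary. The parameter names, shapes, and vocabularies below are hypothetical and chosen for illustration; real NMT models hold large tensors, and the paper additionally constrains some child parameters during training.

```python
import random

def init_child_from_parent(parent_params, child_src_vocab, dim, seed=0):
    """Initialize a child (low-resource) NMT model from a trained parent.

    Sketch of the transfer idea (not the paper's code): parameters tied
    to the parent's source language cannot be reused, so the source
    embeddings are re-initialized for the child's vocabulary, while all
    remaining parameters (decoder, target embeddings, ...) are copied.
    """
    rng = random.Random(seed)
    child = {}
    for name, value in parent_params.items():
        if name == "src_embeddings":
            # new source language: fresh random embeddings
            child[name] = {w: [rng.uniform(-0.1, 0.1) for _ in range(dim)]
                           for w in child_src_vocab}
        else:
            child[name] = value  # transferred from the parent model
    return child

# hypothetical parent trained on a high-resource pair, e.g. French-English
parent = {
    "src_embeddings": {"le": [0.5, 0.5], "chat": [0.1, 0.9]},
    "tgt_embeddings": {"the": [0.3, 0.7], "cat": [0.2, 0.8]},
    "decoder": [[0.1, 0.2], [0.3, 0.4]],
}
# child for a low-resource source language with its own vocabulary
child = init_child_from_parent(parent, ["ev", "kedi"], dim=2)
```

Because the target language is shared between parent and child, the transferred decoder and target embeddings give the child model a strong starting point, which is where the reported BLEU gains come from.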

Full post...