December 19, 2016

Introduction to Deep Learning with Julia and Knet

Presentations:
  • Data İstanbul, Wednesday, December 21, 2016, 19:30 (URL, Slides, Video).
  • ODTÜ Summer School on New Techniques in Machine Learning and Information Processing (OBAYO), September 6-9, 2016, ODTÜ, Ankara. (URL, Slides, Video)
  • İsmail Arı Computer Science and Engineering Scientific Training Event (ISMAIL 2017), July 31 - August 4, 2017, Boğaziçi University, İstanbul. (URL).

Full post...

December 14, 2016

CharNER: Character-Level Named Entity Recognition

Onur Kuru, Ozan Arkan Can and Deniz Yuret. 2016. COLING. Osaka. (PDF, Presentation)

Abstract

We describe and evaluate a character-level tagger for language-independent Named Entity Recognition (NER). Instead of words, a sentence is represented as a sequence of characters. The model consists of stacked bidirectional LSTMs that take characters as input and output tag probabilities for each character. These probabilities are then converted to consistent word-level named entity tags using a Viterbi decoder. We are able to achieve close to state-of-the-art NER performance in seven languages with the same basic model, using only labeled NER data and no hand-engineered features or other external resources such as syntactic taggers or gazetteers.
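The decoding step described in the abstract can be sketched as follows. This is an illustrative toy in Python, with made-up tags, probabilities, and a transition table; the paper's actual model, tag set, and constraints may differ:

```python
import math

def viterbi(char_probs, tags, trans):
    """Find the most likely tag sequence given per-character tag
    probabilities and transition probabilities (log-space Viterbi)."""
    n = len(char_probs)
    # best[i][t] = best log-prob of any path ending at position i with tag t
    best = [{t: math.log(char_probs[0][t]) for t in tags}]
    back = [{}]
    for i in range(1, n):
        best.append({})
        back.append({})
        for t in tags:
            prev, score = max(
                ((p, best[i - 1][p] + math.log(trans.get((p, t), 1e-12)))
                 for p in tags),
                key=lambda x: x[1])
            best[i][t] = score + math.log(char_probs[i][t])
            back[i][t] = prev
    # Trace back the best path from the highest-scoring final tag.
    last = max(tags, key=lambda t: best[-1][t])
    path = [last]
    for i in range(n - 1, 0, -1):
        path.append(back[i][path[-1]])
    return path[::-1]

# Toy example: 3 characters, tags O (outside) and P (person),
# uniform transitions (a real tagger would forbid invalid ones).
tags = ["O", "P"]
trans = {(a, b): 0.5 for a in tags for b in tags}
probs = [{"O": 0.9, "P": 0.1}, {"O": 0.2, "P": 0.8}, {"O": 0.3, "P": 0.7}]
print(viterbi(probs, tags, trans))  # → ['O', 'P', 'P']
```

In the paper's setting the per-character probabilities come from the stacked bidirectional LSTMs, and the transition structure enforces that the characters of one word all receive a consistent tag.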


Full post...

December 13, 2016

Learning grammatical categories using paradigmatic representations: Substitute words for language acquisition

Mehmet Ali Yatbaz, Volkan Cirik, Aylin Küntay and Deniz Yuret. 2016. COLING. Osaka. (PDF, Poster)

Abstract

Learning word categories is a fundamental task in language acquisition. Previous studies show that co-occurrence patterns of preceding and following words are essential to group words into categories. However, the neighboring words, or frames, are rarely repeated exactly in the data. This creates data sparsity and hampers learning for frame-based models. In this work, we propose a paradigmatic representation of word context which uses probable substitutes instead of frames. Our experiments on child-directed speech show that models based on probable substitutes learn more accurate categories with fewer examples compared to models based on frames.
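As a toy illustration of the substitute idea (not the paper's actual model, which uses a much stronger language model): with a simple bigram model, a candidate substitute w for the slot between a left and right neighbor can be scored by P(w | left) · P(right | w). The corpus and names below are invented for the example:

```python
from collections import Counter

corpus = "the cat sat on the mat the dog sat on the rug".split()

# Maximum-likelihood bigram statistics from the toy corpus.
bigrams = Counter(zip(corpus, corpus[1:]))
unigrams = Counter(corpus)

def p(next_word, prev_word):
    """Bigram probability P(next_word | prev_word)."""
    return bigrams[(prev_word, next_word)] / unigrams[prev_word]

def substitutes(left, right, vocab):
    """Distribution over words that could fill the slot 'left _ right':
    P(w | left, right) is proportional to P(w | left) * P(right | w)."""
    scores = {w: p(w, left) * p(right, w) for w in vocab}
    total = sum(scores.values())
    return {w: s / total for w, s in scores.items() if s > 0}

# Words that could fill the slot in "the _ sat": here, cat and dog.
print(sorted(substitutes("the", "sat", set(corpus)).items()))
```

Unlike a frame, which must match exactly, the substitute distribution gives partial credit to every word the language model considers plausible in the slot, which is what mitigates the sparsity problem described above.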


Full post...

December 10, 2016

Knet: beginning deep learning with 100 lines of Julia (NIPS workshop)

Deniz Yuret. 2016. Machine Learning Systems Workshop at NIPS 2016. Barcelona. (PDF, Slides, Poster)

Abstract

Knet (pronounced "kay-net") is the Koç University machine learning framework implemented in Julia, a high-level, high-performance, dynamic programming language. Unlike gradient-generating compilers such as Theano and TensorFlow, which restrict users to a modeling mini-language, Knet allows models to be defined by just describing their forward computation in plain Julia, allowing the use of loops, conditionals, recursion, closures, tuples, dictionaries, array indexing, concatenation, and other high-level language features. High performance is achieved by combining automatic differentiation of most of Julia with efficient GPU kernels and memory management. Several examples and benchmarks are provided to demonstrate that GPU support and automatic differentiation of a high-level language are sufficient for concise definition and efficient training of sophisticated models.
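Knet itself is Julia; purely as an illustration of the "differentiate the plain forward code" idea the abstract describes, here is a minimal hand-rolled reverse-mode automatic differentiation sketch in Python (the class, model, and data are invented for the example):

```python
class Var:
    """A scalar that records how it was computed, so gradients can be
    propagated backward through plain Python code."""
    def __init__(self, value, parents=()):
        self.value = value
        self.parents = parents  # (parent_var, local_gradient) pairs
        self.grad = 0.0

    def __add__(self, other):
        other = other if isinstance(other, Var) else Var(other)
        return Var(self.value + other.value, [(self, 1.0), (other, 1.0)])

    def __mul__(self, other):
        other = other if isinstance(other, Var) else Var(other)
        return Var(self.value * other.value,
                   [(self, other.value), (other, self.value)])

    def __sub__(self, other):
        other = other if isinstance(other, Var) else Var(other)
        return self + other * -1.0

    def backward(self, grad=1.0):
        # Accumulate this contribution and push it to the parents.
        self.grad += grad
        for parent, local in self.parents:
            parent.backward(grad * local)

# The model is just ordinary code: the forward computation of a loss.
def loss(w, b, x, y):
    pred = w * x + b
    err = pred - y
    return err * err

# Gradient descent on a single data point (x=2, y=5).
w, b = Var(0.0), Var(0.0)
for _ in range(100):
    w.grad = b.grad = 0.0
    l = loss(w, b, 2.0, 5.0)
    l.backward()
    w.value -= 0.1 * w.grad
    b.value -= 0.1 * b.grad
# Converges to w ≈ 2, b ≈ 1, so that 2x + 1 fits the point (2, 5).
```

In Knet this role is played by automatic differentiation of ordinary Julia code, as the abstract describes, so the same loop-and-conditional style extends to sophisticated models.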


Full post...