December 11, 2009
Unsupervised morphological disambiguation using statistical language models
Abstract:
In this paper, we present a probabilistic model for the unsupervised morphological disambiguation problem. Our model assigns morphological parses T to the contexts C instead of assigning them to the words W. The target word $w \in W$ determines the possible parse set $T_w \subset T$ that can be used in $w$'s context $c_w \in C$. To assign the correct morphological parse $t \in T_w$ to $w$, our model finds the parse $t \in T_w$ that maximizes $P(t|c_w)$. These probabilities are estimated using a statistical language model and the vocabulary of the corpus. The system performs significantly better than an unsupervised baseline, and its performance is close to that of a supervised baseline.
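To make the selection rule concrete, here is a minimal Python sketch, not the paper's code: a toy add-one-smoothed bigram language model scores each candidate parse substituted into the word's context, and the highest-scoring parse wins. The tagged toy corpus and all names below are illustrative assumptions.

import math
from collections import Counter

# toy disambiguated corpus standing in for the real training data
tagged = ("the+Det old+Adj man+Verb the+Det boats+Noun .+Punc "
          "the+Det old+Adj man+Noun slept+Verb .+Punc").split()
bigrams = Counter(zip(tagged, tagged[1:]))
unigrams = Counter(tagged)
V = len(unigrams)

def logprob(tokens):
    # add-one smoothed bigram log-probability of a token sequence
    return sum(math.log((bigrams[(p, c)] + 1) / (unigrams[p] + V))
               for p, c in zip(tokens, tokens[1:]))

def disambiguate(context, i, candidates):
    # substitute each candidate parse t into position i and pick the t
    # whose substituted context the LM scores highest, approximating P(t|c_w)
    return max(candidates,
               key=lambda t: logprob(context[:i] + [t] + context[i + 1:]))

sent = "the+Det old+Adj MAN the+Det boats+Noun .+Punc".split()
print(disambiguate(sent, 2, ["man+Noun", "man+Verb"]))  # -> man+Verb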
Full post...
November 17, 2009
How to speak
KOC UNIVERSITY
ECOE 590 SEMINAR
******************
Speaker : Patrick Winston (video presentation)
Title : How to speak
Date : 17 November 2009, Tuesday
Time : 17:00
Place : ENG B30
Refreshments will be served at 16:45
Abstract: In this skillful lecture, Professor Patrick Winston of the Massachusetts Institute of Technology offers tips on how to give an effective talk, cleverly illustrating his suggestions by using them himself. He emphasizes how to start a lecture, cycling in on the material, using verbal punctuation to indicate transitions, describing "near misses" that strengthen the intended concept, and asking questions. He also talks about using the blackboard, overhead projections, props, and "how to stop."
Video available at http://isites.harvard.edu/fs/html/icb.topic58703/winston1.html
Full post... Related link
August 14, 2009
Önder Eker, M.S. 2009
M.S. Thesis: Parser Evaluation Using Textual Entailments. Boğaziçi University Department of Computer Engineering, August 2009. (PDF).
Abstract
Syntactic parsing is a basic problem in natural language processing. It can be defined as assigning a structure to a sentence. Two prevalent approaches to parsing are phrase-structure parsing and dependency parsing. A related problem is parser evaluation. PETE is a dependency-based evaluation in which the parse is represented as a list of simple sentences, similar to the Recognizing Textual Entailment task. Each entailment focuses on one relation. A priori training of annotators is not required. A program generates entailments from a dependency parse. Phrase-structure parses are converted to dependency parses to generate entailments. Additional entailments are generated for phrase-structure coordinations. Experiments are carried out with a function-tagger. Parsers are evaluated on the set of entailments generated from the Penn Treebank WSJ and Brown test sections. A phrase-structure parser obtained the highest score.
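As a rough illustration of the idea, not the actual generation program described in the thesis, an entailment generator might turn each dependency into a short simple sentence, existentially closing the arguments not under focus; the relation names and templates below are hypothetical:

# hypothetical PETE-style entailment generation from dependency triples
def entailments(deps):
    out = []
    for head, rel, dep in deps:
        if rel == "nsubj":                    # subject relation
            out.append(f"{dep} {head} something.")
        elif rel == "dobj":                   # object relation
            out.append(f"Somebody {head} {dep}.")
    return out

print(entailments([("bought", "nsubj", "Mary"), ("bought", "dobj", "a car")]))
# ['Mary bought something.', 'Somebody bought a car.']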
Full post...
August 07, 2009
ACL 2009 Notes
Tutorial: Kevin Knight, Philipp Koehn.
Topics in Statistical Machine Translation
MT: Phrase-based, hierarchical, and syntax-based approaches. Hiero is equivalent? to a syntax-based approach with a single nonterminal. Minimum Bayes Risk (MBR) chooses not the best option but the one that has maximum expected BLEU. Approaches that work with lattices and forests. System combination provides significant gain. Integrating the LM into decoding improves. Cube pruning makes hiero and syntax-based systems more efficient. Throwing out 99% of the phrase table gives no loss. Factored models help when factors are used as back-off. Reordering has been tried before. The source and target can be string, tree, or forest. Arabic and Chinese seem most popular. Good test: can you put a jumbled-up sentence in the right order? If we could output only grammatical sentences (simplified English?). Dependency LM for the output language. Lattice translation. Giza alignments not very accurate, guess = 80%. BLEU between human translators is at the level of the best systems, i.e. cannot be used as an upper bound.
Tutorial: Simone Paolo Ponzetto and Massimo Poesio.
State-of-the-art NLP Approaches to Coreference Resolution: Theory and Practical Recipes
Coref: ACE is the current standard dataset. Also MUC and other new ones. Anaphora approx 50% proper nouns, 40% noun phrases, 10% pronouns. NPs most difficult. Tough to know when discourse-new. Has evaluation problems like other fields. Would think deciding on anaphora is easier for annotators, but there are issues like whether to consider China and its population anaphoric.
P09-1001 [bib]: Qiang Yang; Yuqiang Chen; Gui-Rong Xue; Wenyuan Dai; Yong Yu.
Heterogeneous Transfer Learning for Image Clustering via the SocialWeb
ML: Qiang Yang gave the first invited talk. When the training and test sets have different distributions or different representations. Did not talk much about: when train and test have different labels. Link to causality. Unsupervised pre-learning boosting supervised learning curve.
P09-1002 [bib]: Katrin Erk; Diana McCarthy; Nicholas Gaylord.
Investigations on Word Senses and Word Usages
WSD: Annotators provide scores 1-5 for two tasks: how good a fit between a usage and sense, how close are two usages of same word. Claim forcing annotators to single decision detrimental. Also claim coarse senses insufficient to explain results.
P09-1010 [bib]: S.R.K. Branavan; Harr Chen; Luke Zettlemoyer; Regina Barzilay.
Reinforcement Learning for Mapping Instructions to Actions
Situated language: Best paper award. Good work goes beyond studying language in isolation. Reinforcement results sound incredibly good, number of features pretty small, how much prior info did they exactly use?
P09-1011 [bib]: Percy Liang; Michael Jordan; Dan Klein
Learning Semantic Correspondences with Less Supervision
Semantic representations: Learn semantic mappings in the domains of weather, robocup sportscasting, and NFL recaps when it is not clear what record and what field the text is referring to.
P09-1009 [bib]: Benjamin Snyder; Tahira Naseem; Regina Barzilay
Unsupervised Multilingual Grammar Induction
Syntax: A candidate constituent in one language may be split in another, preventing wrong rules from being learned.
P09-1024 [bib]: Christina Sauper; Regina Barzilay
Automatically Generating Wikipedia Articles: A Structure-Aware Approach
Summarization: I did not know summarization consists of cutting and pasting existing text.
P09-1025 [bib]: Neil McIntyre; Mirella Lapata
Learning to Tell Tales: A Data-driven Approach to Story Generation
Schemas: Learning a model of fairy tales to generate new ones. Nice idea but resulting stories not so good. Better models possible.
P09-1034 [bib]: Sebastian Pado; Michel Galley; Dan Jurafsky; Christopher D. Manning
Robust Machine Translation Evaluation with Entailment Features
MT: Compared to human judgments, Meteor does best (significantly better than BLEU) among shallow evaluation metrics. Using RTE to see if the produced translation is an entailment or paraphrase of the reference does better.
P09-1039 [bib]: Andre Martins; Noah Smith; Eric Xing
Concise Integer Linear Programming Formulations for Dependency Parsing
Syntax: Best paper award.
P09-1040 [bib]: Joakim Nivre
Non-Projective Dependency Parsing in Expected Linear Time
Syntax: By adding one more operation that swaps tokens to the shift-reduce parser, generation of nonprojective parses becomes possible.
P09-1041 [bib]: Gregory Druck; Gideon Mann; Andrew McCallum
Semi-supervised Learning of Dependency Parsers using Generalized Expectation Criteria
Syntax: Instead of labeled data, use expectation constraints in training parser.
P09-1042 [bib]: Kuzman Ganchev; Jennifer Gillenwater; Ben Taskar
Dependency Grammar Induction via Bitext Projection Constraints
Syntax: Similar to above, but uses bitext constraints.
P09-1057 [bib]: Sujith Ravi; Kevin Knight
Minimized Models for Unsupervised Part-of-Speech Tagging
Syntax: Best paper award.
P09-1068 [bib]: Nathanael Chambers; Dan Jurafsky
Unsupervised Learning of Narrative Schemas and their Participants
Schemas: very nice work modeling structure of NYT stories. Could be improved by focusing on a particular genre and introducing narrative ordering to model (apparently time ordering is really difficult).
P09-1070 [bib]: Joseph Reisinger; Marius Pasca
Latent Variable Models of Concept-Attribute Attachment
SemRel: unsupervised learning of concept clusters and attributes for each cluster from text.
P09-1072 [bib]: Kai-min K. Chang; Vladimir L. Cherkassky; Tom M. Mitchell; Marcel Adam Just
Quantitative modeling of the neural representation of adjective-noun phrases to account for fMRI activation
Brain: continuing the work on brain imaging. Some success in guessing which adj-noun pair the subject is thinking of. Better questions can be asked.
P09-2062 [bib]: Chris Biemann; Monojit Choudhury; Animesh Mukherjee
Syntax is from Mars while Semantics from Venus! Insights from Spectral Analysis of Distributional Similarity Networks
WSD: Qualitative differences between distributional similarity networks for semantics and syntax. Does it say anything about word meaning representation?
P09-2059 [bib]: Gumwon Hong; Seung-Wook Lee; Hae-Chang Rim
Bridging Morpho-Syntactic Gap between Source and Target Sentences for English-Korean Statistical Machine Translation
MT: Problems similar to Turkish. Collins '05 proposed reordering. Lee '06 removed useless function words. Hong inserts pseudo-words to translate to Korean morphemes.
P09-2069 [bib]: Haşim Sak; Tunga Güngör; Murat Saraçlar
A Stochastic Finite-State Morphological Parser for Turkish
Mor: A probabilistic generative model for Turkish words.
P09-1076 [bib]: Bonnie Webber
Genre distinctions for discourse in the Penn TreeBank
Invited talk - Discourse: topics seem relevant to Schema learning, should find a good tutorial.
P09-1087 [bib]: Michel Galley; Christopher D. Manning
Quadratic-Time Dependency Parsing for Machine Translation
Syntax: nonprojective parsing tying each word to its most likely head. Why did this not work when I tried it in CoNLL? Gives O(n^2). Could you adapt Nivre for linear time? Unsupervised parsing? Using a dependency LM as a feature.
P09-1088 [bib]: Phil Blunsom; Trevor Cohn; Chris Dyer; Miles Osborne
A Gibbs Sampler for Phrasal Synchronous Grammar Induction
MT: Bayesian magic. Look into SCFGs. Generates its own word alignment. Works better on non-monotonic language pairs, monotonic ones difficult to improve on.
P09-1089 [bib]: Shachar Mirkin; Lucia Specia; Nicola Cancedda; Ido Dagan; Marc Dymetman; Idan Szpektor
Source-Language Entailment Modeling for Translating Unknown Terms
MT: Generate paraphrases or entailments for unknown words using RTE.
P09-1090 [bib]: Ananthakrishnan Ramanathan; Hansraj Choudhary; Avishek Ghosh; Pushpak Bhattacharyya
Case markers and Morphology: Addressing the crux of the fluency problem in English-Hindi SMT
MT: Reordering and factored model. Fluency and adequacy manually evaluated in addition to BLEU.
P09-1108 [bib]: Adam Pauls; Dan Klein
K-Best A* Parsing
Syntax: Best paper award.
P09-1104 [bib]: Aria Haghighi; John Blitzer; John DeNero; Dan Klein
Better Word Alignments with Supervised ITG Models
MT: Check if they have code available. Claim 1.1 BLEU improvement.
P09-1105 [bib]: Fei Huang
Confidence Measure for Word Alignment
MT: Measure confidence based on posterior probability, improve alignments.
P09-1113 [bib]: Mike Mintz; Steven Bills; Rion Snow; Daniel Jurafsky
Distant supervision for relation extraction without labeled data
SemRel: Unsupervised method.
P09-1116 [bib]: Dekang Lin; Xiaoyun Wu
Phrase Clustering for Discriminative Learning
WSD: cluster phrases instead of words. Much less ambiguous, so pure context. Use different size clusters together, let the learning algorithm pick. Similar to hierarchical. Improves NER and query classification. Any application where clustering words useful because of sparsity. Clusters derived from 700B web data. Are the clusters available?
P09-1117 [bib]: Katrin Tomanek; Udo Hahn
Semi-Supervised Active Learning for Sequence Labeling
ML: Self learning does not work because the instances with most confidence are not the useful ones. Active learning asks for labels of instances with least confidence. Boosting effect?
D09-1030 [bib]: Chris Callison-Burch
Fast, Cheap, and Creative: Evaluating Translation Quality Using Amazon’s Mechanical Turk
MT: This article has one answer to the BLEU upper bound question among other things. A graph in the paper shows that professional humans still get higher BLEU compared to SMT systems (although this is using 10 reference translations). They mention Google MT got higher BLEU but probably the test set was used in training. Still gives relative performances. Also, amazing things apparently can be done with Amazon Mechanical Turk. Should use them to judge Turkish alignment quality.
D09-1045 [bib]: Jeff Mitchell; Mirella Lapata
Language Models Based on Semantic Composition
LM: Using a simple VSM model for semantics gives a small improvement over trigrams.
W09-2504 [bib]: Idan Szpektor; Ido Dagan
Augmenting WordNet-based Inference with Argument Mapping
RTE: Some lexical substitutions require other words to be shuffled. Automatic learning of shuffling rules using DIRT.
W09-2506 [bib]: Stefan Thater; Georgiana Dinu; Manfred Pinkal
Ranking Paraphrases in Context
WSD: Using lexsub dataset. No dictionary (I think). VSM semantic representation. Check Mitchell&Lapata, Erk&Pado for prior work.
W09-2507 [bib]: Kirk Roberts
Building an Annotated Textual Inference Corpus for Motion and Space
W09-2510 [bib]: David Clausen; Christopher D. Manning
Presupposed Content and Entailments in Natural Language Inference
RTE: Example: "Mary lied about buying a car" -> Mary did not buy a car. "Mary regretted buying a car" -> Mary bought a car. "Mary thought about buying a car" -> Uncertain. Karttunen 1975 presupposition projection. Check out the NatLog system (natural logic).
D09-1058 [bib]: Jun Suzuki; Hideki Isozaki; Xavier Carreras; Michael Collins
An Empirical Study of Semi-supervised Structured Conditional Models for Dependency Parsing
Syntax: Take a look at earlier model in Suzuki, ACL'08. What is with the q function? Other work building on McDonald: Carreras '07, Koo '08. MIRA training.
D09-1059 [bib]: Richard Johansson
Statistical Bistratal Dependency Parsing
Syntax: Trying simultaneous parsing/SRL with joint probabilistic model.
D09-1060 [bib]: Wenliang Chen; Jun’ichi Kazama; Kiyotaka Uchimoto; Kentaro Torisawa
Improving Dependency Parsing with Subtrees from Auto-Parsed Data
Syntax: Self training, SSL for parser. Improvement, even though confidence in unlabeled text not well represented. Best system gets 46% of the sentences completely correct (unlabeled).
D09-1065 [bib]: Brian Murphy; Marco Baroni; Massimo Poesio
EEG responds to conceptual stimuli and corpus semantics
Brain: Using EEG instead of fMRI in Mitchell-style work. Why doesn't anybody try: (1) verbs, (2) grammaticality, (3) lie/truth, (4) agree/disagree, (5) complex grammatical constructs.
D09-1070 [bib]: Taesun Moon; Katrin Erk; Jason Baldridge
Unsupervised morphological segmentation and clustering with document boundaries
Mor: help unsupervised morphology by assuming same stem more likely to appear in same document.
D09-1071 [bib]: Jurgen Van Gael; Andreas Vlachos; Zoubin Ghahramani
The infinite HMM for unsupervised PoS tagging
Syntax: Use npbayes to pick the number of HMM states. Directly use learnt HMM states rather than trying to map them to existing tagset.
D09-1072 [bib]: Qiuye Zhao; Mitch Marcus
A Simple Unsupervised Learner for POS Disambiguation Rules Given Only a Minimal Lexicon
D09-1085 [bib]: Laura Rimell; Stephen Clark; Mark Steedman
Unbounded Dependency Recovery for Parser Evaluation
Syntax: same motivation as Önder's work. Focuses on a particular construct difficult for parsers (accuracy < 50%) and builds a test set. Same problem in many fields (infrequent senses ignored in WSD, rare issues ignored in RTE/Semantics, rare constructs ignored in syntax, etc.)
D09-1086 [bib]: David A. Smith; Jason Eisner
Parser Adaptation and Projection with Quasi-Synchronous Grammar Features
Syntax: learn mapping between parsers with different output styles (e.g. how they connect auxiliary verbs).
D09-1087 [bib]: Zhongqiang Huang; Mary Harper
Self-Training PCFG Grammars with Latent Annotations Across Languages
Syntax.
D09-1088 [bib]: Reut Tsarfaty; Khalil Sima’an; Remko Scha
An Alternative to Head-Driven Approaches for Parsing a (Relatively) Free Word-Order Language
Syntax: Separate ordering information to get better coefficient stats in parser learning. Many issues same as Turkish.
D09-1105 [bib]: Roy Tromble; Jason Eisner
Learning Linear Ordering Problems for Better Translation
MT: Approximate solution to reordering problem for MT. Shows improvement. Does not make use of parse tree.
D09-1106 [bib]: Yang Liu; Tian Xia; Xinyan Xiao; Qun Liu
Weighted Alignment Matrices for Statistical Machine Translation
MT: Compact representation for an alignment distribution. Similar to forest for trees or lattice for segmentations.
D09-1107 [bib]: Matti Kääriäinen
Sinuhe – Statistical Machine Translation using a Globally Trained Conditional Exponential Family Translation Model
MT: New MT engine based on structured learning. Faster than Moses with better TM scores, but overall lower BLEU.
D09-1108 [bib]: Hui Zhang; Min Zhang; Haizhou Li; Chew Lim Tan
Fast Translation Rule Matching for Syntax-based Statistical Machine Translation
MT: Compact representation with fast search for packed forests.
Full post... Related link
August 04, 2009
Modeling Morphologically Rich Languages Using Split Words and Unstructured Dependencies
Abstract: We experiment with splitting words into their stem and suffix components for modeling morphologically rich languages. We show that using a morphological analyzer and disambiguator results in a significant perplexity reduction in Turkish. We present flexible n-gram models, Flex-Grams, which assume that the n−1 tokens that determine the probability of a given token can be chosen anywhere in the sentence rather than restricted to the preceding n−1 positions. Our final model achieves a 27% perplexity reduction compared to the standard n-gram model.
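A minimal sketch of the flex-gram idea, as a toy simplification rather than the paper's model: choose the pair of context offsets, anywhere in the sentence instead of just the preceding positions, that maximizes training-set log-likelihood under an unsmoothed MLE.

import math
from collections import Counter

sents = [s.split() for s in [
    "he read the book", "she read the paper", "he wrote the book",
]]

def events(offsets):
    # (context, token) pairs where the context is drawn from the given offsets
    ev = []
    for s in sents:
        for i, w in enumerate(s):
            ctx = tuple(s[i + o] if 0 <= i + o < len(s) else "<pad>"
                        for o in offsets)
            ev.append((ctx, w))
    return ev

def loglik(offsets):
    # unsmoothed MLE log-likelihood of the training data for these offsets
    ev = events(offsets)
    joint = Counter(ev)
    ctx = Counter(c for c, _ in ev)
    return sum(math.log(joint[(c, w)] / ctx[c]) for c, w in ev)

candidates = [(-2, -1), (-1, 1), (-2, 1)]  # standard trigram vs. flexible picks
print(max(candidates, key=loglik))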
Full post... Related link
July 22, 2009
Morphological cues vs. number of nominals in learning verb types in Turkish: The syntactic bootstrapping mechanism revisited
Abstract: The syntactic bootstrapping mechanism of verb learning was evaluated against child-directed speech in Turkish, a language with rich morphology, nominal ellipsis and free word order. Machine-learning algorithms were run on transcribed caregiver speech directed to two Turkish learners (one hour every two weeks between the ages of 0;9 and 1;10) of different socioeconomic backgrounds. We found that the number of nominals in child-directed utterances plays a small, but significant, role in classifying transitive and intransitive verbs. Further, we found that accusative morphology on the noun is a strong cue in clustering verb types. We also found that verbal morphology (past tense and bareness of verbs) is useful in distinguishing between different subtypes of intransitive verbs. These results suggest that syntactic bootstrapping mechanisms should be extended to include morphological cues to verb learning in morphologically rich languages.
Keywords: Language development; Turkish; Child-directed speech; Syntactic bootstrapping; Morphology
Full post... Related link
April 29, 2009
Natural Language Processing summer course at Sabanci University
- Overview of NLP (2 hours)
- NLP Applications
- Processing pipeline: Basic steps and how they feed into each other and how they are used by applications
- Morphological Analysis (could be skipped or shortened) (2 hours)
- Introduction to Statistical Models, n-gram language modeling (2 hours)
- Applications to simple sequence problems (tagging English and/or a deasciifier)
- Morphological Disambiguation (applications to Turkish)
- HMMs (formal treatment (forward-backward + Viterbi) + applications to tagging; see the Viterbi sketch after this outline) (2-3 hours)
- CFGs and Probabilistic CFGs (3-4 hours)
- Inside-outside algorithm for training PCFGs
- Parsing with PCFGs
- Machine Translation (MT) (3-4 Hours)
- Brief overview of Classical Symbolic MT
- Statistical Machine Translation
- Word-based Models
- Phrase-based Models
- Syntax-based models
- Dealing with Morphology in SMT
- Elements of Information Theory / Advanced Language Modeling and Applications
- Entropy/Perplexity/Mutual Information
- Noisy Channel Model
- Sequence classification / HMM
- Sample classification / Naive Bayes
- Smoothing
- Adaptation
- Named Entity Extraction (NE)
- Using HMM for NE
- Using CRF for NE
- Using Boosting/MaxEnt/SVM for NE
- Spoken Language Understanding (SLU) as Template Filling
- HMM approaches (AT&T vs BBN)
- Hidden Vector State Models
- Latent Semantic Analysis
- Sample-classification based (Boosting/MaxEnt/Decision Trees)
- Summarization
- Greedy Algorithms, MMR
- TextRank/LexRank
- Classification based extractive summarization
- Global Models for Summarization: Linear Programming approaches
- Question Answering
- Spoken Dialog Systems and Dialog Management (DM)
- Dialog Systems
- DM
- Finite State Models
- Agent Models
- Reinforcement Learning
- Topic Classification
- Discriminative classification: SVM/Boosting
- Generative classification: language model, document similarity, vector-space-model
- Feature selection/transformation (LDA)
- Latent semantic indexing
- SLU as Intent Determination
- Semantic Role Labeling
- Robustness to ASR
- Topic Clustering
- K-Means
- Top-down vs. bottom-up
- Topic Segmentation
- HMM
- TextTiling
- Markov Chains
- Sentence Segmentation
- HMM
- CRF
- Hybrid
- Active Learning/Semi-Supervised Learning/Unsupervised Learning/Model Adaptation/Robustness
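As referenced in the HMM unit above, a minimal Viterbi tagging sketch; the two-tag model and all probabilities are made up for illustration.

import numpy as np

states = ["Noun", "Verb"]
start = np.log([0.6, 0.4])                 # P(first tag)
trans = np.log([[0.3, 0.7],                # P(next tag | Noun)
                [0.8, 0.2]])               # P(next tag | Verb)
vocab = {"flies": 0, "like": 1, "flowers": 2}
emit = np.log([[0.4, 0.1, 0.5],            # P(word | Noun)
               [0.3, 0.6, 0.1]])           # P(word | Verb)

def viterbi(words):
    obs = [vocab[w] for w in words]
    delta = start + emit[:, obs[0]]        # best log-prob ending in each state
    back = []
    for o in obs[1:]:
        scores = delta[:, None] + trans + emit[:, o]
        back.append(scores.argmax(axis=0))  # best predecessor per state
        delta = scores.max(axis=0)
    tags = [int(delta.argmax())]
    for bp in reversed(back):               # follow back-pointers
        tags.append(int(bp[tags[-1]]))
    return [states[t] for t in reversed(tags)]

print(viterbi("flies like flowers".split()))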
Full post...
April 04, 2009
Dennett in Istanbul
Dennett's 1995 book Darwin's Dangerous Idea argues that natural selection is a blind and algorithmic process sufficiently powerful to account for the generation and evolution of life, minds, and societies. I am looking forward to his talk, an earlier version of which I had seen at MIT when the book first came out.
These days he has taken on religious fundamentalism (see Breaking the Spell). One of his proposed solutions to fight ignorance and intolerance is to teach children about ALL of the world's religions instead of brainwashing them with a single system of thought, or leaving them vulnerable by not teaching them about religion at all.
If you have not had the pleasure of listening to Dennett before, I recommend his many recorded talks available at the following websites: TED talks, Wikipedia, Reitstoen.com, and his homepage.
Dennett is quite popular in the Artificial Intelligence / Cognitive Science community due to his refreshingly rational explanations of perplexing issues like consciousness and free will. You may not agree with the specifics of his theories, but at least he makes a convincing case that there is no need for "magic dust" to explain these natural phenomena. Talking about interesting psychological results in Sweet Dreams he says:
I often discover skeptics who are quite confident that I am simply making these facts up! But we must learn to treat such difficulties as measures of our frail powers of imagination, not insights into impossibility.
Yet some of his adversaries take the failure of their imagination to conceive a physical model of the mind as evidence of its impossibility: they succumb to mysterianism, take comfort in the assumption that some questions will never be answered, or look for magic dust in the depths of quantum theory. Dennett thinks one day we will find a psychological explanation for their defect.
Many of Dennett's books that I have mentioned in this blog are on the mysteries of the mind. If you get sleepy when you read philosophy, then I especially recommend The Mind's I, which is one of my favorite collections of philosophical fiction. Here is a list of his books, which I hope to turn into an annotated bibliography at some point:
Content and Consciousness (1969)
Brainstorms: Philosophical Essays on Mind and Psychology (1978)
The Mind's I: Fantasies and Reflections on Self and Soul (1981)
Elbow Room: The Varieties of Free Will Worth Wanting (1984)
The Intentional Stance (1987)
Consciousness Explained (1991)
Darwin's Dangerous Idea: Evolution and the Meanings of Life (1995)
Kinds of Minds: Toward an Understanding of Consciousness (1996)
Brainchildren: Essays on Designing Minds (1998)
Freedom Evolves (2003)
Sweet Dreams: Philosophical Obstacles to a Science of Consciousness (2005)
Breaking the Spell: Religion as a Natural Phenomenon (2006)
Full post... Related link
March 22, 2009
Where does mathematics come from
Especially in children, for whom the gap between the abstract and the concrete has not yet widened, these connections are extremely clear. For example, Lakoff and Núñez claim that basic arithmetic rests on four mental operations: collecting distinct objects together, constructing an object from its parts, measuring a length with a ruler, and moving along a path. Borovik's examples of the four arithmetic operations emphasize that children, who cannot yet abstract the concept of number away from the objects they count, are actually wrestling with a problem much harder than arithmetic. For example, while we are told that apples and pears cannot be added, dividing 10 apples among 5 people is not considered against the rules in school books. And when we calculate that 10 apples handed out two at a time will suffice for five people (10/2=5), what are we to make of dividing apples on the left-hand side and getting people out on the right? You can read similar examples, and the theories about how children conceptualize them, in the two book drafts on Borovik's web page.
The last chapter of Lakoff and Núñez's book is devoted to a single example: Euler's famous identity e^πi + 1 = 0. The authors rightly point out that this equation, which brings together the most famous numbers in mathematics, is not an arbitrary numerical identity like 3^6=729. Each constant in Euler's identity has a deep meaning supported by a long chain of metaphors. A student who loses this chain at some point may understand the proof of the identity but not what it means. It is hard for me to do justice here to this chain of metaphors, which the authors patiently unravel over 70 pages. But if some of the questions below puzzle you too, I recommend reading it:
1. If a^b means multiplying the number a by itself b times, what does it mean to multiply the number e by itself pi times?
2. Where does the number i come from, why is it imaginary, and what does it mean to multiply a number by i or to take its i-th power?
3. Why does the number e have the value 2.718281828459045...?
4. Wasn't the number pi something to do with circles? What does Euler's identity have to do with circles?
5. How can an expression containing two infinite decimals like e and pi give us a result as simple as -1?
6. Why do the numbers e, pi, i, 1 and 0 keep coming up in mathematics while most numbers (192563948.98542129, for example) hardly ever appear? What are the ideas and concepts that these numbers symbolize?
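For what it is worth, the formal answer to question 5 is a one-line application of Euler's formula at θ = π; the book's point is precisely that this calculation, by itself, is not the meaning. In LaTeX form:

e^{i\theta} = \cos\theta + i\sin\theta
\qquad\Longrightarrow\qquad
e^{i\pi} = \cos\pi + i\sin\pi = -1
\qquad\Longrightarrow\qquad
e^{i\pi} + 1 = 0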
Full post... Related link
March 03, 2009
Classification of semantic relations between nominals
Abstract: The NLP community has shown a renewed interest in deeper semantic analyses, among them automatic recognition of semantic relations in text. We present the development and evaluation of a semantic analysis task: automatic recognition of relations between pairs of nominals in a sentence. The task was part of SemEval-2007, the fourth edition of the semantic evaluation event previously known as SensEval. Apart from the observations we have made, the long-lasting effect of this task may be a framework for comparing approaches to the task. We introduce the problem of recognizing relations between nominals, and in particular the process of drafting and refining the definitions of the semantic relations. We show how we created the training and test data, list and briefly describe the 15 participating systems, discuss the results, and conclude with the lessons learned in the course of this exercise.
Full post... Related link
March 01, 2009
Incandescence by Greg Egan
For example, when an object is placed north or south of the center, it is pulled towards the center. Here is what an object left to free fall from the north of the center looks like to an observer inside the asteroid:
However from a different perspective, one can see that the object is actually moving around its own slightly tilted orbit around the star. And if the object is supported by a floor and is prevented from moving toward the center, it will experience a gravitational pull toward the center -- thus an example of the maxim: "Weight is the difference between preferred and actual motion."
When two objects are placed toward and away from the sun with respect to the center of the asteroid, they feel a pull away from the center:
Again, from a different perspective, this can be seen as the objects trying to follow their own natural orbits, and weight being the difference between preferred and actual motion:
As a final example, suppose an object at the center is given a radial push. Unlike the objects in the previous example, this object will not accelerate away from the center, but keep cycling in an elliptical course as seen from inside the asteroid:
The standard explanation is in terms of the Coriolis force due to the object's motion balancing the tidal acceleration. From the orbital perspective, our radial push places the object in an elliptical orbit crossing the circular orbit of the asteroid:
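All three behaviors fall out of a single set of linearized equations of motion. Below is a minimal sketch, mine rather than Egan's, that integrates Hill's (Clohessy-Wiltshire) equations in a frame rotating with the asteroid's circular orbit; units, magnitudes, and variable names are arbitrary illustrative choices.

import numpy as np

def hill_accel(pos, vel, n):
    # x: radial, y: along-track, z: normal (north/south of the orbital plane)
    x, y, z = pos
    vx, vy, vz = vel
    ax = 2 * n * vy + 3 * n**2 * x    # Coriolis + tidal stretch
    ay = -2 * n * vx                  # Coriolis
    az = -n**2 * z                    # restoring pull toward the plane
    return np.array([ax, ay, az])

def simulate(pos, vel, n=1.0, dt=1e-3, steps=20000):
    pos, vel = np.array(pos, float), np.array(vel, float)
    path = []
    for _ in range(steps):
        vel += hill_accel(pos, vel, n) * dt   # semi-implicit Euler step
        pos += vel * dt
        path.append(pos.copy())
    return np.array(path)

# north of center: oscillates back through the center (pull toward center)
north = simulate([0, 0, 0.1], [0, 0, 0])
# displaced toward/away from the star: accelerates away from the center
radial = simulate([0.1, 0, 0], [0, 0, 0])
# radial push from the center: traces a closed elliptical epicycle
pushed = simulate([0, 0, 0], [0.1, 0, 0])
print(radial[-1][0], pushed[:, 0].max())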
Starting with these and many other interesting observations, Zak and his friends uncover the secrets of their universe before ever stepping outside their closed world and ever seeing their sun, their orbit, or fixed stars. The book is a treat for aficionados of gedankenexperiments and qualitative physics.
P.S. I'd like to thank Greg Egan for allowing the use of these animations from his page, which hosts a treasure of information about science and science fiction.
Full post... Related link
February 16, 2009
Ergun's English-Turkish machine translation notes
Turkish-English parallel text from Kemal Oflazer, Statistical Machine Translation into a Morphologically Complex Language, Invited Paper, In Proceedings of CICLING 2008 -- Conference on Intelligent Text Processing and Computational Linguistics, Haifa, Israel, February 2008 (lowercased and converted to utf8):
en-tr.zip
The Turkish part of the dataset is "selectively split", i.e. some suffixes are separated from their stems, some are not.
Here is the Turkish text to develop the language model:
lm.tr.gz
The directions for the Moses baseline system:
http://www.statmt.org/wmt09/baseline.html
The link for the scripts:
http://www.statmt.org/wmt08/scripts.tgz
Be careful to put the stems and suffixes back together before computing the BLEU score. Splitting them artificially increases the score.
To compute the score, do not use the mteval scorer at http://www.statmt.org/wmt09/baseline.html, because it retokenizes the input and splits all the '+' characters that are used to denote suffixes. Either use the multi-bleu perl script, or comment out the language-dependent part of NormalizeText in mteval.
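As an example, a minimal sketch of the rejoining step, assuming split suffixes carry a leading '+' marker; the exact marking convention and the file names are assumptions that should be checked against the data:

def rejoin(tokens):
    # glue '+'-marked suffix tokens back onto the preceding stem
    words = []
    for tok in tokens:
        if tok.startswith("+") and words:
            words[-1] += tok[1:]
        else:
            words.append(tok)
    return words

with open("hyp.split.tr") as f, open("hyp.joined.tr", "w") as out:
    for line in f:
        out.write(" ".join(rejoin(line.split())) + "\n")
# then score hyp.joined.tr against a rejoined reference with multi-bleu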
For Turkish dictionaries and other resources please see Turkish language resources.
Full post...
February 11, 2009
Turkish morphology presentation
Full post... Related link
February 02, 2009
English-Turkish automatic translation
...added. If you want to try it: http://translate.google.com
I believe this technology is important for giving the part of the Turkish population that does not speak English access to the accumulated knowledge on the Internet, and I have been working on it myself for a few years. One of the biggest obstacles is the need for a large amount of English-Turkish parallel text that can be used for research (roughly 100 million words = 1000 books). To collect this text I spent a year or two on the phone with government agencies, international organizations, publishers, news agencies, law and translation firms, university departments, and so on; when I could not get a positive answer, I got tired and gave up. The sad part is that the big obstacle I ran into was not a legal issue such as copyright or intellectual property, but people's indifference. For now my students and I are working with a toy system developed from one or two million words of text. I would have liked to be the one who wrote Google's system. But the race is not over yet; as an example of the system's quality, I give below its translation of this (originally Turkish) paragraph...
This technology does not speak English in the Turkish population
on the Internet access to knowledge is important to think and a
few years, I am working on. One of the biggest obstacles can be
used for research purposes in the amount of installed
English-English parallel text is needed (about 100 million word =
1000 books). This text to be able to collect a two-year phone and
the state institutions, international organizations, publishing,
news agency, law and translation companies, universities sections
and with a positive answer so do not get tired and I've given
up. The major obstacle faced by the unfortunate job of
broadcasting the The right to legal issues such as property, not
the ideas, people is indifference. Currently, one of two million
words of text I'm dealing with a system developed for students
with toys. I would like it to Google's system. But the contest
yet not finished, as an example of the quality of the system of
this paragraph I give a translation...
Full post... Related link
January 28, 2009
Neşe Aral, M.S. 2009
M.S. Thesis: Dynamics of Gene Regulatory Cell Cycle Network in Saccharomyces Cerevisiae. Koç University Department of Physics, January 2009. (Download PDF).
Abstract:
In this thesis, the genetic regulatory dynamics within the cell cycle network of the yeast Saccharomyces cerevisiae is examined. As the mathematical approach, an asynchronously updated Boolean network is used to model the time evolution of the expression level of genes taking part in the regulation of the cell cycle. The attractors of the model’s dynamics and their stability are investigated by means of a stochastic transition matrix. It is shown that the cell cycle network has unusual dynamical properties when compared with similar random networks. Furthermore, an entropy measure is employed to monitor the sequential evolution of the system. It is observed that the experimentally identified cell cycle phases G1, S, G2 and M correspond to the stages of the network where the entropy goes through a local extremum.
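To give a flavor of the method, here is a toy sketch, not the thesis code or the actual yeast network: an asynchronously updated Boolean network with threshold dynamics, and an entropy measure over the distribution of visited states.

import numpy as np

# toy 3-gene interaction matrix; W[i, j] is the effect of gene j on gene i
W = np.array([[ 0, -1,  1],
              [ 1,  0, -1],
              [-1,  1,  0]])

def update_gene(state, i):
    s = W[i] @ state
    if s > 0:
        return 1
    if s < 0:
        return 0
    return state[i]                # zero net input: keep current level

def async_step(state, rng):
    i = rng.integers(len(state))   # asynchronous: update one random gene
    new = state.copy()
    new[i] = update_gene(state, i)
    return new

def entropy(counts):
    p = counts / counts.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

rng = np.random.default_rng(0)
state = np.array([1, 0, 0])
visits = np.zeros(2 ** len(state))   # occupation counts over all states
for _ in range(10000):
    state = async_step(state, rng)
    visits[int("".join(map(str, state)), 2)] += 1
print("entropy over visited states:", entropy(visits))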
Full post... Related link