Deniz Yuret, Laura Rimell, Aydin Han. Language Resources and Evaluation. 2012.
Abstract
Parser Evaluation using Textual Entailments (PETE) is a shared task in
the SemEval-2010 Evaluation Exercises on Semantic Evaluation. The
task involves recognizing textual entailments based on syntactic
information alone. PETE introduces a new parser evaluation scheme
that is formalism independent, less prone to annotation error, and
focused on semantically relevant distinctions. This paper describes
the PETE task, gives an error analysis of the top-performing Cambridge
system, and introduces a standard entailment module that can be used
with any parser that outputs Stanford typed dependencies.