
arXiv:1905.06319 [cs.CL]

Exact Hard Monotonic Attention for Character-Level Transduction

Shijie Wu, Ryan Cotterell

Published 2019-05-15, Version 1

Many common character-level, string-to-string transduction tasks, e.g., grapheme-to-phoneme conversion and morphological inflection, consist almost exclusively of monotonic transduction. Neural sequence-to-sequence models with soft attention, which are non-monotonic, often outperform popular monotonic models. In this work, we ask the following question: Is monotonicity really a helpful inductive bias for these tasks? We develop a hard attention sequence-to-sequence model that enforces strict monotonicity and learns the alignment jointly while learning to transduce. With the help of dynamic programming, we are able to compute the exact marginalization over all monotonic alignments. Our models achieve state-of-the-art performance on morphological inflection. Furthermore, we find strong performance on two other character-level transduction tasks.
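As an illustration of the dynamic program mentioned in the abstract, the following minimal NumPy sketch marginalizes exactly over all monotonic hard alignments with a forward-style recursion. The emission, transition, and initial distributions (emit, trans, init) are hypothetical toy values standing in for what an encoder-decoder model would compute; the point is only that restricting transitions to non-decreasing source positions lets the sum over exponentially many alignments be computed exactly in O(J * I^2) time rather than by enumeration.

    # A minimal sketch (NumPy, hypothetical toy probabilities) of a forward-style
    # dynamic program that exactly marginalizes over monotonic hard alignments.
    #   emit[j, i]  = log p(y_j | a_j = i, x, y_<j)   -- emission scores (assumed given)
    #   trans[i, k] = log p(a_j = k | a_{j-1} = i)    -- transitions; zero mass for k < i
    #   init[i]     = log p(a_1 = i)                  -- first alignment position
    import numpy as np

    def monotonic_marginal_log_likelihood(emit: np.ndarray, trans: np.ndarray, init: np.ndarray) -> float:
        """Return log p(y | x), summing p(y, a | x) over all monotonic alignments a."""
        J, I = emit.shape                      # J target characters, I source characters
        alpha = init + emit[0]                 # alpha[i] = log p(y_1, a_1 = i)
        for j in range(1, J):
            # alpha'[k] = emit[j, k] + logsumexp_i( alpha[i] + trans[i, k] )
            alpha = emit[j] + np.logaddexp.reduce(alpha[:, None] + trans, axis=0)
        return np.logaddexp.reduce(alpha)      # marginalize out the final alignment position

    # Toy example with I = 4 source and J = 3 target characters (all values hypothetical).
    rng = np.random.default_rng(0)
    I, J = 4, 3
    emit = np.log(rng.dirichlet(np.ones(I), size=J))          # fake per-step emission distributions
    trans = np.full((I, I), -np.inf)
    for i in range(I):                                        # monotonicity: only k >= i is allowed
        trans[i, i:] = np.log(rng.dirichlet(np.ones(I - i)))
    init = np.log(rng.dirichlet(np.ones(I)))                  # distribution over the first alignment
    print(monotonic_marginal_log_likelihood(emit, trans, init))

In the actual model, the emission scores would come from the decoder conditioned on the attended source character and the transitions from a (learned or fixed) monotonic alignment distribution; the same recursion then yields the exact log-likelihood used for training.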
