arXiv:2011.08272 [cs.CL]

NLPGym -- A toolkit for evaluating RL agents on Natural Language Processing Tasks

Rajkumar Ramamurthy, Rafet Sifa, Christian Bauckhage

Published 2020-11-16 (Version 1)

Reinforcement learning (RL) has recently shown impressive performance in complex game AI and robotics tasks. To a large extent, this is thanks to the availability of simulated environments such as OpenAI Gym, the Arcade Learning Environment, and Malmo, which allow agents to learn complex tasks through interaction with virtual environments. While RL is also increasingly applied to natural language processing (NLP), there are no simulated textual environments available for researchers to apply and consistently benchmark RL on NLP tasks. With the work reported here, we therefore release NLPGym, an open-source Python toolkit that provides interactive textual environments for standard NLP tasks such as sequence tagging, multi-label classification, and question answering. We also present experimental results for six tasks using different RL algorithms, which serve as baselines for further research. The toolkit is published at https://github.com/rajcscw/nlp-gym.
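
Since the toolkit exposes its tasks as interactive environments in the style of OpenAI Gym, an agent interacts with them through the usual reset/step loop. The sketch below illustrates such a loop with a random policy; the import path, environment class name, and constructor shown are illustrative assumptions, not the toolkit's confirmed API (see the repository linked above for the actual interface).

    # Minimal sketch of a Gym-style interaction loop with an NLPGym environment.
    # Assumption: the environment follows the standard Gym API (reset, step,
    # action_space). The class and module below are hypothetical placeholders.
    from nlp_gym.envs.question_answering.env import QAEnv  # hypothetical import

    env = QAEnv()               # hypothetical: a question-answering environment
    observation = env.reset()   # start a new episode (a new task sample)

    done = False
    total_reward = 0.0
    while not done:
        # A random policy serves as a trivial baseline agent here.
        action = env.action_space.sample()
        observation, reward, done, info = env.step(action)
        total_reward += reward

    print(f"Episode finished with total reward {total_reward}")

Because the interface is Gym-compatible, any off-the-shelf RL library that works with Gym environments can be used in place of the random policy.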

Comments: Accepted at Wordplay: When Language Meets Games Workshop @ NeurIPS 2020
Categories: cs.CL, cs.AI