{ "id": "1910.02718", "version": "v1", "published": "2019-10-07T10:52:14.000Z", "updated": "2019-10-07T10:52:14.000Z", "title": "Continual Learning in Neural Networks", "authors": [ "Rahaf Aljundi" ], "comment": "PhD Thesis, Supervisor: Tinne Tuytelaars", "categories": [ "cs.LG", "cs.CV", "stat.ML" ], "abstract": "Artificial neural networks have exceeded human-level performance on several individual tasks (e.g. voice recognition, object recognition, and video games). However, such success remains modest compared to human intelligence, which can learn and perform an unlimited number of tasks. Humans' ability to learn and accumulate knowledge over their lifetime is an essential aspect of their intelligence. Continual machine learning aims at a higher level of machine intelligence by providing artificial agents with the ability to learn online from a non-stationary and never-ending stream of data. A key component of such a never-ending learning process is overcoming the catastrophic forgetting of previously seen data, a problem that neural networks are well known to suffer from. The work described in this thesis investigates continual learning and solutions that mitigate the forgetting phenomenon in neural networks. To approach the continual learning problem, we first assume a task-incremental setting, where tasks are received one at a time and data from previous tasks are not stored. Since the task-incremental setting cannot be assumed in all continual learning scenarios, we also study the more general online continual setting, considering an infinite stream of data drawn from a non-stationary distribution with a supervised or self-supervised training signal. The methods proposed in this thesis tackle important aspects of continual learning and were evaluated on different benchmarks and over various learning sequences. They advance the state of the art in continual learning, and challenges for bringing continual learning into application are critically identified.", "revisions": [ { "version": "v1", "updated": "2019-10-07T10:52:14.000Z" } ], "analyses": { "keywords": [ "continual learning", "task incremental setting", "general online continual", "artificial neural networks", "success remains modest" ], "tags": [ "dissertation" ], "note": { "typesetting": "TeX", "pages": 0, "language": "en", "license": "arXiv", "status": "editable" } } }