{ "id": "2210.01797", "version": "v1", "published": "2022-10-01T01:41:17.000Z", "updated": "2022-10-01T01:41:17.000Z", "title": "Ten Years after ImageNet: A 360° Perspective on AI", "authors": [ "Sanjay Chawla", "Preslav Nakov", "Ahmed Ali", "Wendy Hall", "Issa Khalil", "Xiaosong Ma", "Husrev Taha Sencar", "Ingmar Weber", "Michael Wooldridge", "Ting Yu" ], "categories": [ "cs.LG", "cs.AI", "cs.IR" ], "abstract": "It is ten years since neural networks made their spectacular comeback. Prompted by this anniversary, we take a holistic perspective on Artificial Intelligence (AI). Supervised Learning for cognitive tasks is effectively solved - provided we have enough high-quality labeled data. However, deep neural network models are not easily interpretable, and thus the debate between blackbox and whitebox modeling has come to the fore. The rise of attention networks, self-supervised learning, generative modeling, and graph neural networks has widened the application space of AI. Deep Learning has also propelled the return of reinforcement learning as a core building block of autonomous decision making systems. The possible harms made possible by new AI technologies have raised socio-technical issues such as transparency, fairness, and accountability. The dominance of AI by Big-Tech who control talent, computing resources, and most importantly, data may lead to an extreme AI divide. Failure to meet high expectations in high profile, and much heralded flagship projects like self-driving vehicles could trigger another AI winter.", "revisions": [ { "version": "v1", "updated": "2022-10-01T01:41:17.000Z" } ], "analyses": { "keywords": [ "deep neural network models", "perspective", "graph neural networks", "meet high expectations", "extreme ai divide" ], "note": { "typesetting": "TeX", "pages": 0, "language": "en", "license": "arXiv", "status": "editable" } } }