arXiv:2306.17089 [cs.LG]

Concept-Oriented Deep Learning with Large Language Models

Daniel T. Chang

Published 2023-06-29 (Version 1)

Large Language Models (LLMs) have been used successfully in many natural-language tasks and applications, including text generation and AI chatbots. They are also a promising new technology for concept-oriented deep learning (CODL). A prerequisite, however, is that LLMs understand concepts and ensure conceptual consistency. We discuss these issues in this paper, as well as major uses of LLMs for CODL, including concept extraction from text, concept graph extraction from text, and concept learning. Human knowledge consists of both symbolic (conceptual) knowledge and embodied (sensory) knowledge. Text-only LLMs, however, can represent only symbolic (conceptual) knowledge, whereas multimodal LLMs can represent the full range (conceptual and sensory) of human knowledge. We discuss conceptual understanding in visual-language LLMs, the most important class of multimodal LLMs, and their major uses for CODL, including concept extraction from images, concept graph extraction from images, and concept learning. While these uses of LLMs for CODL are valuable on their own, they are particularly valuable as part of LLM applications such as AI chatbots.
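As a concrete illustration of the abstract's first two uses, below is a minimal sketch of prompt-based concept extraction and concept graph extraction from text. The `complete` function is a hypothetical stand-in for any chat-completion API, and the prompt wording is an assumption for illustration, not the paper's exact method.

```python
import json

def complete(prompt: str) -> str:
    """Hypothetical placeholder for an LLM chat-completion call
    (e.g., an OpenAI- or Anthropic-style client); assumed to
    return the model's text reply."""
    raise NotImplementedError

def extract_concept_graph(text: str) -> dict:
    """Ask the LLM for the concepts in a text and the relations
    between them, returned as JSON so the reply can be parsed
    into a concept graph (nodes = concepts, edges = relations)."""
    prompt = (
        "Extract the key concepts from the text below, then list the "
        "relations between them.\n"
        'Reply with JSON only: {"concepts": [...], '
        '"relations": [[head, relation, tail], ...]}\n\n'
        f"Text: {text}"
    )
    return json.loads(complete(prompt))

# Hypothetical usage:
# graph = extract_concept_graph("A penguin is a flightless bird.")
# graph["concepts"]   e.g. ["penguin", "bird", "flightless"]
# graph["relations"]  e.g. [["penguin", "is-a", "bird"], ...]
```

Concept extraction alone would keep only the "concepts" list; requesting relations as well turns the same prompt into concept graph extraction.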

Related articles:
arXiv:2303.02206 [cs.LG] (Published 2023-03-03, updated 2023-08-23)
Domain Specific Question Answering Over Knowledge Graphs Using Logical Programming and Large Language Models
arXiv:2306.07567 [cs.LG] (Published 2023-06-13)
Large Language Models Sometimes Generate Purely Negatively-Reinforced Text
arXiv:2306.04634 [cs.LG] (Published 2023-06-07)
On the Reliability of Watermarks for Large Language Models