{ "id": "2012.11816", "version": "v1", "published": "2020-12-22T03:41:16.000Z", "updated": "2020-12-22T03:41:16.000Z", "title": "Molecular CT: Unifying Geometry and Representation Learning for Molecules at Different Scales", "authors": [ "Jun Zhang", "Yaqiang Zhou", "Yao-Kun Lei", "Yi Isaac Yang", "Yi Qin Gao" ], "comment": "v1. 14 pages, 9 figures", "categories": [ "cs.LG", "cond-mat.soft" ], "abstract": "Deep learning is changing many areas of molecular physics, and it has shown great potential to deliver new solutions to challenging molecular modeling problems. Along with this trend arises an increasing demand for expressive and versatile neural network architectures that are compatible with molecular systems. A new deep neural network architecture, the Molecular Configuration Transformer (Molecular CT), is introduced for this purpose. Molecular CT is composed of a relation-aware encoder module and a computationally universal geometry learning unit; it can therefore account for the relational constraints between particles while remaining scalable to different particle numbers and invariant with respect to translational and rotational transforms. Its computational efficiency and universality make Molecular CT versatile for a variety of molecular learning scenarios and especially appealing for transferable representation learning across different molecular systems. As examples, we show that Molecular CT enables representation learning for molecular systems at different scales, and achieves comparable or improved results on common benchmarks using a lighter-weight structure than baseline models.", "revisions": [ { "version": "v1", "updated": "2020-12-22T03:41:16.000Z" } ], "analyses": { "keywords": [ "representation learning", "universal geometry learning unit", "unifying geometry", "ct enables representational learning", "molecular systems" ], "note": { "typesetting": "TeX", "pages": 14, "language": "en", "license": "arXiv", "status": "editable" } } }