TY - JOUR
T1 - GPUMD: A package for constructing accurate machine-learned potentials and performing highly efficient atomistic simulations
AU - Fan, Zheyong
AU - Wang, Yanzhou
AU - Ying, Penghua
AU - Song, Keke
AU - Wang, Junjie
AU - Wang, Yong
AU - Zeng, Zezhu
AU - Xu, Ke
AU - Lindgren, Eric
AU - Rahm, J. Magnus
AU - Gabourie, Alexander J.
AU - Liu, Jiahui
AU - Dong, Haikuan
AU - Wu, Jianyang
AU - Chen, Yue
AU - Zhong, Zheng
AU - Sun, Jian
AU - Erhart, Paul
AU - Su, Yanjing
AU - Ala-Nissila, Tapio
N1 - Funding Information:
Z.F. acknowledges support from the National Natural Science Foundation of China (NSFC) (Grant No. 11974059). Y.W., K.S., J.L., and H.D. acknowledge the support from the National Key Research and Development Program of China (Grant No. 2018YFB0704300). The work at Nanjing University (by J.J.W., Y.W., and J.S.) is partially supported by the NSFC (Grant Nos. 12125404, 11974162, and 11834006) and the Fundamental Research Funds for the Central Universities. The calculations performed at Nanjing University were carried out using supercomputers at the High Performance Computing Center of Collaborative Innovation Center of Advanced Microstructures, the high-performance supercomputing center of Nanjing University. P.Y. and Z.Z. acknowledge support from the NSFC (Grant No. 11932005). T.A.-N. has been supported in part by the Academy of Finland through its QTF Centre of Excellence program (Grant No. 312298) and the Technology Industries of Finland Centennial Foundation Future Makers grant. Z.Z. and Y.C. gratefully acknowledge the research computing facilities offered by ITS, HKU. J.M.R., E.L., and P.E. acknowledge support from the Swedish Research Council (Grant Nos. 2018-06482, 2020-04935, and 2021-05072) and the Swedish Foundation for Strategic Research (SSF) via the SwedNess program (Grant No. GSn15-0008), as well as computational resources provided by the Swedish National Infrastructure for Computing (SNIC) at NSC, C3SE, and PDC, partially funded by the Swedish Research Council (Grant No. 2018-05973).
Publisher Copyright:
© 2022 Author(s).
PY - 2022/9/21
Y1 - 2022/9/21
N2 - We present our latest advancements of machine-learned potentials (MLPs) based on the neuroevolution potential (NEP) framework introduced in Fan et al. [Phys. Rev. B 104, 104309 (2021)] and their implementation in the open-source package gpumd. We increase the accuracy of NEP models both by improving the radial functions in the atomic-environment descriptor using a linear combination of Chebyshev basis functions and by extending the angular descriptor with some four-body and five-body contributions as in the atomic cluster expansion approach. We also detail our efficient implementation of the NEP approach in graphics processing units as well as our workflow for the construction of NEP models and demonstrate their application in large-scale atomistic simulations. By comparing to state-of-the-art MLPs, we show that the NEP approach not only achieves above-average accuracy but also is far more computationally efficient. These results demonstrate that the gpumd package is a promising tool for solving challenging problems requiring highly accurate, large-scale atomistic simulations. To enable the construction of MLPs using a minimal training set, we propose an active-learning scheme based on the latent space of a pre-trained NEP model. Finally, we introduce three separate Python packages, viz., gpyumd, calorine, and pynep, that enable the integration of gpumd into Python workflows.
UR - http://www.scopus.com/inward/record.url?scp=85138439619&partnerID=8YFLogxK
U2 - 10.1063/5.0106617
DO - 10.1063/5.0106617
M3 - Article
C2 - 36137808
AN - SCOPUS:85138439619
SN - 0021-9606
VL - 157
SP - 1
EP - 26
JO - Journal of Chemical Physics
JF - Journal of Chemical Physics
IS - 11
M1 - 114801
ER -