Neural: Machine Learning for Atomistics
Welcome to the Neural project! Neural is an open-source code designed to bring machine learning easily to atomistic calculations. This allows one to predict (or, really, interpolate) calculations on the potential energy surface by optimizing a neural-network representation of a "training set" of atomic images. The code works by learning from any other calculator (usually DFT) that can provide energies as a function of atomic coordinates. In principle, these predictions can be made with arbitrary accuracy, approaching that of the original calculator.
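The idea of optimizing a neural-network representation of a training set can be illustrated with a minimal, self-contained sketch. This is not the Neural code itself: it fits a one-hidden-layer tanh network, by plain stochastic gradient descent, to a toy one-dimensional "potential energy surface" E(x) = x², standing in for energies supplied by a parent calculator.

```python
# Toy sketch (pure Python, no Neural/Amp API assumed): interpolate a training
# set of (coordinate, energy) pairs with a small neural network.
import math
import random

random.seed(0)

# Training set: (coordinate, energy) pairs, as a parent calculator would supply.
# Here the "parent calculator" is simply E(x) = x^2 on the interval [-1, 1].
train = [(x / 10.0, (x / 10.0) ** 2) for x in range(-10, 11)]

H = 8  # hidden-layer width
w1 = [random.uniform(-1, 1) for _ in range(H)]  # input-to-hidden weights
b1 = [random.uniform(-1, 1) for _ in range(H)]  # hidden biases
w2 = [random.uniform(-1, 1) for _ in range(H)]  # hidden-to-output weights
b2 = 0.0                                        # output bias

def predict(x):
    """Network output: a weighted sum of tanh hidden units."""
    hidden = [math.tanh(w1[j] * x + b1[j]) for j in range(H)]
    return sum(w2[j] * hidden[j] for j in range(H)) + b2

def train_epoch(lr=0.05):
    """One pass of per-sample gradient descent on the squared error."""
    global b2
    for x, e in train:
        hidden = [math.tanh(w1[j] * x + b1[j]) for j in range(H)]
        err = sum(w2[j] * hidden[j] for j in range(H)) + b2 - e
        for j in range(H):
            grad_h = err * w2[j] * (1.0 - hidden[j] ** 2)  # backprop through tanh
            w2[j] -= lr * err * hidden[j]
            w1[j] -= lr * grad_h * x
            b1[j] -= lr * grad_h
        b2 -= lr * err

for _ in range(2000):
    train_epoch()

rmse = math.sqrt(sum((predict(x) - e) ** 2 for x, e in train) / len(train))
print(f"training RMSE: {rmse:.4f}")
```

The real code applies the same principle to many-atom systems, where the network input is a fingerprint of each atom's environment rather than a bare coordinate.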
Neural is designed to integrate closely with the Atomic Simulation Environment (ASE). As such, the interface is in pure Python, although several compute-heavy parts of the underlying code also have Fortran versions to accelerate the calculations. The close integration with ASE means that any calculator that works with ASE, including EMT, GPAW, DACAPO, VASP, NWChem, and Gaussian, can easily be used as the parent method.
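The parent-method workflow can be sketched without any ASE dependency: a cheap analytic potential stands in for an expensive parent calculator and supplies the (coordinates, energy) training pairs that the regression scheme then fits. The Lennard-Jones form below is only a stand-in, not part of the Neural API.

```python
# Hypothetical workflow sketch: build a training set by calling a "parent
# calculator" (here a Lennard-Jones pair energy, standing in for DFT) on a
# series of atomic configurations ("images").

def lj_energy(r, epsilon=1.0, sigma=1.0):
    """Lennard-Jones pair energy, standing in for an expensive parent call."""
    return 4.0 * epsilon * ((sigma / r) ** 12 - (sigma / r) ** 6)

# Sample a range of interatomic separations for a diatomic system.
separations = [0.95 + 0.01 * i for i in range(60)]

# The training set: (coordinates, energy) pairs from the parent method.
training_set = [(r, lj_energy(r)) for r in separations]

# The sampled minimum should sit near r = 2^(1/6) * sigma with E near -epsilon.
r_min, e_min = min(training_set, key=lambda pair: pair[1])
print(f"minimum near r = {r_min:.2f}, E = {e_min:.3f}")
```

In actual use, the analytic function above would be replaced by an ASE-attached calculator evaluating each image, and the resulting pairs would be handed to the network for training.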
Neural is developed in the School of Engineering at Brown University, primarily by Andrew Peterson and Alireza Khorshidi, and is released under the GNU General Public License. This is a relatively new project, so things are constantly changing!
Important: We have transitioned from Neural, which allows only neural-network schemes with limited fingerprinting options, to our new code Amp. Amp is designed to let users specify their own machine-learning regression scheme and their own description of the environment around each atom, allowing for customizable machine-learning schemes beyond neural networks. As of this writing, Neural is more stable than Amp, but all of our major development efforts will be focused on the Amp project. The last major release of Neural is expected to be 0.2; the first major release of Amp is 0.3.