Two types of neural network calculators are currently included in the Neural repository: CartesianNeural (based on pure Cartesian coordinates) and BPNeural (based on Behler-Parrinello symmetry functions). These calculators construct neural network potential energy surfaces, which are first trained on the potential energies and forces of a set of reference images. The trained calculator can then predict the potential energy and forces of an atomic system. Depending on the calculator, either the explicit atomic positions or functions of them are fed as inputs. The inputs propagate through a number of hidden layers and nodes, finally producing the potential energy of the system in the output layer. Analytical forces, calculated via back-propagation, are also available in the calculators. For more information on the theory of neural networks, the reader is referred to Neural Networks by Rojas.
The code is designed to integrate with ASE and to be as intuitive as possible. The machine-learning calculators are ASE calculator objects and contain all the normal methods inherent to ASE calculators; in addition, they provide a 'train' method.
Here we give a brief overview of the structure of how a calculator is trained. For detailed instructions, see the page for the particular flavor of calculator you are interested in.
Training the calculator
Training the calculator can be as simple as
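A minimal sketch of such a training call, assuming the calculator is importable from a module named neural (the actual import path depends on how the Neural repository is installed) and that a trajectory file named 'train.traj' exists; both names are illustrative:

```python
# Hypothetical import path; adjust to your installation of the Neural repository.
from neural import BPNeural

# Construct an untrained Behler-Parrinello calculator.
calc = BPNeural()

# Fit the neural network to the energies and forces stored in the
# trajectory file (the filename is illustrative).
calc.train('train.traj')
```
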
The argument to the train method is the set of images to be used for training; in the example it is the filename of an ASE trajectory file. This can also be an ASE database file ('.db', recommended for large or inhomogeneous data sets) or simply a list of images.
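To pass an explicit list of images instead, the file can first be read into memory with ASE's read function; a sketch, where calc is an already constructed calculator and the filename is illustrative:

```python
from ase.io import read

# index=':' returns every image in the file as a list of Atoms objects.
images = read('train.traj', index=':')

# The train method also accepts this list directly.
calc.train(images)
```
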
Using the calculator
The trained calculator can now be used just like any other ASE calculator, e.g.,
atoms = ...
atoms.set_calculator(calc)
energy = atoms.get_potential_energy()
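Since the calculators also implement analytical forces via back-propagation, forces are available through the standard ASE interface in the same way:

```python
# Forces on each atom, computed analytically by back-propagation
# through the trained network.
forces = atoms.get_forces()
```
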