Sunday, May 24, 2015

Energy Minimization: Finding and Connecting Stationary Points



Let’s start by considering a simple example with one coordinate (for example a distance), $R$, where the energy is given by
$$ E= \frac{1}{2}k(R-R_e)^2$$
The corresponding potential energy surface, known as a quadratic PES, is shown in Figure 1.

Figure 1. Plot of the (quadratic) potential energy surface given by Equation 1.

Here $k$ is a force constant and $R_e$ is the value of $R$ at which the energy is lowest (this is known as the equilibrium geometry), and this is what we’d like to find.  We start by taking a guess at $R$, which we call $R_g$.  We already know how to check whether this is an energy minimum: we need to evaluate the gradient, which is
$$\left(\frac{\partial E}{\partial R}\right)_{R_g}=k(R_g-R_e)$$
It’s clear that the gradient is non-zero for $R_g \ne R_e$.  However, rearranging the equation shows that the gradient also tells us how to change $R$ to reach the energy minimum
$$R_e=R_g-\frac{1}{k}\left(\frac{\partial E}{\partial R}\right)_{R_g}=R_g+\frac{1}{k}F_g$$
where $F$ is the force, which is the negative gradient.  If we know $k$ we can find $R_e$ in one step starting from $R_g$ (Figure 2).

Figure 2. The equilibrium geometry can be found in one step on a quadratic PES
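
Eq. 3 amounts to a single line of code. Here is a minimal Python sketch, where the values of $k$, $R_e$, and the guess $R_g$ are arbitrary choices for illustration:

```python
# A minimal sketch of Eq. 3: on a quadratic PES with a known force
# constant k, the minimum is found in a single step from any guess.
# The values of k, R_e, and R_g below are arbitrary illustrations.

k, R_e = 2.0, 1.5

def gradient(R):
    """Gradient of E = 1/2 k (R - R_e)^2 evaluated at R."""
    return k * (R - R_e)

R_g = 0.7                          # guess geometry
R_new = R_g - gradient(R_g) / k    # Eq. 3
print(R_new)                       # 1.5, i.e. exactly R_e
```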

If we don’t know $k$ then it is safest to take many small steps, i.e. scale the gradient by some small constant ($c$) and repeat until the gradient falls below some threshold (Figure 3):
$$R_{n+1}=R_n-c\left(\frac{\partial E}{\partial R}\right)_{R_n}$$

Figure 3. Energy minimization by steepest descent.

This general approach is known as steepest descent, since the negative of the gradient points in the direction in which the energy is decreasing (descending) the most.  Notice that this means that minimizing the energy will find the minimum closest to the starting geometry.  When steepest descent is used to find equilibrium geometries it is often combined with a so-called “line search”, which tries to find the lowest energy along the direction of the current gradient, but the general idea is still the same.
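
Here is a minimal Python sketch of steepest descent (Eq. 4) on the same quadratic PES; the step scale $c$ and the convergence threshold are arbitrary choices:

```python
# A minimal sketch of steepest descent (Eq. 4) on the quadratic PES.
# The step scale c and the convergence threshold are arbitrary choices.

k, R_e = 2.0, 1.5

def gradient(R):
    return k * (R - R_e)

R = 0.7            # guess geometry
c = 0.1            # small, constant step scale
threshold = 1e-6   # stop when |gradient| falls below this

n_steps = 0
while abs(gradient(R)) > threshold:
    R = R - c * gradient(R)   # Eq. 4
    n_steps += 1

print(R, n_steps)  # converges to R_e = 1.5, but in dozens of steps, not one
```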

Another use of steepest descent is to connect a transition state with its two nearest minima, which tells you which two minima the transition state connects.  Here the guess structure is the transition state structure displaced slightly along the normal mode corresponding to the imaginary frequency (the transition state structure itself cannot be used because its gradient is zero).  This path is the minimum energy path (MEP) between reactants and products, and the resulting collection of structures is known as the intrinsic reaction coordinate (IRC).  An IRC is usually depicted as a plot of the potential energy vs the mass-weighted root-mean-square displacement of the Cartesian coordinates relative to some reference geometry (usually the transition state), usually denoted $s$ (Figure 4).  When we draw a potential energy surface for a reaction we usually mean an IRC.

Figure 4. Depiction of an IRC or MEP: two steepest descent paths starting from the transition state
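
The same idea can be illustrated on a toy two-dimensional surface. The sketch below uses the model PES $E(x,y) = (x^2-1)^2 + y^2$, which has a saddle point at the origin and minima at $(\pm 1, 0)$. A real IRC calculation uses mass-weighted coordinates and a dedicated integrator, but the recipe of displacing along the imaginary mode and descending is the same:

```python
import numpy as np

# A toy illustration of tracing out an IRC/MEP by steepest descent, using
# the model 2D surface E(x, y) = (x^2 - 1)^2 + y^2, which has a saddle
# point at (0, 0) and minima at (+1, 0) and (-1, 0).

def grad(q):
    x, y = q
    return np.array([4.0 * x * (x**2 - 1.0), 2.0 * y])

def descend(q, c=0.01, threshold=1e-6, max_steps=100_000):
    """Follow the negative gradient until it (nearly) vanishes."""
    for _ in range(max_steps):
        g = grad(q)
        if np.linalg.norm(g) < threshold:
            break
        q = q - c * g
    return q

ts = np.array([0.0, 0.0])   # transition state: the gradient is zero here
delta = 0.01                # small displacement along the imaginary mode (x)
forward = descend(ts + np.array([delta, 0.0]))
backward = descend(ts - np.array([delta, 0.0]))
print(forward, backward)    # ends up near the two minima, (±1, 0)
```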

Clearly, the more steps we take, the higher the computational cost, and in general we want to take as few steps as possible.  As I mentioned above, if we knew $k$ we could find the energy minimum in one step.  For a quadratic surface $k$ is the second derivative of $E$ with respect to $R$ (or, equivalently, the first derivative of the gradient), so Eq 3 becomes
$$ R_e=R_g-\left(\frac{\partial^2 E}{\partial R^2}\right)^{-1}_{R_g}\left(\frac{\partial E}{\partial R}\right)_{R_g}$$
Such a step is known as a Newton-Raphson step or quadratic step.  This works fine if the surface is quadratic, which is a good approximation near the minimum but not far away.  For really bad guesses, where the PES tends to be flat, the quadratic step can be too large (Figure 5).

Figure 5. Quadratic steps (a) close to and (b) far from the minimum.
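
To see why the quadratic step fails far from the minimum, here is a sketch of repeated Newton-Raphson steps applied to a Morse potential, $E(R) = D(1-e^{-a(R-R_e)})^2$, which is roughly quadratic near $R_e$ but flat at large $R$; the values of $D$, $a$, and $R_e$ are arbitrary illustrative choices:

```python
import math

# Repeated Newton-Raphson steps on a Morse potential,
# E(R) = D * (1 - exp(-a * (R - R_e)))**2.
# D, a, and R_e are arbitrary illustrative values.

D, a, R_e = 1.0, 1.0, 1.5

def grad(R):
    u = math.exp(-a * (R - R_e))
    return 2.0 * D * a * u * (1.0 - u)

def hess(R):
    u = math.exp(-a * (R - R_e))
    return 2.0 * D * a**2 * u * (2.0 * u - 1.0)

def newton_raphson(R, threshold=1e-8, max_steps=50):
    for n in range(max_steps):
        if abs(grad(R)) < threshold:
            return R, n
        R = R - grad(R) / hess(R)   # the quadratic step
    return R, max_steps

print(newton_raphson(1.6))  # good guess: converges to R_e in a few steps
print(newton_raphson(3.0))  # bad guess: the Hessian is negative out here,
                            # so each step moves *away* from the minimum,
                            # onto the flat tail where the gradient happens
                            # to fall below the threshold
```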

Most algorithms will scale back quadratic steps that are considered unreasonably large, and even with quadratic energy minimizers many steps are usually needed:
$$R_{n+1}=R_n-\left(\frac{\partial^2 E}{\partial R^2}\right)^{-1}_{R_n}\left(\frac{\partial E}{\partial R}\right)_{R_n}$$
$$ \mathbf{q}_{n+1}=\mathbf{q}_{n}-c\mathbf{H}^{-1}_n\mathbf{g}_n$$
Eq. 7 is the same equation as Eq. 6 but in many dimensions (with a scale factor $c$ that can be used to damp overly large steps): $\mathbf{q}_n$ are the current coordinates (for example Cartesian coordinates) for step number $n$ ($n = 0$ for the guess coordinates), $\mathbf{q}_{n+1}$ are the new coordinates, and $\mathbf{g}_n$ and $\mathbf{H}_n$ are the gradient and the Hessian, both evaluated at the current coordinates. Note that, in principle, the Hessian must be recomputed at each step.
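
Here is a minimal sketch of Eq. 7 on a toy two-dimensional anharmonic surface, with a crude cap on the step length to guard against overly large quadratic steps. The energy function, the cap, and the use of numerical derivatives are all illustrative choices; real programs use analytic gradients and approximately updated Hessians (e.g. BFGS) rather than recomputing $\mathbf{H}$ from scratch at every step:

```python
import numpy as np

# Eq. 7 in two dimensions with a crude cap on the step length to guard
# against unreasonably large quadratic steps. The toy energy function
# and all numerical settings are illustrative choices.

def energy(q):
    x, y = q                    # toy anharmonic PES, minimum at (1.0, 0.5)
    return (x - 1.0)**4 + (x - 1.0)**2 + 2.0 * (y - 0.5)**2

def num_grad(q, h=1e-5):
    """Central-difference gradient of the energy."""
    g = np.zeros_like(q)
    for i in range(len(q)):
        dq = np.zeros_like(q); dq[i] = h
        g[i] = (energy(q + dq) - energy(q - dq)) / (2.0 * h)
    return g

def num_hess(q, h=1e-4):
    """Central-difference Hessian (derivative of the gradient)."""
    H = np.zeros((len(q), len(q)))
    for i in range(len(q)):
        dq = np.zeros_like(q); dq[i] = h
        H[:, i] = (num_grad(q + dq) - num_grad(q - dq)) / (2.0 * h)
    return H

q = np.array([3.0, 2.0])        # guess coordinates, q_0
max_step = 0.5                  # cap on the step length (arbitrary)
while True:
    g = num_grad(q)
    if np.linalg.norm(g) < 1e-6:
        break
    step = -np.linalg.solve(num_hess(q), g)   # Eq. 7 with c = 1
    if np.linalg.norm(step) > max_step:       # scale back large steps
        step *= max_step / np.linalg.norm(step)
    q = q + step

print(q)  # close to the minimum at (1.0, 0.5)
```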


Friday, May 1, 2015

Computational Chemistry Highlights: April issue

The April issue of Computational Chemistry Highlights is out.

CCH is an overlay journal that identifies the most important papers in computational and theoretical chemistry published in the last 1-2 years. CCH is not affiliated with any publisher: it is a free resource run by scientists for scientists. You can read more about it here.

The table of contents for this issue features contributions from CCH editors Mario Barbatti, Steven Bachrach, and Jan Jensen:

Big Data Meets Quantum Chemistry Approximations: The ∆-Machine Learning Approach