More recently, the calculus of variations has found applications in other fields such as economics and electrical engineering. Much of the mathematics underlying control theory, for instance, can be regarded as part of the calculus of variations. This book is an introductory account of the calculus of variations suitable for advanced undergraduate and graduate students of mathematics, physics, or engineering.
The mathematical background assumed of the reader is a course in multivariable calculus and some familiarity with the elements of real analysis and ordinary differential equations. The book focuses on variational problems that involve one independent variable; the fixed-endpoint problem and problems with constraints are discussed in detail. The calculus of variations is concerned with the maxima or minima, collectively called extrema, of functionals. A functional maps functions to scalars, so functionals have been described as "functions of functions." For a function space of continuous functions, extrema of corresponding functionals are called strong extrema or weak extrema, depending on whether the comparison functions are required to be close only in value or in both value and first derivative.
Both strong and weak extrema of functionals are defined over a space of continuous functions, but weak extrema carry the additional requirement that the first derivatives of the functions in the space be continuous. Thus a strong extremum is also a weak extremum, but the converse may not hold. Finding strong extrema is more difficult than finding weak extrema.
Finding the extrema of functionals is similar to finding the maxima and minima of functions. The maxima and minima of a function may be located by finding the points where its derivative vanishes (i.e., is equal to zero). The extrema of functionals may be obtained by finding functions for which the functional derivative is equal to zero. This leads to solving the associated Euler–Lagrange equation.
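The display that belongs here is missing from this copy; assuming the standard single-variable setting the text works in, the functional and its Euler–Lagrange equation are:

```latex
J[f] = \int_a^b L\bigl(x, f(x), f'(x)\bigr)\, dx ,
\qquad
\frac{\partial L}{\partial f} - \frac{d}{dx}\,\frac{\partial L}{\partial f'} = 0 .
```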
Also, as previously mentioned, the left side of the equation is zero. According to the fundamental lemma of the calculus of variations, the part of the integrand in parentheses must therefore vanish. In general this gives a second-order ordinary differential equation, which can be solved to obtain the extremal function f(x).
The Euler–Lagrange equation is a necessary, but not sufficient, condition for an extremum of J[f]. A sufficient condition for a minimum is given in the section Variations and sufficient condition for a minimum.
The arc length of the curve is given by the usual arc-length integral. The Euler–Lagrange equation will now be used to find the extremal function f(x) that minimizes the functional A[y]. Since f does not appear explicitly in L, the first term in the Euler–Lagrange equation vanishes for all f(x), and thus the partial derivative of L with respect to f′ must be constant.
In other words, the shortest distance between two points is a straight line.
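This claim can be checked numerically with a sketch not present in the source: discretize the arc-length functional and compare the straight line to a perturbed path with the same endpoints (the endpoint values and perturbation below are illustrative assumptions).

```python
import math

def arc_length(xs, ys):
    """Discrete arc length of the piecewise-linear curve through (xs[i], ys[i])."""
    return sum(math.hypot(xs[i + 1] - xs[i], ys[i + 1] - ys[i])
               for i in range(len(xs) - 1))

n = 200
xs = [i / n for i in range(n + 1)]

# Straight line from (0, 0) to (1, 1): y = x.
straight = [x for x in xs]

# A perturbed path with the same endpoints: y = x + 0.1*sin(pi*x).
perturbed = [x + 0.1 * math.sin(math.pi * x) for x in xs]

L_straight = arc_length(xs, straight)
L_perturbed = arc_length(xs, perturbed)
print(L_straight, L_perturbed)
```

The straight line gives length √2, and any perturbation with the same endpoints comes out longer, as the Euler–Lagrange analysis predicts.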
When the integrand L does not depend explicitly on x, the Euler–Lagrange equation can be simplified to the Beltrami identity. [14] By Noether's theorem, there is an associated conserved quantity. In this case, this quantity is the Hamiltonian, the Legendre transform of the Lagrangian, which often coincides with the energy of the system. This is minus the constant in Beltrami's identity.
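The identity itself is missing from this copy; in the standard form, assuming L = L(f, f′) with no explicit x-dependence, it reads:

```latex
L - f' \,\frac{\partial L}{\partial f'} = C ,
```

where C is a constant along extremals. The Hamiltonian f′ ∂L/∂f′ − L then equals −C, which matches the remark that it is minus the constant in Beltrami's identity.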
The discussion thus far has assumed that extremal functions possess two continuous derivatives, although the existence of the integral J requires only first derivatives of trial functions. The condition that the first variation vanish at an extremal may be regarded as a weak form of the Euler–Lagrange equation. The theorem of Du Bois-Reymond asserts that this weak form implies the strong form: if L has continuous first and second derivatives with respect to all of its arguments, then the extremal function has two continuous derivatives and satisfies the Euler–Lagrange equation.
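The weak form referred to here can be written out explicitly, under the usual fixed-endpoint assumptions (which this copy does not restate):

```latex
\delta J[f](v) \;=\; \int_a^b \left( \frac{\partial L}{\partial f}\, v \;+\; \frac{\partial L}{\partial f'}\, v' \right) dx \;=\; 0
\qquad \text{for all } v \text{ with } v(a) = v(b) = 0 .
```

The strong form is obtained by integrating the second term by parts, which is legitimate only once f′ is known to be differentiable; Du Bois-Reymond's theorem removes that extra assumption.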
Euler even postulated that "every effect in nature follows a maximum or minimum principle."
Hilbert was the first to give good conditions for the Euler–Lagrange equations to yield a stationary solution. Within a convex region, and with a positive, thrice-differentiable Lagrangian, the solutions are composed of a countable collection of sections that either run along the boundary or satisfy the Euler–Lagrange equations in the interior.
However, Lavrentiev showed that there are circumstances in which there is no optimal solution, but one can be approached arbitrarily closely by increasing the number of sections.
For instance: a zigzag path gives a better solution than any smooth path, and increasing the number of sections improves the solution.
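The specific example has been lost from this copy; as a stand-in (an assumption, not the author's example), the classic Bolza-type functional J[y] = ∫₀¹ ((y′)² − 1)² + y² dx with y(0) = y(1) = 0 shows the same behavior: a zigzag of slope ±1 beats the smooth candidate y ≡ 0, and more teeth do better. A rough finite-difference check:

```python
def bolza(ys, h):
    """Discretized J[y] = ∫ ((y')² - 1)² + y² dx using forward differences."""
    total = 0.0
    for i in range(len(ys) - 1):
        dy = (ys[i + 1] - ys[i]) / h
        total += ((dy * dy - 1.0) ** 2 + ys[i] ** 2) * h
    return total

n = 1000
h = 1.0 / n
xs = [i * h for i in range(n + 1)]

def sawtooth(x, teeth):
    """Zigzag with slopes ±1 and the given number of teeth; zero at both endpoints."""
    period = 1.0 / teeth
    t = x % period
    return t if t <= period / 2 else period - t

j_flat = bolza([0.0] * (n + 1), h)  # smooth candidate y ≡ 0 scores J = 1
j_zigs = [bolza([sawtooth(x, k) for x in xs], h) for k in (4, 20, 100)]
print(j_flat, j_zigs)
```

Each finer zigzag drives the first term to zero while shrinking the y² term, so the infimum (zero) is approached but never attained by a smooth path.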
Plateau's problem consists of finding a function that minimizes the surface area while assuming prescribed values on the boundary of D; the solutions are called minimal surfaces. The Euler–Lagrange equation for this problem is nonlinear. It is often sufficient to consider only small displacements of the membrane, whose energy difference from no displacement is approximated by the Dirichlet integral of the displacement.
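The displays are missing from this copy; for a graph z = u(x, y) over D, the standard forms are the minimal surface equation and the small-displacement (Dirichlet) energy:

```latex
(1 + u_y^2)\, u_{xx} - 2\, u_x u_y\, u_{xy} + (1 + u_x^2)\, u_{yy} = 0 ,
\qquad
E[u] \approx \frac{1}{2} \iint_D \left( u_x^2 + u_y^2 \right) dx\, dy .
```

The first is the nonlinear Euler–Lagrange equation of the area functional; the second is its quadratic approximation for small u, whose Euler–Lagrange equation is Laplace's equation.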
Since v vanishes on C and the first variation vanishes, the minimizing u satisfies Laplace's equation in D; the proof for the case of one-dimensional integrals may be adapted to this case. The difficulty with this reasoning is the assumption that the minimizing function u must have two derivatives. Riemann argued that the existence of a smooth minimizing function was assured by the connection with the physical problem: membranes do indeed assume configurations with minimal potential energy.
Riemann named this idea the Dirichlet principle in honor of his teacher Peter Gustav Lejeune Dirichlet. However, Weierstrass gave an example of a variational problem with no solution: the infimum is not attained by any admissible function. The function that minimizes the potential energy with no restriction on its boundary values will be denoted by u. Provided that f and g are continuous, regularity theory implies that the minimizing function u will have two derivatives.
In taking the first variation, no boundary condition need be imposed on the increment v. If we allow v to assume arbitrary boundary values, this implies that u must satisfy the boundary condition obtained by setting the boundary term of the first variation to zero. This boundary condition is a consequence of the minimizing property of u: it is not imposed beforehand. Such conditions are called natural boundary conditions. For such a trial function, V depends on the constant c. By appropriate choice of c, V can assume any value unless the quantity inside the brackets vanishes.
Therefore, the variational problem is meaningless unless that quantity vanishes. This condition implies that the net external forces on the system are in equilibrium. If these forces are in equilibrium, then the variational problem has a solution, but it is not unique, since an arbitrary constant may be added.
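Assuming the elided setup is the usual Neumann problem for the membrane, −Δu = f in D with normal derivative g on the boundary C, the equilibrium (solvability) condition referred to here takes the form:

```latex
\iint_D f \, dx\, dy \;+\; \oint_C g \, ds \;=\; 0 ,
```

i.e., the applied body forces and boundary forces must balance for a steady displacement to exist.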
Further details and examples are in Courant and Hilbert. Both one-dimensional and multi-dimensional eigenvalue problems can be formulated as variational problems. It is shown below that the minimizing u satisfies the associated Euler–Lagrange equation; it can be shown (see Gelfand and Fomin) that the minimizing u has two derivatives and satisfies that equation.
This variational characterization of eigenvalues leads to the Rayleigh–Ritz method: choose an approximating u as a linear combination of basis functions (for example, trigonometric functions) and carry out a finite-dimensional minimization among such linear combinations.
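A minimal sketch of the method, not taken from the source: approximate the lowest eigenvalue of −u″ = λu with u(0) = u(1) = 0 (exact value π²) using two polynomial trial functions. The basis choice and quadrature are illustrative assumptions.

```python
import math

def simpson(f, a, b, n=2000):
    """Composite Simpson's rule on [a, b] (n must be even)."""
    h = (b - a) / n
    s = f(a) + f(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * f(a + i * h)
    return s * h / 3

# Two trial functions satisfying u(0) = u(1) = 0, and their derivatives.
phi  = [lambda x: x * (1 - x), lambda x: (x * (1 - x)) ** 2]
dphi = [lambda x: 1 - 2 * x,   lambda x: 2 * x * (1 - x) * (1 - 2 * x)]

# Stiffness A_jk = ∫ phi_j' phi_k' dx and mass M_jk = ∫ phi_j phi_k dx.
A = [[simpson(lambda x: dphi[j](x) * dphi[k](x), 0, 1) for k in range(2)]
     for j in range(2)]
M = [[simpson(lambda x: phi[j](x) * phi[k](x), 0, 1) for k in range(2)]
     for j in range(2)]

# Smallest root of det(A - lam*M) = 0: a 2x2 generalized eigenvalue problem.
a = M[0][0] * M[1][1] - M[0][1] * M[1][0]
b = -(A[0][0] * M[1][1] + A[1][1] * M[0][0]
      - A[0][1] * M[1][0] - A[1][0] * M[0][1])
c = A[0][0] * A[1][1] - A[0][1] * A[1][0]
lam_min = (-b - math.sqrt(b * b - 4 * a * c)) / (2 * a)

print(lam_min, math.pi ** 2)
```

With just these two basis functions the Ritz estimate agrees with π² to about four significant figures, illustrating why the method is "often surprisingly accurate"; since it minimizes over a subspace, the estimate bounds the true eigenvalue from above.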
This method is often surprisingly accurate. The next smallest eigenvalue and eigenfunction can be obtained by minimizing Q under the additional constraint of orthogonality to the first eigenfunction. This procedure can be extended to obtain the complete sequence of eigenvalues and eigenfunctions for the problem. The variational problem also applies to more general boundary conditions; after integration by parts, the boundary terms appear explicitly in the first variation.
If we first require that v vanish at the endpoints, the first variation will vanish for all such v only if u satisfies the Euler–Lagrange equation. If u satisfies this condition, then the first variation will vanish for arbitrary v only if the boundary terms vanish. These latter conditions are the natural boundary conditions for this problem, since they are not imposed on trial functions for the minimization, but are instead a consequence of the minimization. Eigenvalue problems in higher dimensions are defined in analogy with the one-dimensional case.
For example, given a domain D with boundary B in three dimensions, we may define the analogous quotient of integrals, and the minimizing u satisfies the associated Euler–Lagrange equation. This result depends upon the regularity theory for elliptic partial differential equations; see Jost and Li–Jost for details. Many extensions, including completeness results, asymptotic properties of the eigenvalues, and results concerning the nodes of the eigenfunctions, are in Courant and Hilbert.
Fermat's principle states that light takes a path that locally minimizes the optical length between its endpoints.
After integration by parts of the first term within brackets, we obtain the Euler—Lagrange equation. The light rays may be determined by integrating this equation. This formalism is used in the context of Lagrangian optics and Hamiltonian optics. After integration by parts in the separate regions and using the Euler—Lagrange equations, the first variation takes the form.
Snell's law for refraction requires that these terms be equal. As this calculation demonstrates, Snell's law is equivalent to vanishing of the first variation of the optical path length. The optical length of the curve is given by the integral of the refractive index along the arc length of the path.
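The equivalence can be checked numerically with a sketch not present in the source (the geometry below is an assumed example): minimize the optical path length across a flat interface and verify that n₁ sin θ₁ = n₂ sin θ₂ holds at the minimizer.

```python
import math

n1, n2 = 1.0, 1.5   # refractive indices, e.g. air to glass (assumed values)
# Source at (0, 1) in medium 1, target at (1, -1) in medium 2;
# the interface lies along y = 0 and the ray crosses it at (x, 0).

def optical_length(x):
    return n1 * math.hypot(x, 1.0) + n2 * math.hypot(1.0 - x, 1.0)

# optical_length is strictly convex, so ternary search finds its minimum.
lo, hi = 0.0, 1.0
for _ in range(200):
    m1, m2 = lo + (hi - lo) / 3, hi - (hi - lo) / 3
    if optical_length(m1) < optical_length(m2):
        hi = m2
    else:
        lo = m1
x = (lo + hi) / 2

# Sines of the angles of incidence and refraction, measured from the normal.
sin1 = x / math.hypot(x, 1.0)
sin2 = (1.0 - x) / math.hypot(1.0 - x, 1.0)
print(n1 * sin1, n2 * sin2)  # equal at the minimizer: Snell's law
```

Vanishing of the first variation at the crossing point is exactly the statement that these two products agree, so the optimizer recovers Snell's law without it ever being imposed.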