Class RungeKutta¶
Defined in File RungeKutta.m
Class Documentation¶
-
class RungeKutta : public handle¶
-
Class container for Runge-Kutta solvers for systems of Ordinary Differential Equations (ODEs) or Differential Algebraic Equations (DAEs). The user simply defines the Butcher tableau of the solver, which is defined as:
\[\begin{split} \begin{array}{c|c} \mathbf{c} & \mathbf{A} \\ \hline & \mathbf{b} \\ & \hat{\mathbf{b}} \end{array} \end{split}\]where \( \mathbf{A} \) is the Runge-Kutta matrix (lower triangular matrix):
\[\begin{split} \mathbf{A} = \begin{bmatrix} a_{11} & a_{12} & \dots & a_{1s} \\ a_{21} & a_{22} & \dots & a_{2s} \\ \vdots & \vdots & \ddots & \vdots \\ a_{s1} & a_{s2} & \dots & a_{ss} \end{bmatrix}, \end{split}\]\( \mathbf{b} \) is the Runge-Kutta weights vector relative to a method of order \( p \) (row vector):
\[ \mathbf{b} = \left[ b_1, b_2, \dots, b_s \right], \]\( \hat{\mathbf{b}} \) is the (optional) embedded Runge-Kutta weights vector relative to a method of order \( \hat{p} \) (usually \( \hat{p} = p-1 \) or \( \hat{p} = p+1 \)) (row vector):
\[ \hat{\mathbf{b}} = \left[ \hat{b}_1, \hat{b}_2, \dots, \hat{b}_s \right], \]and \( \mathbf{c} \) is the Runge-Kutta nodes vector (column vector):
\[ \mathbf{c} = \left[ c_1, c_2, \dots, c_s \right]^T. \]
Public Functions
-
function RungeKutta(t_name, t_order, tbl)¶
-
Class constructor that requires the name of the solver, the order of the method, and the Butcher tableau used to integrate the system.
- Parameters
-
t_name – The name of the solver.
t_order – Order of the RK method.
tbl.A – The matrix \( \mathbf{A} \) (lower triangular matrix).
tbl.b – The weights vector \( \mathbf{b} \) (row vector).
tbl.b_e – The embedded weights vector \( \hat{\mathbf{b}} \) (row vector).
tbl.c – The nodes vector \( \mathbf{c} \) (column vector).
- Returns
-
An instance of the class.
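As an illustration of the `tbl` structure above, the classic fourth-order Runge-Kutta method can be encoded and sanity-checked as follows (a sketch in Python; the field layout `A`/`b`/`c` is the only element taken from this page):

```python
# Classic RK4 Butcher tableau (illustrative values, not from the library).
tbl = {
    "A": [[0.0, 0.0, 0.0, 0.0],
          [0.5, 0.0, 0.0, 0.0],
          [0.0, 0.5, 0.0, 0.0],
          [0.0, 0.0, 1.0, 0.0]],
    "b": [1/6, 1/3, 1/3, 1/6],  # weights (row vector)
    "c": [0.0, 0.5, 0.5, 1.0],  # nodes (column vector)
}

# Basic consistency checks: row sums of A must equal c, and b must sum to 1.
row_sums = [sum(row) for row in tbl["A"]]
assert all(abs(r - ci) < 1e-12 for r, ci in zip(row_sums, tbl["c"]))
assert abs(sum(tbl["b"]) - 1.0) < 1e-12

# The method is explicit because A is strictly lower triangular.
is_explicit = all(tbl["A"][i][j] == 0.0
                  for i in range(4) for j in range(i, 4))
print(is_explicit)  # True
```

A non-strictly-lower-triangular `A` would instead describe an implicit method.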
-
function get_name()¶
-
Get the name of the solver used to integrate the system.
- Returns
-
The name of the solver.
-
function set_name(t_name)¶
-
Set the name of the solver used to integrate the system.
- Parameters
-
t_name – The name of the solver.
-
function get_order()¶
-
Get the order of the solver used to integrate the system.
- Returns
-
The order of the solver.
-
function set_system(t_sys)¶
-
Set the system to be solved.
- Parameters
-
t_sys – The system to be solved.
-
function get_max_substeps()¶
-
Get the maximum number of substeps.
- Returns
-
The maximum number of substeps.
-
function set_max_substeps(t_max_substeps)¶
-
Set the maximum number of substeps.
- Parameters
-
t_max_substeps – The maximum number of substeps.
-
function get_max_projection_iter()¶
-
Get the maximum number of iterations in the projection process.
- Returns
-
The maximum number of iterations in the projection process.
-
function set_max_projection_iter(t_max_projection_iter)¶
-
Set the maximum number of iterations in the projection process.
- Parameters
-
t_max_projection_iter – The maximum number of projection iterations.
-
function set_projection_tolerance(t_projection_tolerance, t_projection_low_tolerance, t_projection_rcond_tolerance)¶
-
Set the tolerances for the projection step.
- Parameters
-
t_projection_tolerance – The tolerance for the projection step.
t_projection_low_tolerance – The low tolerance for the projection step.
t_projection_rcond_tolerance – The matrix conditioning tolerance for the projection step.
-
function get_projection_tolerance()¶
-
Get the tolerances for the projection step.
- Returns
-
The tolerance for the projection step, the low tolerance, and the matrix conditioning tolerance.
-
function get_projected_invs()¶
-
Get the projected invariants boolean vector.
- Returns
-
The projected invariants boolean vector.
-
function set_projected_invs(t_projected_invs)¶
-
Set the projected invariants boolean vector.
- Parameters
-
t_projected_invs – The projected invariants boolean vector.
-
function get_A()¶
-
Get the matrix \( \mathbf{A} \) (lower triangular matrix).
- Returns
-
The matrix \( \mathbf{A} \) (lower triangular matrix).
-
function set_A(t_A)¶
-
Set the matrix \( \mathbf{A} \) (lower triangular matrix).
- Parameters
-
t_A – The matrix \( \mathbf{A} \) (lower triangular matrix).
-
function get_b()¶
-
Get the weights vector \( \mathbf{b} \) (row vector).
- Returns
-
The weights vector \( \mathbf{b} \) (row vector).
-
function set_b(t_b)¶
-
Set the weights vector \( \mathbf{b} \) (row vector).
- Parameters
-
t_b – The weights vector \( \mathbf{b} \) (row vector).
-
function get_b_e()¶
-
Get the embedded weights vector \( \hat{\mathbf{b}} \) (row vector).
- Returns
-
The embedded weights vector \( \hat{\mathbf{b}} \) (row vector).
-
function set_b_e(t_b_e)¶
-
Set the embedded weights vector \( \hat{\mathbf{b}} \) (row vector).
- Parameters
-
t_b_e – The embedded weights vector \( \hat{\mathbf{b}} \) (row vector).
-
function get_c()¶
-
Get the nodes vector \( \mathbf{c} \) (column vector).
- Returns
-
The nodes vector \( \mathbf{c} \) (column vector).
-
function set_c(t_c)¶
-
Set the nodes vector \( \mathbf{c} \) (column vector).
- Parameters
-
t_c – The nodes vector \( \mathbf{c} \) (column vector).
-
function get_A_tol()¶
-
Get the absolute tolerance for adaptive step.
- Returns
-
The absolute tolerance for adaptive step.
-
function set_A_tol(t_A_tol)¶
-
Set the absolute tolerance for adaptive step.
- Parameters
-
t_A_tol – The absolute tolerance for adaptive step.
-
function get_R_tol()¶
-
Get the relative tolerance for adaptive step.
- Returns
-
The relative tolerance for adaptive step.
-
function set_R_tol(t_R_tol)¶
-
Set the relative tolerance for adaptive step.
- Parameters
-
t_R_tol – The relative tolerance for adaptive step.
-
function get_safety_factor()¶
-
Get the safety factor for adaptive step.
- Returns
-
The safety factor for adaptive step.
-
function set_safety_factor(t_safety_factor)¶
-
Set the safety factor for adaptive step.
- Parameters
-
t_safety_factor – The safety factor for adaptive step.
-
function get_factor_min()¶
-
Get the minimum safety factor for adaptive step.
- Returns
-
The minimum safety factor for adaptive step.
-
function set_factor_min(t_factor_min)¶
-
Set the minimum safety factor for adaptive step.
- Parameters
-
t_factor_min – The minimum safety factor for adaptive step.
-
function get_factor_max()¶
-
Get the maximum safety factor for adaptive step.
- Returns
-
The maximum safety factor for adaptive step.
-
function set_factor_max(t_factor_max)¶
-
Set the maximum safety factor for adaptive step.
- Parameters
-
t_factor_max – The maximum safety factor for adaptive step.
-
function set_d_t_min(t_d_t_min)¶
-
Set the minimum step for advancing.
- Parameters
-
t_d_t_min – The minimum step for advancing.
-
function get_stages()¶
-
Get the number of stages of the solver used to integrate the system.
- Returns
-
The number of stages of the solver.
-
function is_explicit()¶
-
Check if the solver is explicit.
- Returns
-
True if the solver is explicit, false otherwise.
-
function is_implicit()¶
-
Check if the solver is implicit.
- Returns
-
True if the solver is implicit, false otherwise.
-
function is_embedded()¶
-
Check if the solver is embedded.
- Returns
-
True if the solver is embedded, false otherwise.
-
function get_tableau()¶
-
Get the Butcher tableau.
- Returns
-
The matrix \( \mathbf{A} \) (lower triangular matrix), the weights vector \( \mathbf{b} \) (row vector), the embedded weights vector \( \hat{\mathbf{b}} \) (row vector), and the nodes vector \( \mathbf{c} \) (column vector).
-
function project_initial_conditions(x_t, t, varargin)¶
-
Project the system initial condition \( \mathbf{x} \) at time \( t \) on the invariants \( \mathbf{h} (\mathbf{x}, \mathbf{v}, t) = \mathbf{0} \). The constrained minimization is solved through the projection algorithm described in the project method.
- Parameters
-
x_t – The initial guess for the states \( \widetilde{\mathbf{x}} \).
t – The time \( t \) at which the states are evaluated.
x_b – [optional] Boolean vector selecting the states to be projected (default: all states are projected).
- Returns
-
The solution of the projection problem \( \mathbf{x} \).
-
function step(x_k, t_k, d_t)¶
-
Compute a step using a generic integration method for a system of the form \( \mathbf{F}(\mathbf{x}, \mathbf{x}', \mathbf{v}, t) = \mathbf{0} \). The step is based on the following formula:
\[ \mathbf{x}_{k+1}(t_{k}+\Delta t) = \mathbf{x}_k(t_{k}) + \mathcal{S}(\mathbf{x}_k(t_k), \mathbf{x}'_k(t_k), t_k, \Delta t) \]where \( \mathcal{S} \) is the generic advancing step of the solver.
- Parameters
-
x_k – States value at \( k \)-th time step \( \mathbf{x}(t_k) \).
t_k – Time step \( t_k \).
d_t – Advancing time step \( \Delta t\).
- Returns
-
The approximation of \( \mathbf{x_{k+1}}(t_{k}+\Delta t) \) and \( \mathbf{x}'_{k+1}(t_{k}+\Delta t) \).
-
function adaptive_solve(t, x_0, varargin)¶
-
Solve the system and calculate the approximate solution over the suggested mesh of time points with adaptive step size control.
- Parameters
-
t – Time mesh points \( \mathbf{t} = \left[ t_0, t_1, \ldots, t_n \right]^T \).
x_0 – Initial states value \( \mathbf{x}(t_0) \).
t_min – [optional] Minimum timestep \( \Delta t_{\mathrm{min}} \).
t_max – [optional] Maximum timestep \( \Delta t_{\mathrm{max}} \).
- Returns
-
A matrix \( \left[\mathrm{size}(\mathbf{x}) \times \mathrm{size}(\mathbf{t})\right] \) containing the approximated solution \( \mathbf{x}(t) \) and \( \mathbf{x}^\prime(t) \) of the system. Additionally, the veils \( \mathbf{v}(t) \) and invariants \( \mathbf{h}(\mathbf{x}, \mathbf{v}, t) \) are also returned.
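The overall accept/reject logic of an adaptive solve can be sketched as follows. This is a minimal step-doubling illustration, not the class's `adaptive_solve` (which uses the embedded weights \( \hat{\mathbf{b}} \) for the error estimate); the function and parameter names here are hypothetical:

```python
import math

def rk4_step(f, x, t, h):
    # Classic RK4 step for a scalar ODE x' = f(x, t).
    k1 = f(x, t)
    k2 = f(x + 0.5 * h * k1, t + 0.5 * h)
    k3 = f(x + 0.5 * h * k2, t + 0.5 * h)
    k4 = f(x + h * k3, t + h)
    return x + h / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

def adaptive_integrate(f, x0, t0, t_end, h=0.1, tol=1e-8, h_min=1e-6):
    """Integrate x' = f(x, t) with step-doubling error control:
    compare one step of size h against two steps of size h/2."""
    t, x = t0, x0
    while t < t_end:
        h = min(h, t_end - t)          # do not step past the final time
        x_big = rk4_step(f, x, t, h)
        x_half = rk4_step(f, rk4_step(f, x, t, 0.5 * h), t + 0.5 * h, 0.5 * h)
        err = abs(x_big - x_half)
        if err < tol or h <= h_min:
            t, x = t + h, x_half       # accept the step
            h *= 2.0                   # try a larger step next time
        else:
            h *= 0.5                   # reject and retry with a smaller step
    return x

x_end = adaptive_integrate(lambda x, t: -x, 1.0, 0.0, 1.0)
print(abs(x_end - math.exp(-1.0)))  # small
```

The class additionally projects the solution onto the invariants and bounds the step between the minimum and maximum timesteps.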
-
function advance(x_k, t_k, d_t)¶
-
Advance using a generic integration method for a system of the form \( \mathbf{F}(\mathbf{x}, \mathbf{x}', \mathbf{v}, t) = \mathbf{0} \). The step is based on the following formula:
\[ \mathbf{x}_{k+1}(t_{k}+\Delta t) = \mathbf{x}_k(t_{k}) + \mathcal{S}(\mathbf{x}_k(t_k), \mathbf{x}'_k(t_k), t_k, \Delta t) \]where \( \mathcal{S} \) is the generic advancing step of the solver. In the advancing step, the system solution is also projected on the manifold \( \mathbf{h}(\mathbf{x}, \mathbf{v}, t) = \mathbf{0} \). Substepping is also used to ensure that the solution is accurate.
- Parameters
-
x_k – States value at \( k \)-th time step \( \mathbf{x}(t_k) \).
t_k – Time step \( t_k \).
d_t – Advancing time step \( \Delta t\).
- Returns
-
The approximation of \( \mathbf{x_{k+1}}(t_{k}+\Delta t) \), \( \mathbf{x}'_{k+1}(t_{k}+\Delta t) \) and the suggested time step for the next advancing step \( \Delta t_{k+1} \).
-
function check_tableau(tbl)¶
-
Check Butcher tableau consistency for an explicit Runge-Kutta method.
- Parameters
-
tbl.A – Matrix \( \mathbf{A} \).
tbl.b – Weights vector \( \mathbf{b} \).
tbl.b_e – [optional] Embedded weights vector \( \hat{\mathbf{b}} \) (row vector).
tbl.c – Nodes vector \( \mathbf{c} \).
- Returns
-
True if the Butcher tableau is consistent, false otherwise.
-
function estimate_step(x_h, x_l, d_t)¶
-
Compute the adaptive time step for the next advancing step according to the error control method. The error control method used is the local truncation error method, which is based on the following formula:
\[ e = \sqrt{\dfrac{1}{n} \displaystyle\sum_{i=1}^{n}\left(\dfrac{x_i - \hat{x}_i}{sc_i} \right)^2} \]where \( \mathbf{x} \) is the approximation of the states computed with the higher-order method of order \( p \), and \( \hat{\mathbf{x}} \) is the approximation of the states computed with the lower-order method of order \( \hat{p} \). To compute the suggested time step for the next advancing step \( \Delta t_{k+1} \), the error is compared to \( 1 \) in order to find an optimal step size. From the error behaviour \( e \approx Ch^{q+1} \) and from \( 1 \approx Ch_{opt}^{q+1} \) (where \( q = \min(p,\hat{p}) \)) the optimal step size is obtained as:
\[ h_{opt} = h \left( \dfrac{1}{e} \right)^{\frac{1}{q+1}} \]We multiply the previous equation by a safety factor \( f \), usually \( f = 0.8 \), \( 0.9 \), \( (0.25)^{1/(q+1)} \), or \( (0.38)^{1/(q+1)} \), so that the error will be acceptable the next time with high probability. Further, \( h \) is not allowed to increase nor to decrease too fast. So we put:
\[ h_{new} = h \min \left( f_{max}, \max \left( f_{min}, f \left( \dfrac{1}{e} \right)^{\frac{1}{q+1}} \right) \right) \]for the new step size. Then, if \( e \leq 1 \), the computed step is accepted, the solution is advanced to \( \mathbf{x} \), and a new step is tried with \( h_{new} \) as step size. Else, the step is rejected and the computations are repeated with the new step size \( h_{new} \). Typically, \( f \) is set in the interval \( [0.8, 0.9] \), \( f_{max} \) is set in the interval \( [1.5, 5] \), and \( f_{min} \) is set in the interval \( [0.1, 0.2] \).
- Parameters
-
x_h – Approximation of the states at \( k+1 \)-th time step \( \mathbf{x_{k+1}}(t_{k}+\Delta t) \) with higher order method.
x_l – Approximation of the states at \( k+1 \)-th time step \( \mathbf{x_{k+1}}(t_{k}+\Delta t) \) with lower order method.
d_t – Actual advancing time step \( \Delta t\).
- Returns
-
The suggested time step for the next advancing step \( \Delta t_{k+1} \).
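The step-size formula above can be sketched directly (the default values of \( f \), \( f_{min} \), \( f_{max} \), and \( q \) here are illustrative, not the class's):

```python
def new_step_size(e, h, f=0.9, f_min=0.2, f_max=1.5, q=4):
    """Suggested step size h_new = h * min(f_max, max(f_min, f*(1/e)^(1/(q+1)))).

    e is the scaled error estimate, h the current step; f, f_min, f_max and q
    are illustrative defaults, not the library's values.
    """
    return h * min(f_max, max(f_min, f * (1.0 / e) ** (1.0 / (q + 1))))

# e <= 1: the step is accepted and the next step grows (capped by f_max);
# e > 1: the step is rejected and retried with the smaller suggested step.
print(new_step_size(0.1, 0.01))  # grows
print(new_step_size(4.0, 0.01))  # shrinks
```

Clamping between \( f_{min} \) and \( f_{max} \) keeps a single bad error estimate from changing the step size too abruptly.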
-
function explicit_K(x_k, t_k, d_t)¶
-
Solve the \( i \)-th explicit step of the system and find the \( \mathbf{K}_i \) variable:
\[ \mathbf{F}_i\left(\mathbf{x}_k + \Delta t \displaystyle\sum_{j=1}^{i-1} a_{ij} \mathbf{K}_j, \, \mathbf{K}_i, \, t_k + c_i \Delta t \right) = \mathbf{0} \]by Newton’s method.
- Parameters
-
x_k – States value at \( k \)-th time step \( \mathbf{x}(t_k) \).
K – Initial guess for the \( \mathbf{K} \) variable to be found.
t_k – Time step \( t_k \).
d_t – Advancing time step \( \Delta t\).
- Returns
-
The \( \mathbf{K} \) variables of the system to be solved and the error control flag.
-
function explicit_step(x_k, t_k, d_t)¶
-
Compute an integration step using the explicit Runge-Kutta method for a system of the form \( \mathbf{F}(\mathbf{x}, \mathbf{x}', \mathbf{v}, t) = \mathbf{0} \).
Solution Algorithm
Consider a Runge-Kutta method, written for a system of the form \( \mathbf{x}' = \mathbf{f}(\mathbf{x}, \mathbf{v}, t) \):
\[\begin{split} \begin{array}{l} \mathbf{K}_i = \mathbf{f} \left( \mathbf{x}_k + \Delta t \displaystyle\sum_{j=1}^{s} a_{ij} \mathbf{K}_j, \, t_k + c_i \Delta t \right), \qquad i = 1, 2, \ldots, s \\ \mathbf{x}_{k+1} = \mathbf{x}_k + \Delta t \displaystyle\sum_{j=1}^s b_j \mathbf{K}_j \, , \end{array} \end{split}\]Because of the nature of the matrix \( \mathbf{A} \) (lower triangular) the \( s \) stages for a generic explicit Runge-Kutta method take the form:
\[ \mathbf{K}_i = \mathbf{f} \left( \mathbf{x}_k + \Delta t \displaystyle\sum_{j=1}^{i-1} a_{ij} \mathbf{K}_j, \, t_k + c_i \Delta t \right), \qquad i = 1, 2, \ldots, s. \]Then the explicit Runge-Kutta method for an implicit system of the form \(\mathbf{F}(\mathbf{x}, \mathbf{x}', t) = \mathbf{0} \) can be written as:
\[\begin{split} \begin{array}{l} \mathbf{F}_i \left( \mathbf{x}_k + \Delta t \displaystyle\sum_{j=1}^{i-1} a_{ij} \mathbf{K}_j, \, \mathbf{K}_i, \, t_k + c_i \Delta t \right) = \mathbf{0}, \qquad i = 1, 2, \ldots, s \\ \mathbf{x}_{k+1} = \mathbf{x}_k + \displaystyle\sum_{j=1}^s b_j \mathbf{K}_j. \end{array} \end{split}\]It is important to notice that the system of \( s \) equations \( \mathbf{F}_i \) is a triangular system (which may be non-linear in the \( \mathbf{K}_i \) variables), so it can be solved using forward substitution and the solution of the system is the vector \( \mathbf{K} \). Thus, the final system to be solved is the following:
\[\begin{split} \left\{\begin{array}{l} \mathbf{F}_1 \left( \mathbf{x}_k, \, \mathbf{K}_1, \, t_k + c_1 \Delta t \right) = \mathbf{0} \\ \mathbf{F}_2 \left( \mathbf{x}_k + \Delta t \, a_{21} \mathbf{K}_1, \, \mathbf{K}_2, \, t_k + c_2 \Delta t \right) = \mathbf{0} \\ ~~ \vdots \\ \mathbf{F}_s \left( \mathbf{x}_k + \Delta t \displaystyle\sum_{j=1}^{s-1} a_{sj} \mathbf{K}_j, \, \mathbf{K}_s, \, t_k + c_s \Delta t \right) = \mathbf{0} \end{array}\right. \end{split}\]The \( \mathbf{K}_i \) variables are computed using Newton’s method.
Note
Another approach is to directly solve the whole system of equations by Newton’s method. In other words, the system of equations is solved iteratively by computing the Jacobian matrices of the system and using them to compute the solution. This approach is used in the implicit Runge-Kutta method. For this reason, a Butcher tableau relative to an explicit Runge-Kutta method can also be used in the
ImplicitRungeKutta
class. The suggested time step for the next advancing step \( \Delta t_{k+1} \) is the same as the input time step \( \Delta t \), since in the explicit Runge-Kutta method the time step is not modified through any error control method.
- Parameters
-
x_k – States value at \( k \)-th time step \( \mathbf{x}(t_k) \).
t_k – Time step \( t_k \).
d_t – Advancing time step \( \Delta t\).
- Returns
-
The approximation of the states at \( k+1 \)-th time step \( \mathbf{x_{k+1}}(t_{k}+\Delta t) \), the suggested time step for the next advancing step \( \Delta t_{k+1} \), and the error control flag.
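The triangular stage recursion above can be sketched for the simpler explicit form \( \mathbf{x}' = \mathbf{f}(\mathbf{x}, t) \), where each \( \mathbf{K}_i \) is available in closed form and no Newton iteration is needed (the class itself handles the implicit form \( \mathbf{F} = \mathbf{0} \); this simplification is an assumption):

```python
import math

def explicit_rk_step(f, x_k, t_k, d_t, A, b, c):
    """One explicit Runge-Kutta step for a scalar ODE x' = f(x, t).

    Stages: K_i = f(x_k + dt * sum_{j<i} a_ij * K_j, t_k + c_i * dt);
    update: x_{k+1} = x_k + dt * sum_j b_j * K_j.
    """
    s = len(b)
    K = []
    for i in range(s):
        # Only previously computed stages enter: A is strictly lower triangular.
        x_i = x_k + d_t * sum(A[i][j] * K[j] for j in range(i))
        K.append(f(x_i, t_k + c[i] * d_t))
    return x_k + d_t * sum(b[j] * K[j] for j in range(s))

# Classic RK4 tableau (illustrative).
A = [[0, 0, 0, 0], [0.5, 0, 0, 0], [0, 0.5, 0, 0], [0, 0, 1, 0]]
b = [1/6, 1/3, 1/3, 1/6]
c = [0, 0.5, 0.5, 1]

# One step on x' = -x from x(0) = 1; the exact solution is exp(-t).
x1 = explicit_rk_step(lambda x, t: -x, 1.0, 0.0, 0.1, A, b, c)
print(abs(x1 - math.exp(-0.1)))  # small local error, O(dt^5)
```

In the implicit-system form, each stage instead becomes a (possibly nonlinear) root-finding problem in \( \mathbf{K}_i \), solved by Newton's method as described above.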
-
function implicit_jacobian(x_k, K_in, t_k, d_t)¶
-
Compute the Jacobian of the system of equations:
\[ \mathbf{F}_i\left(\mathbf{x}_k + \Delta t \displaystyle\sum_{j=1}^s a_{ij} \mathbf{K}_j, \, \mathbf{K}_i, \, t_k + c_i \Delta t \right) = \mathbf{0} \]to be solved in the \( \mathbf{K} \) variable:
\[ \dfrac{\partial \mathbf{F}_i}{\partial \mathbf{K}_i} \left( \mathbf{x}_k + \Delta t \displaystyle\sum_{j=1}^s a_{ij} \mathbf{K}_j, \, \mathbf{K}_i, \, t_k + c_i \Delta t \right) \]
- Parameters
-
i – Index of the step to be computed.
x_i – States value at \( i \)-th node.
K – Variable \( \mathbf{K} \) of the system to be solved.
t_k – Time step \( t_k \).
d_t – Advancing time step \( \Delta t\).
- Returns
-
The Jacobian of the system of equations to be solved.
-
function implicit_residual(x_k, K_in, t_k, d_t)¶
-
Compute the residual of the system to be solved:
\[ \mathbf{F}_i\left(\mathbf{x}_k + \Delta t \displaystyle\sum_{j=1}^{s} a_{ij} \mathbf{K}_j, \, \mathbf{K}_i, \, t_k + c_i \Delta t \right) = \mathbf{0}. \]
- Parameters
-
x_k – States value at \( k \)-th time step \( \mathbf{x}(t_k) \).
K – Variable \( \mathbf{K} \) of the system to be solved.
t_k – Time step \( t_k \).
d_t – Advancing time step \( \Delta t\).
- Returns
-
The residual of the system to be solved.
-
function implicit_step(x_k, t_k, d_t)¶
-
Compute an integration step using the implicit Runge-Kutta method for a system of the form \( \mathbf{F}(\mathbf{x}, \mathbf{x}', t) = \mathbf{0} \).
Solution Algorithm
Consider a Runge-Kutta method, written for a system of the form \( \mathbf{x}' = \mathbf{f}(\mathbf{x}, \mathbf{v}, t) \):
\[\begin{split} \begin{array}{l} \mathbf{K}_i = \mathbf{f} \left( \mathbf{x}_k + \Delta t \displaystyle\sum_{j=1}^{s} a_{ij} \mathbf{K}_j, \, t_k + c_i \Delta t \right), \qquad i = 1, 2, \ldots, s \\ \mathbf{x}_{k+1} = \mathbf{x}_k + \Delta t \displaystyle\sum_{j=1}^s b_j \mathbf{K}_j \, , \end{array} \end{split}\]Then the implicit Runge-Kutta method for an implicit system of the form \(\mathbf{F}(\mathbf{x}, \mathbf{x}', t) = \mathbf{0} \) can be written as:
\[\begin{split} \begin{array}{l} \mathbf{F}_i \left( \mathbf{x}_k + \Delta t \displaystyle\sum_{j=1}^s a_{ij} \mathbf{K}_j, \, \mathbf{K}_i, \, t_k + c_i \Delta t \right) = \mathbf{0}, \qquad i = 1, 2, \ldots, s \\ \mathbf{x}_{k+1} = \mathbf{x}_k + \displaystyle\sum_{j=1}^s b_j \mathbf{K}_j. \end{array} \end{split}\]Thus, the final system to be solved is the following:
\[\begin{split} \left\{\begin{array}{l} \mathbf{F}_1 \left( \mathbf{x}_k + \Delta t \displaystyle\sum_{j=1}^s a_{1j} \mathbf{K}_j, \, \mathbf{K}_1, \, t_k + c_1 \Delta t \right) = \mathbf{0} \\ \mathbf{F}_2 \left( \mathbf{x}_k + \Delta t \displaystyle\sum_{j=1}^s a_{2j} \mathbf{K}_j, \, \mathbf{K}_2, \, t_k + c_2 \Delta t \right) = \mathbf{0} \\ ~~ \vdots \\ \mathbf{F}_s \left( \mathbf{x}_k + \Delta t \displaystyle\sum_{j=1}^s a_{sj} \mathbf{K}_j, \, \mathbf{K}_s, \, t_k + c_s \Delta t \right) = \mathbf{0} \end{array}\right. \end{split}\]The \( \mathbf{K} \) variables are computed using Newton’s method.
The suggested time step for the next advancing step \( \Delta t_{k+1} \) is the same as the input time step \( \Delta t \), since in the implicit Runge-Kutta method the time step is not modified through any error control method.
- Parameters
-
x_k – States value at \( k \)-th time step \( \mathbf{x}(t_k) \).
t_k – Time step \( t_k \).
d_t – Advancing time step \( \Delta t\).
- Returns
-
The approximation of the states at \( k+1 \)-th time step \( \mathbf{x_{k+1}}(t_{k}+\Delta t) \), the suggested time step for the next advancing step \( \Delta t_{k+1} \), and the error control flag.
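For a one-stage method the Newton iteration on the stage equations can be written out explicitly. The sketch below uses implicit Euler (\( a_{11} = b_1 = c_1 = 1 \)) on a scalar system \( F(x, x', t) = 0 \); the function and its derivative arguments are illustrative, not the class's interface:

```python
def implicit_euler_step(x_k, t_k, d_t, F, dF_dx, dF_dxp, tol=1e-12, it_max=20):
    """One-stage implicit RK (implicit Euler) for F(x, x', t) = 0:
    solve F(x_k + dt*K, K, t_k + dt) = 0 for the stage K by Newton's method.

    dF_dx and dF_dxp are the partial derivatives of F w.r.t. x and x'.
    """
    K = 0.0  # initial guess for the stage derivative
    for _ in range(it_max):
        x_i = x_k + d_t * K
        r = F(x_i, K, t_k + d_t)          # residual of the stage equation
        if abs(r) < tol:
            break
        # Chain rule: d r / d K = dF/dx * dt + dF/dx'
        K -= r / (dF_dx(x_i, K) * d_t + dF_dxp(x_i, K))
    return x_k + d_t * K                  # x_{k+1} = x_k + dt * b_1 * K

# F(x, x', t) = x' + x = 0, i.e. x' = -x. Exact step: x_{k+1} = x_k / (1 + dt).
x1 = implicit_euler_step(1.0, 0.0, 0.1,
                         F=lambda x, xp, t: xp + x,
                         dF_dx=lambda x, xp: 1.0,
                         dF_dxp=lambda x, xp: 1.0)
print(x1)  # ≈ 1/1.1
```

With \( s > 1 \) stages all \( \mathbf{K}_i \) are coupled, so the Newton step solves one larger linear system built from the Jacobian blocks described in `implicit_jacobian`.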
-
function project(x_t, t, varargin)¶
-
Project the system solution \( \mathbf{x} \) on the invariants \( \mathbf{h} (\mathbf{x}, \mathbf{v}, t) = \mathbf{0} \). The constrained minimization problem to be solved is:
\[ \textrm{minimize} \quad \dfrac{1}{2}\left(\mathbf{x} - \widetilde{\mathbf{x}}\right)^2 \quad \textrm{subject to} \quad \mathbf{h}(\mathbf{x}, \mathbf{v}, t) = \mathbf{0}. \]
Solution Algorithm
Given the Lagrangian of the minimization problem of the form:
\[ \mathcal{L}(\mathbf{x}, \boldsymbol{\lambda}) = \frac{1}{2}\left(\mathbf{x} - \widetilde{\mathbf{x}}\right)^2 + \boldsymbol{\lambda} \cdot \mathbf{h}(\mathbf{x}, \mathbf{v}, t). \]The solution of the problem is obtained by solving the following:
\[\begin{split} \left\{\begin{array}{l} \mathbf{x} + \mathbf{Jh}_\mathbf{x}^T \boldsymbol{\lambda} = \widetilde{\mathbf{x}} \\[0.5em] \mathbf{h}(\mathbf{x}, \mathbf{v}, t) = \mathbf{0} \end{array}\right. \end{split}\]Using the first-order Taylor expansion of the invariants:
\[ \mathbf{h}(\mathbf{x}, \mathbf{v}, t) + \mathbf{Jh}_\mathbf{x} \delta\mathbf{x} + \mathcal{O}\left(\left\| \delta\mathbf{x} \right\|^2\right) = \mathbf{0} \]define the iterative method as:
\[ \mathbf{x} = \widetilde{\mathbf{x}} + \delta\mathbf{x}. \]Notice that \( \delta\mathbf{x} \) is the solution of the linear system:
\[\begin{split} \begin{bmatrix} \mathbf{I} & \mathbf{Jh}_\mathbf{x}^T \\[0.5em] \mathbf{Jh}_\mathbf{x} & \mathbf{0} \end{bmatrix} \begin{bmatrix} \delta\mathbf{x} \\[0.5em] \boldsymbol{\lambda} \end{bmatrix} = \begin{bmatrix} \widetilde{\mathbf{x}} - \mathbf{x} \\[0.5em] -\mathbf{h}(\mathbf{x}, \mathbf{v}, t) \end{bmatrix} \end{split}\]where \( \mathbf{Jh}_\mathbf{x} \) is the Jacobian of the invariants with respect to the states \( \mathbf{x} \).
- Parameters
-
x_t – The initial guess for the states \( \widetilde{\mathbf{x}} \).
t – The time \( t \) at which the states are evaluated.
x_b – [optional] Boolean vector selecting the states to be projected (default: all states are projected).
- Returns
-
The solution of the projection problem \( \mathbf{x} \).
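For a single invariant the linear system above reduces to a closed form, which makes the iteration easy to sketch. The example below projects a point onto the unit circle \( h(\mathbf{x}) = x_1^2 + x_2^2 - 1 = 0 \) (the invariant and the scalar elimination are illustrative assumptions; the class solves the full saddle-point system):

```python
def project_to_circle(x_tilde, it_max=20, tol=1e-12):
    """Project x_tilde onto the invariant h(x) = x1^2 + x2^2 - 1 = 0.

    Each iteration solves [I J^T; J 0][dx; lam] = [x_tilde - x; -h];
    with a single constraint, eliminating dx = r - J^T*lam gives lam
    in closed form.
    """
    x = list(x_tilde)
    for _ in range(it_max):
        h = x[0] ** 2 + x[1] ** 2 - 1.0
        if abs(h) < tol:
            break
        J = [2.0 * x[0], 2.0 * x[1]]                  # Jacobian Jh_x (1 x 2)
        r = [x_tilde[0] - x[0], x_tilde[1] - x[1]]    # top right-hand side
        JJt = J[0] ** 2 + J[1] ** 2
        lam = (J[0] * r[0] + J[1] * r[1] + h) / JJt   # from J*dx = -h
        x = [x[k] + r[k] - J[k] * lam for k in range(2)]
    return x

x = project_to_circle((2.0, 0.0))
print(x)  # close to [1.0, 0.0], the nearest point on the unit circle
```

The iteration converges quickly here because the linearization error of the constraint shrinks quadratically near the manifold.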
-
function set_tableau(tbl)¶
-
Set the Butcher tableau.
- Parameters
-
tbl.A – Matrix \( \mathbf{A} \) (lower triangular matrix).
tbl.b – Weights vector \( \mathbf{b} \) (row vector).
tbl.b_e – [optional] Embedded weights vector \( \hat{\mathbf{b}} \) (row vector).
tbl.c – Nodes vector \( \mathbf{c} \) (column vector).
-
function solve(t, x_0)¶
-
Solve the system and calculate the approximate solution over the mesh of time points.
- Parameters
-
t – Time mesh points \( \mathbf{t} = \left[ t_0, t_1, \ldots, t_n \right]^T \).
x_0 – Initial states value \( \mathbf{x}(t_0) \).
- Returns
-
A matrix \( \left[\mathrm{size}(\mathbf{x}) \times \mathrm{size}(\mathbf{t})\right] \) containing the approximated solution \( \mathbf{x}(t) \) and \( \mathbf{x}^\prime(t) \) of the system. Additionally, the veils \( \mathbf{v}(t) \) and invariants \( \mathbf{h}(\mathbf{x}, \mathbf{v}, t) \) are also returned.
-
function tableau_order(A, b, c)¶
-
Check the order of a Runge-Kutta tableau according to the routine taken from the reference: A family of embedded Runge-Kutta formulae, J. R. Dormand and P. J. Prince, Journal of Computational and Applied Mathematics, volume 6(1), 1980.
- Parameters
-
A – Matrix \( \mathbf{A} \).
b – Weights vector \( \mathbf{b} \) (row vector).
c – Nodes vector \( \mathbf{c} \) (column vector).
- Returns
-
The order of the Runge-Kutta tableau and an error message if the order is not correct.
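Such a routine works by testing the algebraic order conditions on \( \mathbf{A} \), \( \mathbf{b} \), and \( \mathbf{c} \). The sketch below checks only the conditions up to order 3, as an illustration of the idea rather than the full Dormand-Prince routine:

```python
def satisfies_order_3(A, b, c, tol=1e-12):
    """Check the Runge-Kutta order conditions up to order 3:
    sum b_i = 1, sum b_i c_i = 1/2, sum b_i c_i^2 = 1/3,
    sum_{i,j} b_i a_ij c_j = 1/6."""
    s = len(b)
    cond1 = abs(sum(b) - 1.0) < tol
    cond2 = abs(sum(b[i] * c[i] for i in range(s)) - 0.5) < tol
    cond3 = abs(sum(b[i] * c[i] ** 2 for i in range(s)) - 1.0 / 3.0) < tol
    cond4 = abs(sum(b[i] * A[i][j] * c[j]
                    for i in range(s) for j in range(s)) - 1.0 / 6.0) < tol
    return cond1 and cond2 and cond3 and cond4

# Classic RK4 is of order 4, hence satisfies all conditions up to order 3.
A = [[0, 0, 0, 0], [0.5, 0, 0, 0], [0, 0.5, 0, 0], [0, 0, 1, 0]]
print(satisfies_order_3(A, [1/6, 1/3, 1/3, 1/6], [0, 0.5, 0.5, 1]))  # True
```

The number of conditions grows rapidly with the order, which is why a systematic routine such as the referenced one is used instead of hand-checking.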
-