CEPS  24.01
Cardiac ElectroPhysiology Simulator
Structure of CEPS

CEPS follows a fairly standard structure for solving PDEs. The main program instantiates an InputParameters class, followed by the Geometry built from the meshes given in the input file. The Geometry is partitioned when instantiated. Then, depending on the user's request, a PDE problem class, or a convergence study for this class, is created and run.

Now we will list the main classes of CEPS, with a very rough description of how they work.

Parameters

InputParameters is the first class created by the CEPS program. Each PDE problem is linked to this instance. It is essentially a convenient wrapper around a string-to-string map built from the input file.

There should be only one instance of this class in the program, with all structures referring to it. Most of the main classes have a setupWithParameters method to set their attributes from the input file.

Geometry

This class encompasses the definition of all geometrical nodes (GeomNode) and geometrical cells (GeomCell) grouped in a Mesh, for each dimension. It also contains the partition of the computational domain for parallel computations.

Partitions are computed with PTScotch, which distributes the nodes between processors. The nodes are then renumbered to have a contiguous numbering on each processor, with an offset for each CPU: all nodes on CPU n2 have greater IDs than those on CPU n1 < n2. This new index is accessed with GeomNode::getGlobalIndex.

Each CPU has a list of owned nodes and halo nodes, the latter being the nodes owned by other CPUs that lie one edge away from owned nodes. Cells are then distributed with the following heuristic: a cell is owned by the CPU that owns its node with the lowest global index. A renumbering similar to that of the nodes is then applied to the cells. The index is accessed with GeomCell::getGlobalIndex.
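The offset-based renumbering and the cell-ownership heuristic can be sketched as follows. This is an illustrative standalone snippet, not CEPS code: the function names and the flat-vector representation are invented for the example.

```cpp
#include <algorithm>
#include <cassert>
#include <numeric>
#include <vector>

// Each CPU owns nOwnedPerCpu[c] nodes; global IDs are contiguous per CPU,
// offset by the total node count of all lower-ranked CPUs (a prefix sum).
std::vector<int> computeOffsets(const std::vector<int>& nOwnedPerCpu)
{
    std::vector<int> offsets(nOwnedPerCpu.size(), 0);
    if (nOwnedPerCpu.size() > 1)
        std::partial_sum(nOwnedPerCpu.begin(), nOwnedPerCpu.end() - 1,
                         offsets.begin() + 1);
    return offsets;
}

// Cell-ownership heuristic from the text: a cell belongs to the CPU that
// owns the node with the lowest global index among the cell's nodes.
int cellOwner(const std::vector<int>& cellNodeGlobalIds,
              const std::vector<int>& offsets,
              const std::vector<int>& nOwnedPerCpu)
{
    int minId = *std::min_element(cellNodeGlobalIds.begin(),
                                  cellNodeGlobalIds.end());
    for (std::size_t c = 0; c < offsets.size(); ++c)
        if (minId < offsets[c] + nOwnedPerCpu[c])
            return static_cast<int>(c);
    return -1; // invalid global index
}
```

For example, with 3 CPUs owning 3, 4 and 2 nodes, the offsets are 0, 3 and 7, and a cell whose lowest node ID is 7 is owned by CPU 2.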

PDE Problem classes (e.g. CardiacProblem, BilayerAtriaProblem)

The XXXPdeProblem classes gather all the elements needed to perform a simulation. They are initialized from a text input file. With the exception of Geometry and InputParameters, they contain the structures listed on this page.

All PDE problems derive from either AbstractStaticPdeProblem or AbstractTimedPdeProblem, both of which derive from the AbstractPdeProblem class. These abstract classes contain the elements and basic methods common to all PDE problems.

Here is a list of the main methods of these classes, in particular those that can or must be overloaded to define a new problem.

AbstractPdeProblem::defineUnknowns
    Creates the unknowns of the problem. These can be spatial unknowns or unknowns that exist outside of the geometry (called 0D unknowns). Interactions between unknowns must also be set here. See also AbstractPdeProblem::addUnknown, AbstractPdeProblem::addZeroDUnknown, AbstractPdeProblem::addUnknownInteraction.

AbstractPdeProblem::defineSourceTerms
AbstractPdeProblem::defineBoundaryConditions
    Add source terms and boundary conditions to the SourceTermManager and BoundaryConditionManager held by the problem. See the syntax to add boundary conditions and source terms.

AbstractStaticPdeProblem::defineInitialGuess
AbstractTimedPdeProblem::defineInitialCondition
    Set the link to the functors used to define the initial condition (or initial guess for static problems).

AbstractPdeProblem::defineAnalyticSolution
    Same, for problems that have an analytic solution.

AbstractPdeProblem::run
    The main event. This method defines how the solution is computed. It generally instantiates a XXXPdeSolver and calls its solve method.
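To illustrate how these hooks fit together, here is a minimal standalone sketch. The base class internals below are invented stand-ins (only the method names follow the list above; the real CEPS base classes hold much more):

```cpp
#include <cassert>
#include <iostream>
#include <string>
#include <vector>

// Invented stand-in for the real CEPS base class, to show which
// hooks a new timed problem overrides.
struct AbstractTimedPdeProblem
{
    virtual ~AbstractTimedPdeProblem() = default;
    virtual void defineUnknowns() = 0;
    virtual void defineSourceTerms() {}
    virtual void defineBoundaryConditions() {}
    virtual void defineInitialCondition() = 0;
    virtual void run() = 0;

    std::vector<std::string> m_unknowns;
    void addUnknown(const std::string& name) { m_unknowns.push_back(name); }
};

// A hypothetical problem with one spatial unknown.
struct MyHeatProblem : AbstractTimedPdeProblem
{
    void defineUnknowns() override { addUnknown("temperature"); }
    void defineInitialCondition() override { /* link functors here */ }
    void run() override
    {
        // The real method would instantiate a XXXPdeSolver and call solve().
        std::cout << "solving for " << m_unknowns.size() << " unknown(s)\n";
    }
};
```

The framework calls the define* hooks during initialization, then run() performs the actual computation.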

To better understand how a problem is organized and how to create a new one, have a look at this page which explains how the PseudoWaveFrontPdeProblem is built.

FunctionDictionary, SAFunc

The dictionary is a class that stores scalar, vectorial and tensorial functions of time, space or both. These functions are of type SAFunc and take a single argument of structured type CepsStandardArgs. An SAFunc picks from the standard args whatever it needs to evaluate the function (time, location, cell ID, unknown ID, etc.).
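The single-structured-argument pattern can be sketched as follows. The field names and the factory function are invented for this illustration, not the actual CEPS definitions:

```cpp
#include <array>
#include <cassert>
#include <cmath>
#include <functional>

// Hypothetical sketch: the struct bundles everything a function might
// need, and each function reads only the fields it uses.
struct CepsStandardArgs
{
    double t = 0.0;                      // time
    std::array<double, 3> x = {0, 0, 0}; // location
    int cellId = -1;
    int unknownId = -1;
};

using ScalarSAFunc = std::function<double(const CepsStandardArgs&)>;

// A space-only function: it ignores t, cellId and unknownId.
ScalarSAFunc makeGaussianInSpace(double sigma)
{
    return [sigma](const CepsStandardArgs& a) {
        double r2 = a.x[0] * a.x[0] + a.x[1] * a.x[1] + a.x[2] * a.x[2];
        return std::exp(-r2 / (2.0 * sigma * sigma));
    };
}
```

The benefit of this design is a uniform signature: the dictionary can store and call any function the same way, whatever subset of arguments it actually depends on.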

See this section for how some functions can be added to the dictionary with a single input string. However, you will likely need to define your own functions; in that case, see how it is done for the PseudoWaveFrontPdeProblem here.

BoundaryConditionManager, StimulationManager, Field

These classes manage data about regular boundary conditions (Dirichlet, Neumann, Robin) and stimulation fields. Each BoundaryCondition and SourceTerm is derived from a ScalarField, which is a wrapper around ScalarSAFunc. The Field class adds to SAFunc the possibility of precomputing the values of the function (useful for functions that do not change in time) and/or its support (useful for localized stimulations).

See this page for information on how to add boundary conditions and source terms to the managers.

Spatial discretization, degrees of freedom (AbstractDiscretization, DegreeOfFreedom)

The AbstractDiscretization class gathers the elements that are common to Finite Elements, Finite Differences and Finite Volumes. Specifically, it is responsible for building the data structures that hold the degrees of freedom of the PDE problem. Since a discretization node (or cell) can hold several degrees of freedom, and not necessarily the same number everywhere, these data structures are quite intricate, especially because of parallelism.

There are three ways to access the degrees of freedom (dofs):

1- the dofs tree: it sorts dofs by their location type (node, cell, 0D), then by the ID of the element on which they are defined (unknown ID for 0D dofs).
2- the distributed dofs: dofs are split into owned dofs and halo dofs.
3- the dofs per unknown maps: you can get all the dofs of a specific unknown with these maps.
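A hypothetical sketch of these three views, with the container layout and names invented for illustration (the real CEPS structures differ):

```cpp
#include <cassert>
#include <map>
#include <vector>

enum class DofLocation { Node, Cell, ZeroD };

// The three access paths described above, kept consistent by a single
// registration function.
struct DofStructures
{
    // 1) dof tree: location type -> element ID (unknown ID for 0D dofs)
    //    -> dof indices defined on that element
    std::map<DofLocation, std::map<int, std::vector<int>>> tree;
    // 2) distributed dofs: owned vs halo
    std::vector<int> ownedDofs, haloDofs;
    // 3) dofs per unknown
    std::map<int, std::vector<int>> dofsPerUnknown;
};

void addDof(DofStructures& s, int dof, DofLocation loc, int elemId,
            int unknownId, bool owned)
{
    s.tree[loc][elemId].push_back(dof);
    (owned ? s.ownedDofs : s.haloDofs).push_back(dof);
    s.dofsPerUnknown[unknownId].push_back(dof);
}
```

The point of keeping three redundant views is lookup speed: assemblers iterate per element (the tree), linear algebra iterates per partition (owned/halo), and post-processing iterates per unknown.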

Finite Elements

The FiniteElements class, derived from AbstractDiscretization, gathers data on all non-boundary and boundary elements of each dimension (0D, 1D, 2D, 3D). In particular, it contains the data of the nodes added for elements whose nodal points do not coincide with the geometrical nodes (e.g. P2 elements, which have additional midpoint nodes). Storing these nodes in this class avoids duplicating nodes between elements that share them.

The class also contains instances of GaussianQuadratures, one for each dimension, which are used during the assembly of the linear system.

It also contains ReferenceFE instances (reference finite elements), one for each dimension and type of finite element. These reference elements contain data on how to place the nodes in the element, and the definition of the basis functions.

Finally, the finite elements themselves, derived from the abstract class FEBase, are essentially collections of pointers to the geometrical elements they are built from (with data on nodes and Jacobians) and to their reference finite element.

Distributed Vectors and Matrices

When running in parallel, the degrees of freedom and the geometrical elements are distributed among the processors, so that each CPU deals with the data on its own subdomain. A one-cell-wide overlapping region (called the halo in CEPS) is added to each subdomain for the links and communication between CPUs.

The classes DistributedVector, DistributedMatrix and DistributedHaloVector manage the partition of arrays of real scalars. For other types, we use the DistributedInfos class. Instances of these classes can be used without CPU-specific operations. DistributedHaloVector additionally contains the data from the halo regions.

Additionally, within an instance of AbstractPdeProblem (or FiniteElements alone), one can automatically generate distributed arrays by calling getDistributedVector(), getDistributedHaloVector() or getDistributedMatrix(). These methods invoke a DistributedFactory, located in the FiniteElements class, which creates arrays that follow the partition of the elements and take into account the number of degrees of freedom of the problem.

PDE Solvers (e.g. MonodomainSolver, LaplacianSolver)

This class is in charge of the main loop solving the PDE, which is entered through the main public method AbstractPdeSolver::solve. Similarly to PDE problems, there is an intermediate abstract layer for solvers of static PDEs and of PDEs with time.

The class is linked to all the other elements (geometry, problem, spatial discretization, assemblers, output writer...) and contains the arrays storing the values of the solution and necessary data such as ionic currents for cardiac solvers. These arrays may contain data at several time steps for multistep methods (e.g. Crank-Nicolson) or methods with substeps (e.g. Runge-Kutta).

The private method setupLinearSystem is called once (or as few times as possible for multistep methods) and is intended to build the left-hand side matrix, as well as the parts of the solving process that do not change during the simulation (e.g. computing mass and/or stiffness matrices). The private method updateLinearSystem is called at each iteration and gathers the changes in the matrix and in the right-hand side of the system.
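The setup-once / update-each-step pattern can be sketched as follows; the class and counters below are invented for illustration, only the two method names come from the text:

```cpp
#include <cassert>

// Invented sketch of a timed PDE solver's main loop.
struct SketchTimedSolver
{
    int nSetupCalls = 0, nUpdateCalls = 0;

    void setupLinearSystem() { ++nSetupCalls; }   // LHS matrix, mass/stiffness
    void updateLinearSystem() { ++nUpdateCalls; } // time-dependent terms, RHS
    void solveOneStep() { /* solve Ax = b with the current system */ }

    void solve(int nSteps)
    {
        setupLinearSystem();      // once, before the time loop
        for (int n = 0; n < nSteps; ++n)
        {
            updateLinearSystem(); // every iteration
            solveOneStep();
        }
    }
};
```

Splitting the work this way keeps the per-iteration cost to what genuinely changes in time, which matters when the time loop runs for thousands of steps.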

PDE Assemblers (e.g. BidomainAssembler, HeatAssembler, FEMassMatrixAssembler)

These classes gather the methods that fill the linear system of the discretization (left-hand side matrix and right-hand side vector).

Finite Elements Assemblers

The derived finite element assembler classes implement the computeBlocksOnElementAtQuadPoint virtual method, which defines the terms coming from the variational formulation of the PDE. It evaluates the coefficients to add to the linear system for a single finite element, for a single evaluation at a quadrature point.

This method must be defined and must contain the following elements:

1- Build a list of valid unknowns on the element. This can be done by calling extractValidUnknownsFrom(m_problem->getSpatialUnknowns(), elem).
2- Call setupBlocksOnElementForUnknowns(unknowns), where the argument is the list built at the previous step. This extracts the right degrees of freedom and allocates the submatrices that will be inserted into the global linear system.
3- Fill the submatrices m_bMat[iU1][iU2] and, potentially, the subvectors m_bVec[iU1], where iU is an index in the list built at step 1. The matrices phi and gradPhi are available for use in this function.

For a simple example, have a look at FEHeatAssembler::computeBlocksOnElementAtQuadPoint.
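The core of step 3 is standard quadrature-point assembly. Here is a self-contained sketch for the simplest case, the mass-matrix block of a P1 segment of length h, accumulated over a 2-point Gauss rule (the function name and layout are invented; in CEPS the contribution would go into m_bMat inside computeBlocksOnElementAtQuadPoint):

```cpp
#include <array>
#include <cassert>
#include <cmath>

using Mat2 = std::array<std::array<double, 2>, 2>;

// Local mass matrix of a 1D P1 element of length h: at each quadrature
// point q the block receives w * |J| * phi_i(q) * phi_j(q), with |J| = h.
Mat2 localMassMatrixP1(double h)
{
    // 2-point Gauss rule on [0,1], exact for the quadratic integrand
    const double q[2] = {0.5 - 0.5 / std::sqrt(3.0),
                         0.5 + 0.5 / std::sqrt(3.0)};
    const double w[2] = {0.5, 0.5};
    Mat2 m{}; // zero-initialized accumulator
    for (int k = 0; k < 2; ++k)
    {
        double phi[2] = {1.0 - q[k], q[k]}; // P1 basis on the reference segment
        for (int i = 0; i < 2; ++i)
            for (int j = 0; j < 2; ++j)
                m[i][j] += w[k] * h * phi[i] * phi[j];
    }
    return m; // exact result: (h/6) * [[2, 1], [1, 2]]
}
```

The same accumulate-per-quadrature-point structure holds for stiffness terms, with gradPhi in place of phi and the diffusion tensor as a coefficient.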

Ionic Model Solver

IonicSolver is a class that gathers an instance of an ionic model and an instance of an ODE solver. The class makes the link between the generic use of the ODE solver and ionic models that have specific features. In particular, even if the ionic model can be split into linear and non-linear evolution functions, the two need to be computed at the same time to save precious computation time: ionic models are one of the heaviest parts of cardiac computations, if not the heaviest. The IonicSolver takes care of that.

The solver then only takes the transmembrane voltage at a time $t$ and returns the term $C_\mathrm{m}\partial_t v$ corresponding to the ionic and stimulation currents.

Ionic Models

Ionic models are derived from the AbstractIonicModel class. They contain the definitions of the ionic variables and of their evolution in time, depending on the transmembrane voltage that is computed in the PDE system. They are coupled to the PDE system by returning $C_\mathrm{m}\partial_t v$. This current accounts for the ionic currents themselves, but also for the stimulation current. The two currents are not split, as the stimulation current can appear in several places of the model (as in the Ten Tusscher model). The total current is returned in two parts, linear and non-linear (see next section).

Ionic model classes work in conjunction with the ODE solver classes through an instance of an IonicSolver.

Typically, models are imported from the CellML repository and follow the structure of the downloaded models. State variables are split into gate variables and misc variables, each set being assigned to a different ODE solver. It is useful to define named aliases for the indices, as there can be a large number of variables and parameters. For example, in the Ten Tusscher model:

// aliases for gates variables
static constexpr const int_t m_xR1 = 0;
static constexpr const int_t m_xR2 = 1;
static constexpr const int_t m_xS = 2;
static constexpr const int_t m_m = 3;
static constexpr const int_t m_h = 4;
(...)

ODE Solver

ODE solvers are designed to solve the equations

\[ y'(t) = f(y,t), \]

which can be written as the sum of a linear and non-linear part:

\[ y'(t) = a(y,t)y + b(y,t). \]

They are designed to work either in conjunction with an ionic model, or with pointers to generic functions. Ionic models fall into the second form of the equation, with $b\equiv 0$ for gate variables and $a\equiv 0$ for misc variables.
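These two special cases admit simple one-step updates, sketched below. The function names are invented and the schemes are only examples of what an ODE solver could do with each part (the gate case uses the exact exponential solution of $y' = ay$; the misc case uses one explicit Euler step with a constant $b$ over the step):

```cpp
#include <cassert>
#include <cmath>

// Gate variable: y' = a*y (b == 0), advanced exactly over one step dt.
double stepGateExact(double y, double a, double dt)
{
    return y * std::exp(a * dt);
}

// Misc variable: y' = b (a == 0), one explicit Euler step.
double stepMiscEuler(double y, double b, double dt)
{
    return y + dt * b;
}
```

Exploiting the split this way is what allows unconditionally stable updates for the gate variables while keeping a cheap explicit scheme for the rest.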

Solvers are generally based on multiple evaluations of $f$, either at previous time steps (multistep methods), or at several times within the current time step (methods with substeps). This means that those states must be stored in the class, on top of the arrays for $y$ at times $t^n$ and $t^{n+1}$.