CEPS

CEPS follows a fairly standard design for solving PDEs. The main classes are:
The Problem class regroups all the elements needed to perform a simulation. It is initialized from a text input file and contains the structures listed above. The main method of this class is run(), which simply performs the simulation with the options defined in the file.
The global structure of a CardiacProblem and its components is as follows:
All PDE problems are derived from the AbstractPdeProblem class. This abstract class contains elements and basic variants of methods that are common to all PDE problems (see PdeParameters and below).
Each PDE problem contains at least one instance of a class that regroups parameters: PdeParameters. Then, additional classes for parameters can be added/derived to handle parameters dedicated to a specific "task". For example, the class BilayerAtriaProblem contains an instance of BilayerParameters, an instance of CardiacParameters (as it is derived from CardiacProblem), and an instance of PdeParameters (as it is derived from AbstractPdeProblem).
Each new parameter class should have a readFromInputFile method.
The Geometry class encompasses the definition of all geometrical nodes (Node) and elements (all derived from AbstractElement), grouped in a Mesh for each dimension. It also contains the partition of the computational domain for parallel computations.
This class is in charge of the main loop solving the PDE, called through its main public method solve(). The class is linked to all the other components (geometry, finite elements, assemblers, myocardium, output writer...) and contains the arrays storing the values of the solution, as well as necessary data such as ionic currents for cardiac solvers. These arrays may contain data at several time steps, for multistep methods (e.g. Crank–Nicolson) or methods with substeps (Runge–Kutta).
The private method setupLinearSystem() is called once (or as few times as possible for multistep methods) and builds the left-hand-side matrix, as well as the parts of the solving process that do not change during the simulation (e.g. mass and/or stiffness matrices). The private method updateLinearSystem() is called at each iteration and regroups the changes in the matrix and in the right-hand side of the PDE.
Myocardium holds information on the state of the simulated tissue: its intrinsic properties, such as the conductivities, but also the state of the ionic gates at the current time of the simulation. Therefore, Myocardium contains ionic models.
These classes regroup the methods that fill the discretization linear system (left-hand-side matrix and right-hand-side vector). In particular, the derived finite element assembler classes have the computeMatrixTerm and computeVectorTerm virtual methods, in which the terms coming from the variational formulation of the PDE are defined.
Some generic assemblers can be found in the src/pde/assemblers directory (FEPkMassMatrixAssembler, FEPkStiffnessMatrixAssembler), as well as assemblers for simple equations (LaplacianAssembler, HeatAssembler). Assemblers for cardiac applications are located in the src/cardiac/assemblers directory.
There are some typedefs for the Assembler class to ease the reading of the code:

- DofMatrix is the matrix of the contributions of a single element to the large problem matrix. Even though it is a dynamic matrix, it is intended to be of size N×N, where N is the number of nodal points of an element times the number of unknowns per node.
- DofVector is the analogue of DofMatrix for the vector contribution of the element.
- UnknownVector is a vector of scalars intended to store the values of the solution at the previous step on nodal points.

The class FiniteElements regroups data on all non-boundary and boundary elements of each dimension (0D, 1D, 2D, 3D). In particular, it contains the data of the nodes that are added for elements whose nodal points do not coincide with the geometrical nodes (e.g. P2 elements, which have mid-edge points). Storing these nodes in this class avoids node duplication between elements that share nodes.
The class also contains instances of GaussianQuadratures, one for each dimension, which are used during the assembling of the linear system.
It also contains ReferenceFE instances (reference finite elements), one for each dimension and type of finite element. Those reference elements contain data on how to place nodes in the element and the definition of the basis functions.
Finally, the finite elements themselves, which are derived from the abstract class FEBase, are basically collections of pointers to the geometrical elements they are built from (with data on nodes and Jacobians) and to their reference finite element.
The dictionary is a class that stores scalar functions of time, space, or both. It manages instances of the ScalarFieldFunctor class, which means that the functions are user defined (see the advanced syntax page).
Coefficients that are defined on the whole computational domain can be defined through this class. Coefficients can be scalar, vectorial, or tensorial. They can be constant, piecewise constant (different with respect to the region attribute), read from a VTK file, or linked to any dictionary entry (for scalar coefficients).
These classes manage data about regular boundary conditions (Dirichlet, Neumann, Robin) and stimulation fields. Each BoundaryCondition and Stimulation is derived from a ScalarField, which is an abstract way of handling functions of time and space. The BoundaryCondition and Stimulation instances store information on when and where they are applied. The managers create these instances and can call their actualize() method before applying the values into a DistributedVector.
The Stimulation and BoundaryCondition classes use different initializers than the default Dictionary entries, for a more user-friendly way of passing parameters.
IonicModelSolver is a class that regroups an instance of an ionic model and two instances of an ODE solver, one for the gate variables of the ionic model and one for the other variables. Since the two solvers, which may be of different kinds, are coupled, this class provides a simple interface to the rest of the code. Here is an example of usage, provided you previously instantiated myIonicModel:
Ionic models are derived from the AbstractIonicModel class. They contain the definitions of the ionic variables and how they evolve in time, depending on the transmembrane voltage that is computed in the PDE system. They are coupled to the PDE system through the total ionic current, which is computed at each ODE time step.
Ionic model classes work in conjunction with the ODE Solver classes through an instance of an IonicModelSolver.
Typically, models are imported from the CellML repository and follow the structure of the downloaded models. State variables are split into gate variables and misc variables, each set being assigned to a different ODE solver. It is useful to define macros for the indices, as there can be a large number of variables and parameters. For example, in the Ten Tusscher model:
Ionic models can be set on specific parts of the mesh, so they store their own data about the state of the ODE system. In particular, their arrays contain scalar values at each degree of freedom of the region the model is attributed to.
The following functions must be defined in every derived ionic model class:

- setInitialConditions(): initializes the gate and misc variables, as well as the action potential stored internally.
- computeGateRates, computeMiscRates: the evolution functions.
- getNbStateVariables, getNbMiscVariables, getNbParameters: return basic information needed by other classes.

ODE solvers are structured to solve equations of the form

    y'(t) = f(t, y(t)),

which can sometimes be written as the sum of a linear and a nonlinear part:

    f(t, y) = a(t, y) y + b(t, y).

They are designed to work either in conjunction with an ionic model, or using pointers to generic functions. Ionic models fall into the second writing of the equations, with a nonzero linear part a for gate variables and a = 0 (a generic f) for misc variables.
Here is an example of the use of an ODE solver with generic functions:
Solvers are generally based on multiple evaluations of f, either at previous time steps (multistep methods), or at multiple times within the current time step (methods with substeps). This means that those states must be stored in the class structure, on top of the arrays for the solution at the previous and current times. If the array for the solution already exists (as for the ionic model with m_gates or m_misc), a pointer to this array is sufficient.
The prototype (named rateFunction) for the functions f, a and b is the following: the first real argument is the time, then come pointers to the preallocated state and rate arrays, and finally an integer which is the size of these arrays (which must be the same!).
The following methods must be defined in each derived ODE solver:

- takeOneStep: how to advance the solution from one time step to the next.
- takeOneSubStep: leave this empty if the method has no substeps.

When running in parallel, the computational domain is divided between the processors, such that each CPU deals with the data on its subdomain. An overlapping region of one or several cells (called halo in CEPS) is added to each subdomain for links and communications between CPUs.
The classes DistributedVector, DistributedMatrix and DistributedHaloVector manage the partition of data arrays. Instances of these classes can be used without CPU-specific operations. DistributedHaloVector also contains data from the halo regions. Basic usage of distributed vectors and matrices can be found in the src/linearAlgebra/examples directory.
Additionally, within an instance of AbstractPdeProblem (or FiniteElements alone), one can automatically generate distributed arrays by calling getDistributedVector(), getDistributedHaloVector() or getDistributedMatrix(). These methods invoke a DistributedFactory, located in the FiniteElements class, which creates arrays following the partition of the elements and taking into account the number of unknowns of the problem.
Several utilities can be found in the src/common directory.
Multiple indexations of the same elements are needed, due to the complexity of the operations and the parallelization of the code. Here are the guidelines that must be followed in the implementation:

- Indices must use the indx_t type, not int_t or uint_t. This is required because of incompatibilities between PETSc types and the (u)int64_t types. Be careful, as VTK uses its own vtkIdType, which may not be compatible.
- meshId designates the index of a point or element within a single mesh. The index is read from the mesh file.
- geomId is the index of a point or element once all meshes have been put together. If there is only one mesh in the simulation, meshId and geomId are equivalent.
- globalId is initially a permutation of geomId which takes into account the partitioning of the computational domain into subdomains. All globalIds of a subdomain form a contiguous subset of all the indices of the domain. Both the Geometry and FiniteElements classes have such a permutation vector. Note that additional Nodes can be created by the FiniteElements class for elements other than LegendreP1; globalId takes these additional nodes into account (so there is not necessarily a reverse permutation).
- localId designates the index of a point or element within the subset of globalIds. It is basically globalId minus the number of points/elements on all "previous" CPUs in the CPU grid.
- rowId is the index of an unknown in the linear system. It is deduced from globalId, the dimensionality of the problem (called pbDim) and the component index of the unknown. If the PDE has a scalar solution, i.e. pbDim is 1, then rowId and globalId are equivalent.
- Conversion methods between indices are named xxxIdToYyyId.

Here is an example of the indexing of a problem with two unknowns per node, P1 finite elements with unknowns on geometrical nodes only, existing on two meshes, and computed on two CPUs. Node indices are in red, element indices in blue, unknown indices in green.