Structure of CEPS

CEPS follows a fairly standard structure for solving PDEs. The main classes are:

Problem class (e.g. CardiacProblem, BilayerAtriaProblem)

The Problem class gathers all the elements needed to perform a simulation. It is initialized from a text input file and contains the structures already listed above. The main method of this class is run(), which simply performs the simulation with the options defined in the file.

The global structure of a CardiacProblem and its components is as follows:

All PDE problems are derived from the AbstractPdeProblem class. This abstract class contains elements and basic variants of methods that are common to all PDE problems (see PdeParameters and below).


Each PDE problem contains at least one instance of a class that regroups parameters: PdeParameters. Then, additional classes for parameters can be added/derived to handle parameters dedicated to a specific "task". For example, the class BilayerAtriaProblem contains an instance of BilayerParameters, an instance of CardiacParameters (as it is derived from CardiacProblem), and an instance of PdeParameters (as it is derived from AbstractPdeProblem).

Each new parameter class should have a readFromInputFile method.
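To illustrate the pattern (this is a sketch only; the class name, fields, and parser below are hypothetical, not the CEPS API), a parameter class with a readFromInputFile-style method could look like:

```cpp
#include <cassert>
#include <sstream>
#include <string>

// Hypothetical parameter class: reads "key = value" lines from a stream.
struct DemoParameters
{
  double dt   = 0.0;
  double tMax = 0.0;

  void readFromInputFile(std::istream& in)
  {
    std::string key, eq;
    double value;
    while (in >> key >> eq >> value)
    {
      if (key == "dt")   dt   = value;  // time step
      if (key == "tMax") tMax = value;  // final time
    }
  }
};
```

Each derived parameter class then only has to parse the options it owns, while the base classes parse theirs.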


Geometry

This class encompasses the definition of all geometrical nodes (Node) and elements (all derived from AbstractElement), grouped in a Mesh for each dimension. It also contains the partition of the computational domain for parallel computations.

PDE Solvers (e.g. MonodomainSolver, LaplacianSolver)

This class is in charge of performing the main loop solving the PDE, called through the main public method solve(). The class is linked to all other elements (geometry, finite elements, assemblers, myocardium, output writer...) and contains the arrays storing the values of the solution, as well as necessary data such as ionic currents for cardiac solvers. These arrays may contain data at several time steps for multistep methods (e.g. Crank-Nicolson) or methods with substeps (Runge-Kutta).

The private method setupLinearSystem() is called once (or as few times as possible for multistep methods); it builds the left hand side matrix as well as the parts of the solving process that do not change during the simulation (e.g. it computes the mass and/or stiffness matrices). The private method updateLinearSystem() is called at each iteration and regroups the changes in the matrix and in the right hand side of the PDE.
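The setup/update split can be sketched on a deliberately tiny stand-in problem (not the CEPS classes; all names here are ours): the scalar decay equation u' = -k u discretized with implicit Euler, (1/dt + k) u^{n+1} = u^n / dt. The left hand side is built once; only the right hand side is refreshed each step.

```cpp
#include <cassert>
#include <cmath>

// Toy illustration of the setupLinearSystem()/updateLinearSystem() pattern.
class ToySolver
{
public:
  ToySolver(double k, double dt, double u0) : m_k(k), m_dt(dt), m_u(u0)
  { setupLinearSystem(); }

  void solve(int nSteps)
  {
    for (int i = 0; i < nSteps; ++i)
    {
      updateLinearSystem();   // refresh the right hand side at each step
      m_u = m_rhs / m_lhs;    // "linear solve" (scalar here)
    }
  }

  double solution() const { return m_u; }

private:
  // Called once: the left hand side does not change during the simulation.
  void setupLinearSystem() { m_lhs = 1.0 / m_dt + m_k; }

  // Called every iteration: only the right hand side depends on u^n.
  void updateLinearSystem() { m_rhs = m_u / m_dt; }

  double m_k, m_dt, m_u, m_lhs = 0.0, m_rhs = 0.0;
};
```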


Myocardium

Myocardium holds information on the state of the simulated tissue: its intrinsic properties, such as the conductivities, but also the state of the ionic gates at the current time of the simulation. Therefore, Myocardium contains ionic models.

PDE Assemblers (e.g. BidomainAssembler, HeatAssembler, FEPkMassMatrixAssembler)

These classes regroup the methods that fill the discretization linear system (left hand side matrix and right hand side vector). In particular, the derived classes of finite element assemblers have the computeMatrixTerm and computeVectorTerm virtual methods, in which the terms coming from the variational formulation of the PDE are defined.
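As an illustration of the kind of term such a method returns (a sketch with our own names, not the CEPS signatures), here is the local mass matrix of a 1D P1 element of length h, coming from the variational term ∫_K φ_i φ_j dx = (h/6) [[2,1],[1,2]]:

```cpp
#include <array>
#include <cassert>
#include <cmath>

using LocalMatrix = std::array<std::array<double, 2>, 2>;

// Local mass matrix of a 1D P1 element of length h (illustrative stand-in
// for a computeMatrixTerm-style method).
LocalMatrix p1MassMatrixTerm(double h)
{
  return {{ { h / 3.0, h / 6.0 },
            { h / 6.0, h / 3.0 } }};
}
```

The row sums equal h/2, the integral of each basis function over the element, which is a quick sanity check on such terms.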

Some generic assemblers can be found in the src/pde/assemblers directory (FEPkMassMatrixAssembler, FEPkStiffnessMatrixAssembler), as well as assemblers for simple equations (LaplacianAssembler, HeatAssembler). Assemblers for cardiac applications are located in the src/cardiac/assemblers directory.

There are some typedefs for the Assembler class to ease the reading of the code:

  • DofMatrix is the matrix of the contributions of a single element to the global problem matrix. Even though it is a dynamic matrix, it is intended to be of size N*N, where N is the number of nodal points of an element times the number of unknowns per node.
  • DofVector is similar to DofMatrix for the vector contribution of the element.
  • UnknownVector is a vector of scalars intended to store the values of the solution at the previous step on the nodal points.
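A DofMatrix-style local contribution ends up scattered into the global matrix through the element's node indices. The sketch below illustrates this with plain nested vectors; the interleaved dof layout (node * nUnknowns + component) is an assumption for this example, not necessarily the CEPS convention:

```cpp
#include <cassert>
#include <vector>

// Scatter a local element matrix (size N*N, N = nodes per element times
// unknowns per node) into a dense global matrix. Illustrative only.
void scatterLocalMatrix(const std::vector<std::vector<double>>& local,
                        const std::vector<int>& elementNodes,
                        int nUnknowns,
                        std::vector<std::vector<double>>& global)
{
  int n = static_cast<int>(elementNodes.size()) * nUnknowns;
  for (int i = 0; i < n; ++i)
    for (int j = 0; j < n; ++j)
    {
      // Assumed layout: global dof = node id * nUnknowns + component.
      int row = elementNodes[i / nUnknowns] * nUnknowns + i % nUnknowns;
      int col = elementNodes[j / nUnknowns] * nUnknowns + j % nUnknowns;
      global[row][col] += local[i][j];
    }
}
```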

Finite Elements

The class FiniteElements regroups data on all non-boundary and boundary elements of each dimension (0D,1D,2D,3D). In particular, it contains the data of the nodes that are added for elements that have nodal points that do not coincide with the geometrical nodes (e.g. P2 elements that have middle points). Storing these nodes in this class avoids node duplication between elements that share nodes.

The class also contains instances of GaussianQuadratures, one for each dimension, which are used during the assembling of the linear system.
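To illustrate what such a quadrature provides (this is not the CEPS API, just the underlying rule), here is the 2-point Gauss-Legendre rule on [-1,1], which is exact for polynomials up to degree 3:

```cpp
#include <cassert>
#include <cmath>
#include <functional>

// 2-point Gauss-Legendre quadrature on [-1,1]: points +/- 1/sqrt(3),
// both with weight 1.
double gauss2(const std::function<double(double)>& f)
{
  const double x = 1.0 / std::sqrt(3.0);
  return f(-x) + f(x);
}
```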

It also contains ReferenceFE instances (reference finite elements), one for each dimension and type of finite element. These reference elements contain data on how to place nodes in the element and the definition of the basis functions.

Finally, the finite elements themselves, which are derived from the abstract class FEBase, are basically collections of pointers to the geometrical elements they are built from (with data on nodes and Jacobians) and to their reference finite element.


Dictionary

The Dictionary is a class that stores scalar functions of time, space, or both. It manages instances of the ScalarFieldFunctor class, which means that the functions are user-defined (see the advanced syntax page).

Physical Coefficients

Coefficients that are defined on the whole computational domain can be defined through this class. Coefficients can be scalar, vectorial ($\mathbb{R}^3$), or tensorial. They can be constant, piecewise constant (varying with the region attribute), read from a VTK file, or linked to any dictionary entry (for scalar coefficients).
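A minimal sketch of the piecewise-constant case (our own names and types, not the CEPS API): a scalar coefficient looked up by region attribute, with a constant fallback value.

```cpp
#include <cassert>
#include <functional>
#include <map>

// Build a scalar coefficient that is piecewise constant with respect to a
// region attribute; regions not listed get the default (constant) value.
std::function<double(int)> makePiecewiseCoefficient(std::map<int, double> perRegion,
                                                    double defaultValue)
{
  return [perRegion, defaultValue](int region)
  {
    auto it = perRegion.find(region);
    return it != perRegion.end() ? it->second : defaultValue;
  };
}
```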

Boundary Conditions and Stimulation Manager (BoundaryConditionManager, StimulationManager)

These classes manage data about regular boundary conditions (Dirichlet, Neumann, Robin) and Stimulation fields. Each BoundaryCondition and Stimulation is derived from a ScalarField, which is an abstract way of handling functions of time and space. The BoundaryCondition and Stimulation instances store information on when and where they are applied. The managers create these instances, and can call their actualize() method, before applying the values into a DistributedVector.

The Stimulation and BoundaryCondition classes use different initializers than the default Dictionary entries for a more user-friendly way of passing parameters.

Ionic Model Solver

IonicModelSolver is a class that regroups an instance of an ionic model and two instances of an ODE solver: one for the gate variables of the ionic model, one for the other variables. Since the two solvers, which may be of different kinds, are coupled, this class provides a simple interface to the rest of the code. Here is an example of usage, provided you previously instantiated myIonicModel:

mySolver = new IonicModelSolver(paramStringsFromInputFile, myIonicModel);
// For the computation of the mapping, cf e.g. Myocardium.hpp
// do PDE to compute voltage
// ...

Ionic Models

Ionic models are derived from the AbstractIonicModel class. They contain the definitions of the ionic variables and of how they evolve in time, depending on the transmembrane voltage that is computed in the PDE system. They are coupled to the PDE system through the total ionic current, which is computed at each ODE time step.

Ionic model classes work in conjunction with the ODE Solver classes through an instance of an IonicModelSolver.

Typically, models are imported from the CellML repository and follow the structure of the downloaded models. State variables are split into gate variables and misc variables, each set being assigned to a different ODE solver. It is useful to define macros for the indices, as there can be a large number of variables and parameters. For example, in the Ten Tusscher model:

// aliases for gate variables (int_t is a 32-bit integer, see CepsTypes.hpp)
static constexpr const int_t m_xR1 = 0;
static constexpr const int_t m_xR2 = 1;
static constexpr const int_t m_xS  = 2;
static constexpr const int_t m_m   = 3;
static constexpr const int_t m_h   = 4;

Ionic models can be set on specific parts of the mesh, so they store their own data about the state of the ODE system. In particular, the arrays

// real_t is the CEPS floating-point type, configured at CMake time (CepsTypes.hpp)
real_t* m_gates;        // Values of gate variables
real_t* m_misc;         // Values of misc state variables
real_t* m_v;            // Transmembrane voltage
real_t* m_tau;          // Dynamics of gate vars
real_t* m_wInf;         // Limit values of gate vars
real_t* m_ionicCurrent; // Total ionic current

contain scalar values at each degree of freedom of the region to which the model was attributed.

The following functions must be defined in every derived ionic model class:

  • Constructors: constant parameters of the model can be set here.
  • setInitialConditions(): initializes the gate and misc variables, as well as the action potential stored internally.
  • computeGateRates, computeMiscRates: evolution functions $y'=f(t,y)$.
  • getNbStateVariables, getNbMiscVariables, getNbParameters: return basic information needed by other classes.
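To make the evolution functions concrete, here is a computeGateRates-style sketch assuming the classical gating form y' = (yInf - y) / tau suggested by the m_wInf and m_tau arrays above. The signature and array names are ours, not the CEPS ones:

```cpp
#include <cassert>
#include <cmath>

// Gate rates in the classical form dy/dt = (yInf - y) / tau (illustrative).
void computeGateRates(const double* y, const double* yInf, const double* tau,
                      double* dy, int n)
{
  for (int i = 0; i < n; ++i)
    dy[i] = (yInf[i] - y[i]) / tau[i];
}
```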

ODE Solver

ODE solvers are designed to solve equations of the form

\[ y'(t) = f(y,t), \]

which can sometimes be written as the sum of a linear and a non-linear part:

\[ y'(t) = a(y,t)y + b(t,y). \]

They are designed to work either in conjunction with an ionic model, or using pointers to generic functions. Ionic models fall into the second form of the equation, with $b\equiv 0$ for gate variables and $a\equiv 0$ for misc variables.
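For a single scalar variable, the two special cases above can be sketched as follows (an illustration, not the CEPS solvers): an exact exponential step for the purely linear part y' = a y (gate-like variables) and a forward Euler step for the purely nonlinear part y' = b(t, y) (misc-like variables).

```cpp
#include <cassert>
#include <cmath>

// Exact step for y' = a y over dt (linear part only, b == 0).
double exponentialGateStep(double y, double a, double dt)
{
  return y * std::exp(a * dt);
}

// Forward Euler step for y' = b(t, y) over dt (nonlinear part only, a == 0),
// with b already evaluated at (t, y).
double eulerMiscStep(double y, double b, double dt)
{
  return y + dt * b;
}
```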

Here is an example of the use of an ODE solver with generic functions:

void myNonLinearPartFunction(real_t t, real_t* y, real_t* dty, uint_t n)
{
  // Do something to compute dty ...
}

void myLinearPartFunction(real_t t, real_t* y, real_t* dty, uint_t n)
{
  // Do some other thing to compute dty ...
}

mySolver = new DerivatedFromAbstractOdeSolver(...);
// or
for (int_t i = 0; i < nSteps; i++, t += dt)
{
  // advance the solution by one step
}
real_t* sol = mySolver->getSolution();

Solvers are generally based on multiple evaluations of $f$, either at previous time steps (multistep methods), or at multiple times within the current time step (methods with substeps). This means that those states must be stored in the class structure, on top of the arrays for $y$ at times $t^n$ and $t^{n+1}$. If the array for $y^n$ already exists (as for the ionic model with m_gates or m_misc), a pointer to this array is sufficient.

The prototype (named rateFunction) for the functions $f$, $a$ and $b$ is the following:

// Prototype of evolution functions to be given to ODE solvers (AbstractOdeSolver.hpp)
typedef void (*rateFunction)(real_t, real_t*, real_t*, uint_t);

The first real is the time, followed by pointers to the pre-allocated arrays $y^n$ and $\mathrm{d}_t y^{n+1}$, and finally an integer which is the size of these arrays (which must be the same!).
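The prototype can be exercised with a self-contained forward Euler step (a sketch with our own stand-in types, since real_t and uint_t are configured in CepsTypes.hpp):

```cpp
#include <cassert>
#include <cmath>
#include <cstdint>

using real_t = double;   // stand-in; CEPS configures real_t at CMake time
using uint_t = uint32_t; // stand-in for the CEPS unsigned integer type

typedef void (*rateFunction)(real_t, real_t*, real_t*, uint_t);

// One forward Euler step driven by a rateFunction-style pointer:
// y <- y + dt * f(t, y).
void eulerStep(rateFunction f, real_t t, real_t dt, real_t* y, real_t* dy, uint_t n)
{
  f(t, y, dy, n); // dy <- f(t, y)
  for (uint_t i = 0; i < n; ++i)
    y[i] += dt * dy[i];
}

// Example rate: y' = -y, component-wise.
void decayRate(real_t /*t*/, real_t* y, real_t* dy, uint_t n)
{
  for (uint_t i = 0; i < n; ++i)
    dy[i] = -y[i];
}
```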

The following methods must be defined in each derived ODE solver:

  • takeOneStep: how we get from $y^n$ to $y^{n+1}$.
  • takeOneSubStep: leave this empty if the method has no sub-steps.

Distributed Vectors and Matrices

When running in parallel, the computational domain is divided between the processors, such that each CPU deals with data on its subdomain. An overlapping region of one or several cells (called a halo in CEPS) is added to each subdomain for linking and communication between CPUs.

The classes DistributedVector, DistributedMatrix and DistributedHaloVector manage the partition of data arrays. Instances of these classes can be used without CPU-specific operations. DistributedHaloVector also contains data from the halo regions. Basic usage of distributed vectors and matrices can be found in the src/linearAlgebra/examples directory.

Additionally, within an instance of AbstractPdeProblem (or FiniteElements alone), one can automatically generate distributed arrays by calling getDistributedVector(), getDistributedHaloVector() or getDistributedMatrix(). These methods invoke a DistributedFactory, located in the FiniteElements class, which creates arrays following the partition of the elements and taking into account the number of unknowns of the problem.


Utilities

Several utilities can be found in the src/common directory:

  • CepsProfiler : a tool that contains timers and memory profilers
  • CepsTools : macros for getting parallel information (rank, size of the grid, etc.) and for searching in vectors and maps
  • CepsFlags : access to options that are passed in command line (verbosity, profiling, etc)
  • CepsTypes : defines integer and floating point types
  • Precision : manages floating points comparisons
  • CepsIo : (poorly named) string utilities, operations on filenames
  • CepsMath : generic mathematical functions used throughout CEPS should be put here
  • CepsNumeric : small matrices operations, other vector operations

Indexation of points, elements, unknowns, etc.

Multiple indexations of the same elements are needed due to the complexity of the operations and the parallelization of the code. Here are the guidelines that must be followed in the implementation:

  • The type of the indices is indx_t, not int_t or uint_t. This is required because of incompatibilities between PETSc types and the (u)int64_t types. Be careful, as VTK uses its own vtkIdType which may not be compatible.
  • meshId designates the index of a point or element within a single mesh. The index is read from the mesh file.
  • geomId is the index of a point or element once all meshes have been put together. If there is only one mesh in the simulation, meshId and geomId are equivalent.
  • globalId is initially a permutation of geomId that takes into account the partitioning of the computational domain into subdomains. All globalIds of a subdomain form a contiguous subset of all the indices of the domain. Both the Geometry and FiniteElements classes have such a permutation vector. Note that additional Nodes can be created by the FiniteElements class for elements other than Lagrange-P1. globalId takes these additional nodes into account (so there is not necessarily a reverse permutation).
  • localId designates the index of a point or element within the subset of globalIds. It is basically globalId minus the number of points/elements on all "previous" CPUs in the CPU-grid.
  • rowId is the index of an unknown in the linear system. It is deduced from globalId, the dimensionality of the problem (called pbDim), and the component index of the unknown. If the PDE has a scalar solution, i.e. pbDim is 1, then rowId and globalId are equivalent.
  • All permutation vectors or functions between those indices shall be explicitly named xxxIdToYyyId.
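The rowId rule can be sketched as follows, assuming unknowns are interleaved per node (the actual CEPS layout may differ; indx_t is replaced by a stand-in type here):

```cpp
#include <cassert>

using indx_t = long long; // stand-in for the CEPS index type

// Map (globalId, component) to rowId for a problem with pbDim unknowns per
// node, assuming an interleaved layout. For pbDim == 1, rowId == globalId.
indx_t globalIdToRowId(indx_t globalId, indx_t pbDim, indx_t component)
{
  return globalId * pbDim + component;
}
```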

Here is an example of the indexing of a problem with two unknowns per node (say $u$ and $v$), P1 finite elements with unknowns on geometrical nodes only, existing on two meshes, and computed on two CPUs. Node indices are in red, element indices in blue, unknown indices in green.