Spectral element method for 3-D controlled-source electromagnetic forward modelling using unstructured hexahedral meshes

SUMMARY For forward modelling of realistic 3-D land-based controlled-source electromagnetic (EM) problems, we develop a parallel spectral element approach, blending the flexibility and versatility of the finite element method in using unstructured grids with the accuracy of the spectral method. Complex-shaped structures and topography are accommodated by using unstructured hexahedral meshes, in which the elements can have curved edges and non-planar faces. Our code is the first spectral element algorithm in EM geophysics that uses the total field formulation (here that of the electric field). Combining unstructured grids and a total field formulation provides advantages in dealing with topography, in particular when the transmitter is located on rough surface topography. As a further improvement over existing spectral element methods, our approach allows for arbitrary distributions not only of conductivity, but also of magnetic permeability and dielectric permittivity. The total electric field on the elements is expanded in terms of high-order Lagrangian interpolants, and element-wise integration in the weak form of the boundary value problem is accomplished by Gauss–Legendre–Lobatto quadrature. The resulting complex-valued linear system of equations is solved using the direct solver MUMPS, and, subsequently, the magnetic field is computed at the points of interest by Faraday's law. Five numerical examples comprehensively study the benefits of this algorithm. Comparisons to semi-analytical and finite element results confirm accurate representation of the EM responses and indicate low dependency on mesh discretization for the spectral element method. A convergence study illuminates the relation between high-order polynomial approximation and mesh size and their effects on accuracy and computational cost, revealing that high-order approximation yields accurate modelling results for very coarse meshes but is accompanied by high computational cost.
The presented numerical experiments give evidence that 2nd- and 3rd-degree polynomials in combination with moderately discretized meshes provide better trade-offs in terms of computational resources and accuracy than lowest- and higher-order spectral element methods. To our knowledge, our final example, which includes pronounced surface topography and two geometrically complicated conductive anomalies, represents the first successful attempt at using 2nd-order hexahedral elements supporting curved edges and non-planar faces in controlled-source EM geophysics.


INTRODUCTION
In the last few decades, the increasing importance of electromagnetic (EM) methods [e.g. controlled-source electromagnetics (CSEM) and magnetotellurics (MT)] has been substantially driven by huge improvements in both instrumentation and data processing techniques. Thus, EM methods have been more widely used in geophysical surveys related to hydrocarbon (Constable & Srnka 2007; Streich 2016; Patro 2017), mineral (Oldenburg & Pratt 2007; Farquharson & Craven 2009; Kalscheuer et al. 2018), geothermal (Pellerin et al. 1996; Wannamaker 1997a, b; Streich et al. 2010; Amatyakul et al. 2015), hydrogeological (Pedersen et al. 2005; Asaue et al. 2012) and environmental (Sheard et al. 2005) investigations. Large amounts of high-quality 3-D EM data have been acquired in various complex geological settings, such as surveying areas with rough terrains. Reliable interpretation of these data relies on fast, efficient and robust discretization and solution techniques.
Forward and inverse modelling of EM data has undergone steady progress (cf. review papers by Avdeev 2005; Commer & Newman 2008; Börner 2010; Streich 2016; Miensopust 2017) due to improvements in computational resources, especially in terms of speed and memory capabilities, and developments in numerical methods. The availability of high-performance computing facilities has enabled the processing of increasingly larger data sets (Newman 2014).
In spite of these advances, 3-D inverse modelling still remains a challenging and computationally demanding task. One of the key factors for the high computational costs is the expensive solution of the 3-D EM forward modelling problem for large-scale and geologically complex settings, which has to be solved numerous times throughout the inversion process. In other words, accurate, efficient and robust forward modelling routines are crucial. Most developed 3-D EM forward codes use one of the following numerical approaches to obtain a solution to the forward problem, namely the integral equation (IE; see e.g. Raiche 1974; Weidelt 1975; Zhdanov et al. 2006; Kruglyakov & Kuvshinov 2018; Liu et al. 2018), finite difference (FD; see e.g. Wang & Hohmann 1993; Streich 2009; Cherevatova et al. 2018), finite element (FE; see e.g. Coggon 1971; Börner et al. 2008; Schwarzbach et al. 2011; Ren et al. 2013; Grayver & Bürg 2014) or finite volume (FV; see e.g. Madsen & Ziolkowski 1990; Jahandari & Farquharson 2014) methods. Each method has its inherent advantages, disadvantages and challenges. An overview of these methods can be found in review papers by Avdeev (2005), Börner (2010) and Everett (2012). Recently, in an attempt to combine and exploit the high accuracy of the spectral method and the geometrical flexibility of the finite element method, another solution technique prevalently used in fluid dynamics, seismic wave propagation and computational microwave engineering, the spectral element method (SEM), has been gradually adapted to deal with geo-EM forward modelling problems (e.g. Yin et al. 2017, 2019; Huang et al. 2019).
The original spectral methods are a collection of procedures to solve differential equations using a weighted-residual technique (Boyd 2001; Canuto et al. 2006). In this approach, the solution of the problem is expressed as a sum of specific infinitely differentiable basis functions, which are defined over the whole domain. Typical examples of such functions include trigonometric functions, Chebyshev and Legendre polynomials. Due to the global support of these functions, spectral methods are considered global methods, that is, the computation of a value at any point depends on information from the entire domain, and not just the neighbouring gridpoints (Boyd 2001; Canuto et al. 2006). Partly because of this, spectral methods usually have high accuracy and possess favourable error convergence properties (Gottlieb & Orszag 1977). On the downside, due to the use of global basis functions, spectral methods are not well suited to handle complex geometries, localized features and/or sharp gradients/discontinuities in material parameters (Abarbanel et al. 1986; Piotrowska et al. 2019).
Similar to the spectral method, the finite element procedure uses a weighted-residual technique, which is applied to series expansions of the EM field, each defined only locally on a small subdomain (an element) of the whole domain (Patera 1984). The division into subdomains facilitates accurate representation of complex geometries as well as the incorporation of variable material properties. Thus, thanks to its flexibility to discretize subsurface structures and topography, the finite element method is a suitable modelling tool for 3-D CSEM. On the other hand, to guarantee sufficient accuracy, the method requires substantially refined meshes, which can result in high computational cost. Furthermore, the modelling accuracy is considerably affected by the quality of the mesh. The use of high-order elements, as incorporated by Schwarzbach et al. (2011), Grayver & Kolev (2015) and Rochlitz et al. (2019) for geo-EM modelling, reduces this dependency on fine enough meshes while increasing the accuracy of the solution. Nonetheless, the finite element method is substantially dependent on the subdivision of the computational domain.
As a blend of the globally supported spectral method and the locally supported finite element method, the spectral element method is based on weighted-residual techniques to solve differential equations. As in the finite element method, it may use either regular or deformed elements to discretize the domain into a set of subdomains (elements). Within each element, the solution of the differential equation is written in terms of locally supported orthogonal polynomials of arbitrary order. As a result of the freely selectable order of the polynomial approximation, the modelling accuracy can be improved by simply incrementing the polynomial order (Patera 1984; Rønquist & Patera 1987). Thus, the spectral element method retains the geometrical flexibility of the finite element method and inherits the high accuracy as well as the convergence properties from the spectral method (Gottlieb & Orszag 1977; Patera 1984; Rønquist & Patera 1987).
In the following, we give a short and selective overview of the spectral element method applied to problems in various areas. The spectral element formulation was introduced by Patera (1984) to solve the incompressible Navier–Stokes equations, in which the velocity field within each element was represented by interpolants based on Chebyshev polynomials through Chebyshev–Gauss–Lobatto (CGL) collocation points. In a later publication by Rønquist & Patera (1987), the interpolation functions were expressed using Legendre polynomials, and Gauss–Legendre–Lobatto (GLL) points were chosen as collocation points. First applications to seismic wave propagation appeared in the nineties (Priolo et al. 1994; Seriani et al. 1994; Faccioli et al. 1996) and used Chebyshev polynomials to approximate the unknown fields. A major improvement arrived with the work of Komatitsch & Vilotte (1998), which proposed the combination of Lagrange polynomials as interpolants and a Gaussian quadrature scheme defined on GLL points for the elastic wave equation. This particular choice enabled constructing a diagonal mass matrix in seismological applications, which in turn led to a simpler explicit time-stepping scheme and allows an effective parallel implementation (Schuberth 2003). Since then, the spectral element method has become the preferred method for seismic wave propagation problems (Fichtner & Igel 2008; Peter et al. 2011). The spectral element approach has also found application in computational EMs, more precisely microwave engineering. In this field, Lee et al.
(2006) introduced a spectral element implementation based on Legendre polynomials to solve the vector EM wave equation in the frequency domain. Their proposition used mixed-order curl-conforming vector basis functions to ensure the continuity of the tangential electric field components across the surfaces and edges of the elements. Due to the use of orthogonal Legendre polynomials, the main advantage of this proposition consisted in obtaining a diagonal mass matrix, thus transforming the initial generalized eigenvalue problem to study eigenfrequencies of waveguides and microwave resonators into a regular eigenvalue problem, and concurrently reducing computational requirements (Lee et al. 2006). This approach was then adapted and modified to solve Maxwell's equations in the time domain (Cohen & Duruflé 2007; Lee & Liu 2007; Lee et al. 2009), focusing solely on microwave engineering. Cohen & Duruflé (2007) presented Nédélec edge elements constructed from Lagrange polynomials associated with Gauss and GLL points. Their choice of Nédélec's first family of edge elements (Nédélec 1980) resulted in diagonal and block-diagonal mass matrices for orthogonal and non-orthogonal meshes, respectively. This family is also free of spurious modes when selecting GLL quadrature for time-harmonic problems (Cohen & Duruflé 2007). Other papers on microwave engineering apply the spectral element method to study resonators (Luo et al. 2009; Shi et al. 2016), waveguides (Liu et al. 2015a) and microwave propagation problems (Qian et al. 2015).
The spectral element approaches to solving EM problems described hitherto are mainly applied in the analysis of resonators, waveguides, microwave and other electronic devices (Lee et al. 2006, 2009; Cohen & Duruflé 2007; Lee & Liu 2007; Liu et al. 2015a, b; Qian et al. 2015; Ren et al. 2015). The following key differences differentiate this field of application from the geophysical application area: the aforementioned studies mainly operate on the nano- to centimetre scale, focus on short time durations (usually less than 10⁻⁶ s) and use frequencies in the Mega- to Gigahertz range. In addition, most examples presented in the aforesaid publications focus on the analysis of eigenvalues of waveguide and scattering problems, and thus they investigate the system matrix, but do not include a transmitting current source (Huang et al. 2019). Finally, most of these examples are limited to applications with dielectric, but non-conducting and non-magnetic anomalies. Few cases involving conducting or magnetic anomalies are shown, and those that are only present results concerned with the eigenvalues or eigenfrequencies of the respective applications. Hence, these examples do not investigate challenges associated with field propagation through media with magnetically permeable and electrically conductive anomalies.
On a vastly different scale, namely that of the Earth, a hybrid spectral–finite element method has been used in EM induction modelling in 2-D (Martinec 1997) and 3-D (Martinec 1999; Martinec et al. 2003) for a spherical earth. The core concept of these publications consists in adopting a spectral method using spherical harmonics to describe the azimuthal dependence of field variables, whereas the radial dependence of field quantities is parametrized using piecewise linear functions (i.e. finite elements) in order to obtain a sparse system matrix.
The spectral element method has only recently been adapted to geophysical EM problems of intermediate scales, that means from hundreds of metres up to tens and hundreds of kilometres. Zhou et al. (2016) applied the spectral element approach together with domain decomposition to low-frequency EM simulations. In subsequent publications, Zhou et al. (2017a, b) proposed a spectral element method using Gauss' law for magnetism as a divergence-free condition to overcome the low-frequency breakdown phenomenon in numerical computations. In brief, this issue arises because the vector Helmholtz equation derived from Ampère's and Faraday's laws does not involve conservation of charge. At the direct current limit, these laws become decoupled, resulting in numerical breakdown, which leads to a very ill-conditioned discretized linear system that is difficult to solve (Zhou et al. 2017a, b). The procedure has also been used in frequency- and time-domain 3-D airborne EM forward modelling. The implementation by Yin et al. (2017) used basis functions based on Legendre polynomials, referred to as GLL polynomials, and adopted an integration scheme using GLL points. Huang et al. (2019) developed a time-domain spectral element method using the same approach as Yin et al. (2017) for space discretization and the backward Euler technique for time discretization. Yin et al. (2019) reintroduced basis functions based on Chebyshev polynomials that they coined Gauss–Lobatto–Chebyshev (GLC) polynomials to model problems in frequency-domain airborne EM modelling. The use of these polynomials allowed deriving analytical expressions for the entries of the system matrix. In a subsequent publication, Yin et al. (2021) applied the same GLC polynomials to time-domain airborne EM forward modelling. Another approach introduced by Huang et al.
(2021) extended the SEM for frequency-domain airborne forward modelling by adopting ideas from the element-free Galerkin method, for which physical properties (e.g. conductivity) are allowed to vary within elements and are handled within cells when performing the numerical integration. This means that the mesh can exclusively be based on orthogonal hexahedral elements and implies that mesh refinement for models with complex conductivity distributions is less critical. Their approach represents an alternative to deal with complicated spatial distributions of material properties. For 3-D direct current (DC) forward modelling, Zhu et al. (2020) enabled application of the SEM to unstructured tetrahedral meshes by transforming between tetrahedra and hexahedra. Instead of using 1-D GLL quadrature as commonly used in the spectral element method, they use Proriol–Koornwinder–Dubiner polynomials to construct the arbitrary-order nodal basis functions to represent the electric potential. It is noted that the used polynomials cannot directly form the basis functions and require a Vandermonde matrix to establish the final basis functions. The elemental matrix computation relies on mapping relations between the physical domain, the tetrahedral reference domain and the hexahedral domain. Zhu et al. (2022) developed a forward modelling algorithm for frequency-domain airborne EM applications based on EM potentials and the nodal unstructured tetrahedral spectral element method (Zhu et al. 2020). Owing to the necessity to transform between tetrahedra and hexahedra and to interpolate, the method of Zhu et al. (2022) imposes additional complexity on the implementation. Furthermore, it remains to be investigated to which extent the additional transformation and interpolation may result in accuracy penalties. Also, it is unknown whether this approach could work for edge-based basis functions required for E- and H-field approximations. To reduce the size of the computational problem, Xu et al.
(2022) presented a combined spectral and infinite element method for 3-D CSEM forward modelling. Their hybrid scheme uses the spectral element method to represent the electric field within the region of interest as described in Lee et al. (2006) and uses the scalar infinite element method to replace the traditional Dirichlet boundary condition. The main idea consists in decreasing the size of the computational domain using the infinite element method with scalar basis functions based on high-order GLL polynomials as a boundary condition. This in return reduces the computational resources required to perform the simulation.
With regard to the studies applied to airborne EM induction problems (Yin et al. 2017, 2019, 2021; Huang et al. 2019, 2021), it can be said that they demonstrate the suitability of the spectral element method for geophysical problems. However, all these publications represent a preliminary stage of applying the spectral element method to geophysical applications. Nonetheless, they illustrate the flexibility of the spectral element method in either increasing the polynomial order of the approximation or refining the mesh to obtain an accurate solution. In addition, Yin et al. (2019) also claim that the solution is only weakly influenced by mesh refinement around transmitters and receivers as well as within heterogeneous structures. Two of these publications (Yin et al. 2017, 2019) adopt structured hexahedral meshes, which limits the scope to models with fairly simple geometries, whereas the other one (Huang et al. 2019) extends the implementation to encompass unstructured meshes, thus covering geometrically complicated structures. However, the meshing is not described in detail and the presented test cases do not consider topography.
Further, all these studies use the primary–secondary field decomposition, whose advantages and disadvantages are well established. On the one hand, the decomposition avoids the source singularity and the rapid field variations in proximity to the source. Schwarzbach et al. (2011) point out that the primary–secondary approach potentially yields more accurate results, as only the anomalous (secondary) field is numerically approximated. On the other hand, the field decomposition formulation requires the computation of background (primary) fields for a chosen background model, which can be computationally expensive (da Silva et al. 2012; Li et al. 2017). Further, Alumbaugh et al. (1996) point out that if the source is within a region of anomalous properties (e.g. topography), then the rapid variations of the primary field within the equivalent source term may result in significant modelling inaccuracy (Stalnaker 2004). Similarly, Mitsuhata (2000) and Streich (2009) remark that the background field needs to be chosen such that secondary sources do not arise in the vicinity of primary sources for the primary–secondary approach to be effective. This is a severe limitation, because small-scale anomalies in the vicinity of electric dipole sources are commonly observed in field measurements, leading to a distortion of the EM fields recorded at receiver sites that is referred to as source overprint (Zonge & Hughes 1991; Hördt & Scholl 2004; Kalscheuer et al. 2015).
An alternative to the primary–secondary algorithm is the total field formulation. The main issues of this approach lie in overcoming the source singularity and implementing the Dirac delta function to describe source segments (Mitsuhata 2000; Haber 2014; Li et al. 2017). Both pose numerical difficulties and might require refined meshes in proximity of the source to adequately describe the source behaviour. On the other hand, there is no need to compute secondary fields, and the source implementation along topography is straightforward and thus facilitated in comparison to the primary–secondary algorithm. Moreover, a long grounded cable source that is placed along topography and has electrodes at different altitudes constitutes a case in which the primary–secondary field approach may hit its limits. In this scenario, it is impossible to define a 1-D background model for the computation of the primary field that eliminates the source singularity, meaning that the singularity of the primary field translates directly into a singularity of the equivalent source in the primary–secondary field approach at the location of the source. There is a need for a direct comparison of the two approaches when the source is located along or on topography. However, this is outside the scope of our work.
The aforementioned studies report high levels of accuracy associated with the spectral element method and clearly indicate its potential for geophysical problems. However, implementations based on the total field formulation have not yet been attempted. In this study, we develop a 3-D modelling algorithm based on the spectral element method using the total field formulation for land-based CSEM simulations. This avoids the aforementioned problems associated with the primary–secondary field decomposition, but comes with its own well-known challenges (see above). The total electric field is expressed by mixed-order curl-conforming vector basis functions constructed from 1-D Lagrange polynomials. As further improvements over the previously published approaches, our approach allows for unstructured hexahedral meshes, which accommodate geometrically complicated subsurface structures and topography, as well as for arbitrary isotropic distributions of electric conductivity, dielectric permittivity and magnetic permeability. The latter, that is variable magnetic permeability and variable dielectric permittivity, although not a novel concept (e.g. Kamm et al. 2020; Rulff et al. 2021), constitutes an addition to existing SEM approaches in the context of land-based CSEM modelling.
The paper is structured as follows: in Section 2, we focus on the methodological framework and implementation of the proposed scheme. We then present five scenarios and corresponding results in Sections 3.1 and 3.2, which verify our numerical routine and illustrate its features. Finally, in Section 3.3, we contemplate the convergence behaviour for increasing polynomial order and examine how mesh size and polynomial order affect computational resources in order to evaluate which combinations are best suited to minimize computational cost.

Governing equations
In frequency-domain controlled-source EM problems, the total electric field satisfies the following partial differential equation derived from Maxwell's equations with assumed time dependence e^{iωt}:

∇ × (μ⁻¹ ∇ × E) + (iωσ − ω²ε) E = −iω J_s,  (1)

where E represents the total electric field, ω is the angular frequency, μ is the magnetic permeability, σ is the electric conductivity, ε is the dielectric permittivity and J_s is the source current density. It can readily be seen that this formulation accounts for displacement currents (iωεE). To ensure the uniqueness of the electric field, eq. (1) requires boundary conditions. The selection of proper boundary conditions largely depends on the scenario of land-based controlled-source EM induction problems. In our case, EM fields are excited by source currents J_s from transmitters, which generally are located in the centre of the computational domain on the air–subsurface interface. One choice of boundary conditions adopts a primary field corresponding to the layered background of the 3-D model to approximate the total field on the boundary (Cai et al. 2017). In this case, the required primary field is calculated using a fast Hankel transform. Another choice is to set the EM fields on the boundary to zero, that is E = 0, accounting for attenuation and geometric spreading of the EM field with increasing distance from the source J_s. This second option requires that the boundary is at a large distance from the source, but it is simple to implement and widely used in geo-EM problems (e.g. Badea et al. 2001; Nam et al. 2007; Börner et al. 2008; Ansari & Farquharson 2014; Chung et al. 2014). We thus use the so-called Dirichlet boundary condition, which guarantees the uniqueness of the electric field by setting the tangential components of the electric field to zero at the boundaries, as follows:

n × E = 0 on Γ,  (2)

where Γ is the boundary of the computational domain Ω.
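The displacement current term in eq. (1) matters only when ωε is not negligible against σ. A minimal sketch of this check, with illustrative parameter values that are not taken from the paper:

```python
import math

def displacement_to_conduction_ratio(freq_hz, sigma, eps_r):
    """Ratio omega*eps/sigma of displacement to conduction current magnitudes."""
    eps0 = 8.8541878128e-12   # vacuum permittivity, F/m
    omega = 2.0 * math.pi * freq_hz
    return omega * eps_r * eps0 / sigma

# Illustrative land CSEM values: f = 10 Hz, sigma = 0.01 S/m, eps_r = 10
r = displacement_to_conduction_ratio(10.0, 0.01, 10.0)
print(f"omega*eps/sigma = {r:.2e}")  # far below 1: quasi-static regime
```

At such low frequencies the ratio is tiny, which is why many codes drop the −ω²εE term; retaining it, as eq. (1) does, costs nothing and extends validity to higher frequencies.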

Discretization
We start by defining the space of functions with a well-defined curl,

H(curl, Ω) = {u ∈ [L²(Ω)]³ : ∇ × u ∈ [L²(Ω)]³},  (3)

where Ω is assumed to be bounded and simply connected (Monk et al. 2003). Functions of this space are characterized by the continuity of their tangential components at sufficiently smooth interfaces and by allowing normal components to be discontinuous across element boundaries (Nédélec 1980). To include the boundary conditions, we require the vector functions u in H to satisfy n × u = 0 on the boundary Γ, thus introducing the space H₀(curl, Ω). The weak form of the curl-curl equation for the electric field in eq. (1) is obtained by multiplying eq. (1) by test functions v ∈ H₀(curl, Ω) and integrating over the whole domain Ω, thus demanding the average residual to be zero. Upon integration by parts, this leads to the problem: find E ∈ H₀(curl, Ω) such that (Monk et al. 2003)

∫_Ω (μ⁻¹ ∇ × E) · (∇ × v) dV + ∫_Ω (iωσ − ω²ε) E · v dV = −iω ∫_Ω J_s · v dV  (4)

for all v ∈ H₀(curl, Ω). Note that the surface integrals resulting from integration by parts vanish along interior interfaces because of the continuity of tangential E and along the exterior faces of Ω because of the application of the homogeneous Dirichlet boundary conditions in eq. (2).

The simply connected and bounded domain Ω is decomposed into a set of non-overlapping hexahedral elements. To discretize this function space on the mesh, we need to select curl-conforming elements that represent a natural finite-dimensional subspace approximating H₀(curl, Ω) and consist of vector functions which are element-wise polynomial functions of arbitrary degree. Introducing such a subspace V of H₀(curl, Ω) defined on the mesh, and letting the test functions reside in V, allows to construct a finite set of basis functions Ñ spanning this subspace. Then, the solution E^e on any element e is expanded in terms of the vector basis functions

E^e = Σ_{k=1}^{N_r} E^e_k Ñ^e_k,  (5)

where N_r is the number of basis functions for an element and E^e_k is the coefficient of the unknown electric field at the kth edge of the eth element. Each such coefficient is referred to as a degree of freedom. Choosing the test functions from the same space as the basis functions (i.e. v = Ñ^e_l for any l = 1, …, N_r and e = 1, …, N_h, with N_h denoting the number of hexahedra) yields the following equation:

Σ_{k=1}^{N_r} E^e_k [ ∫_{Ω_e} μ⁻¹ (∇ × Ñ^e_k) · (∇ × Ñ^e_l) dV + ∫_{Ω_e} (iωσ − ω²ε) Ñ^e_k · Ñ^e_l dV ] = −iω ∫_{Ω_e} J^e_s · Ñ^e_l dV,  (6)

where J^e_s is the source current density in element e. This leads to a discrete linear system of equations that can be stated in matrix notation as

(K + M) e = s,  (7)

where K and M are the stiffness and mass matrices, respectively, collecting the contributions from all elements. The vector e represents the coefficients of the unknown electric field, and s comprises the source term entries.
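The assembled system involving the stiffness matrix K, mass matrix M, unknown coefficient vector e and source vector s is sparse and complex-valued. The paper solves it with the direct solver MUMPS; as a minimal structural sketch (toy 1-D stencil matrices of our own making, with SciPy's sparse LU factorization standing in for MUMPS):

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

n = 200
# Toy stand-in for the stiffness matrix: a 1-D Laplacian stencil
K = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format="csc")
# Toy stand-in for the mass matrix: the iωσ term makes the system complex
omega, sigma = 2.0 * np.pi * 10.0, 0.01
M = 1j * omega * sigma * sp.identity(n, format="csc")
A = (K + M).tocsc()

s = np.zeros(n, dtype=complex)
s[n // 2] = 1.0          # point-like source term

lu = spla.splu(A)        # direct factorization (the role MUMPS plays in the paper)
e = lu.solve(s)
residual = np.linalg.norm(A @ e - s)
print(f"residual = {residual:.2e}")
```

A direct solver is attractive here because the factorization can be reused for many right-hand sides, that is, many transmitter positions, at little extra cost.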

Vector basis functions
At this point, we specify the basis functions in the reference element, which are constructed using 1-D Lagrange polynomials of degree N based on the corresponding Gauss and GLL quadrature points. The N + 1 Lagrange polynomials of Nth order in a 1-D reference element ξ ∈ [−1, 1] are defined by

φ_j(ξ) = ∏_{i=0, i≠j}^{N} (ξ − ξ_i)/(ξ_j − ξ_i), j = 0, …, N,  (8)

where ξ_j are the aforementioned Gauss, respectively, GLL points. This set of polynomials is characterized by the property

φ_i(ξ_j) = δ_ij,  (9)

and thus fulfils the discrete orthogonality condition, as each polynomial is exactly one at its corresponding quadrature point and zero at all other points. The choice of the quadrature points is motivated by a few reasons and has been discussed thoroughly in Fichtner (2010).
Here, the main arguments are summarized. First, using the GLL points guarantees that |φ_i(ξ)| ≤ 1 for any polynomial order [see Section A.2.3, starting from equation (A.30), in Fichtner 2010]. This, in combination with the increasing concentration of quadrature points towards the boundaries, can suppress Runge's phenomenon (cf. Section A.2.3, Fichtner 2010). Secondly, the GLL points are also Fekete points, meaning that this set of quadrature points maximizes the determinant of the Vandermonde matrix, which in return ensures that interpolation errors between collocation points due to numerical inaccuracies at the quadrature points are minimized (cf. Sections A.2.1 and A.2.4, Fichtner 2010). Thirdly, there exists a quadrature scheme based on the GLL points (Abramowitz & Stegun 1964). Hence, one can apply existing GLL quadrature formulae to approximate the integrals in eq. (6).
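The GLL nodes and the discrete orthogonality property φ_i(ξ_j) = δ_ij can be verified numerically. A minimal sketch (the helper names `gll_points` and `lagrange` are ours, not from the paper), using the fact that the interior GLL nodes are the roots of the derivative of the Nth-degree Legendre polynomial:

```python
import numpy as np
from numpy.polynomial import legendre

def gll_points(N):
    """Gauss-Legendre-Lobatto points: the endpoints -1, 1 plus the roots of P_N'."""
    cN = np.zeros(N + 1)
    cN[N] = 1.0                                  # P_N in the Legendre basis
    interior = legendre.legroots(legendre.legder(cN))
    return np.concatenate(([-1.0], np.sort(interior), [1.0]))

def lagrange(j, xs, x):
    """Lagrange cardinal polynomial phi_j on nodes xs, evaluated at x."""
    return np.prod([(x - xs[i]) / (xs[j] - xs[i])
                    for i in range(len(xs)) if i != j])

xs = gll_points(4)   # five GLL nodes for N = 4: -1, -sqrt(3/7), 0, sqrt(3/7), 1
# Discrete orthogonality: phi_i evaluated at node j gives the identity matrix
vals = np.array([[lagrange(i, xs, xj) for xj in xs] for i in range(len(xs))])
print(np.allclose(vals, np.eye(len(xs))))  # True
```

The clustering of the nodes towards ±1, visible already for N = 4, is what keeps the cardinal polynomials bounded and suppresses Runge's phenomenon.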
Extending our reference element to 3-D by defining a cubic element with coordinates ξ, η, ζ ∈ [−1, 1] and using the established 1-D Lagrange polynomials as building blocks, we express the vector basis functions as mixed-order products of these blocks along the three axis directions of the reference coordinate system:

Ñ^ξ_rst = φ^G_r(ξ) φ^GLL_s(η) φ^GLL_t(ζ) ξ̂,
Ñ^η_rst = φ^GLL_r(ξ) φ^G_s(η) φ^GLL_t(ζ) η̂,
Ñ^ζ_rst = φ^GLL_r(ξ) φ^GLL_s(η) φ^G_t(ζ) ζ̂,  (10)

where φ^G and φ^GLL designate 1-D Lagrange polynomials based on Gauss and GLL quadrature points, respectively. N denotes the polynomial order; ξ̂, η̂ and ζ̂ are the unit vectors in the directions of the reference coordinate system; and Ñ^ξ_rst, Ñ^η_rst and Ñ^ζ_rst are the vector basis functions parallel to the corresponding axial direction. It thus follows that each element contains N_b = 3 × N × (N + 1)² degrees of freedom (DOF) for polynomial order N. Thereof, 12N DOFs are located on the edges of the element, 6 × 2N(N − 1) on the faces and 3N(N − 1)² inside the element (Nédélec 1980). Raising the polynomial degree increases the number of DOF on edges and faces, thus strengthening the coupling between neighbouring elements. Table 1 lists the total number of DOF as a function of the polynomial degree N.
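The edge/face/interior DOF bookkeeping can be checked against the total N_b = 3N(N + 1)². A small sketch (the function name `dof_counts` is ours, not from the paper):

```python
def dof_counts(N):
    """Per-element DOF split for mixed-order curl-conforming hexahedral elements."""
    edge = 12 * N                    # N DOFs on each of the 12 edges
    face = 6 * 2 * N * (N - 1)       # 2N(N-1) DOFs on each of the 6 faces
    interior = 3 * N * (N - 1) ** 2  # remaining DOFs inside the element
    total = 3 * N * (N + 1) ** 2
    # Consistency check: the three groups must sum to the total
    assert edge + face + interior == total
    return edge, face, interior, total

for N in (1, 2, 3):
    print(N, dof_counts(N))
```

For N = 1 all 12 DOFs sit on the edges (the classical lowest-order Nédélec element); for N = 2 the split is 24/24/6 out of 54, illustrating how higher orders shift DOFs onto faces and into the interior.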
In order to integrate an arbitrary function in the reference element and project it onto the corresponding physical element, a coordinate transformation from the reference to the physical coordinate system is required, that is, each reference element needs to be mapped onto any physical element. This mapping operation is depicted in Fig. 1. The transformation allows to apply the same numerical quadrature procedure to all the elements within the mesh. Consequently, we need to establish the mappings of the basis functions and of the curls of the basis functions between the reference and physical domains. The appropriate relationships are given by (Peterson et al. 1998; Lee et al. 2006; Swartz & Davidson 2007)

Ñ = J⁻¹ Ñ̂,  ∇ × Ñ = (Jᵀ / det J) (∇̂ × Ñ̂),  (11)

where J is the Jacobian matrix of the coordinate transformation, J⁻¹ its inverse, det J its determinant and the hat denotes quantities in the reference domain. We note that the determinant is written without its absolute value, as the numbering scheme is right-handed, thus yielding a positive determinant, as opposed to left-handed numbering schemes for which the determinant becomes negative (Swartz & Davidson 2007). These transforms are H(curl)-conforming, because they maintain the tangential continuity of the field components (Cohen & Duruflé 2007). The Jacobian matrix is defined as follows:

J_ij = Σ_{m=1}^{M} (∂N^e_m/∂ξ_i) x_{m,j}, i, j = 1, 2, 3,  (12)

with (ξ_1, ξ_2, ξ_3) = (ξ, η, ζ) and (x_{m,1}, x_{m,2}, x_{m,3}) = (x_m, y_m, z_m), where N^e_m denote the shape functions and x_m, y_m and z_m are the coordinates of the anchor nodes of the element. The anchor nodes always contain the eight vertices of an element and may also include additional anchor nodes on edges, faces and inside the element. Each element in physical space is defined by such a set of anchor nodes and their associated shape functions. Using these components, any position vector in the physical element can be related to a corresponding position vector in the reference cube. The transformation between the two domains is expressed as

x(ξ, η, ζ) = Σ_{m=1}^{M} N^e_m(ξ, η, ζ) x_m,  (13)

with the aforementioned definitions, and where M denotes the number of anchor nodes. Generally, the shape functions are defined in terms of triple products of Lagrange polynomials, the degrees of
which can be adjusted depending on the complexity of the elements, such that where φ^GLL_m refers to the Lagrange polynomial at anchor node m based on GLL quadrature points. Distorted elements with straight edges and flat faces are accurately represented by linear shape functions based on products of Lagrange polynomials of degree 1, with the eight vertices constituting the anchor nodes. However, when dealing with more complex elements (e.g. curved edges and faces), the approximation of the geometry requires additional anchor nodes, and the corresponding shape functions are built using Lagrange polynomials of degree 2 or higher (Fichtner 2010). Using higher order shape functions to approximate the deformed elements introduces a complicated coordinate transformation factor, as remarked by Grayver & Bürg (2014), Kordy et al. (2016) and Kamm et al. (2020). This factor increases the complexity of the integrands in the system matrix. It adversely affects the accuracy of the quadrature, and thus the accuracy of the computed matrix entries. The same authors also offer different approaches to mitigate this problem, for example using higher order quadrature (Kordy et al. 2016) or applying Chebyshev approximation (Kamm et al. 2020) to maintain the accuracy of the integration.
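As an illustration of the mapping machinery, the following sketch evaluates trilinear (degree-1) shape functions of a hexahedron with eight vertex anchor nodes and the associated Jacobian. The vertex ordering, sign convention and function names are assumptions made for this example, not taken from the paper's code.

```python
import numpy as np

# Sign pattern of the 8 vertices of the reference cube [-1, 1]^3
# (assumed ordering: x varies fastest, then y, then z).
SIGNS = np.array([[i, j, k] for k in (-1, 1) for j in (-1, 1) for i in (-1, 1)])

def shape_functions(xi, eta, zeta):
    """Trilinear N_m = 1/8 (1 + s_x xi)(1 + s_y eta)(1 + s_z zeta), m = 1..8."""
    s = SIGNS
    return 0.125 * (1 + s[:, 0] * xi) * (1 + s[:, 1] * eta) * (1 + s[:, 2] * zeta)

def jacobian(nodes, xi, eta, zeta):
    """J[i, j] = d x_i / d xi_j for an (8, 3) array of vertex anchor nodes."""
    s = SIGNS
    dN = np.empty((8, 3))
    dN[:, 0] = 0.125 * s[:, 0] * (1 + s[:, 1] * eta) * (1 + s[:, 2] * zeta)
    dN[:, 1] = 0.125 * s[:, 1] * (1 + s[:, 0] * xi) * (1 + s[:, 2] * zeta)
    dN[:, 2] = 0.125 * s[:, 2] * (1 + s[:, 0] * xi) * (1 + s[:, 1] * eta)
    return nodes.T @ dN  # 3 x 3 Jacobian matrix
```

For the identity mapping (physical element equal to the reference cube) the Jacobian is the identity matrix, and for an axis-aligned box it is diagonal with det J equal to one eighth of the box volume.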

Assembly of local matrices and numerical integration
In this section, we outline the assembly of the local mass and stiffness matrices and subsequently detail briefly how to perform the numerical integration. To start, we recall that the assembly of the local mass and stiffness matrices includes the evaluation of products of two basis functions and of the curls of two basis functions, respectively (see eq. 6). Further, eq. (10) states that each set of vector basis functions parallel to its corresponding axial direction has N(N + 1)² unique combinations of r, s, t, which combined correspond to the 3N(N + 1)² local degrees of freedom. Thus, the local mass and stiffness matrices are of size 3N(N + 1)² by 3N(N + 1)². As each combination can be ascribed to a single degree of freedom, one can construct local numbering arrays Id_ξ, Id_η and Id_ζ for each axial direction, that is to say for each component of the vector basis function, storing the respective numbering. Each such 1-D array contains N(N + 1)² entries. Note that these arrays are local and completely independent of the global numbering of degrees of freedom. These arrays are prerequisites to determine the indices of any of the aforementioned products within the local mass and stiffness matrices. Next, we outline the assembly of the local stiffness matrix. We can insert the transformations given by eq. (11) into eq. (6), expand the product of the two curls and sum over indices r, s, t, once for each of the transformed basis functions k and l, which yields the following: where indices k and l compound r_k, s_k, t_k and r_l, s_l, t_l. It can be readily seen that there are 36 terms in eq. (16), which can further be condensed into nine combinations of the directional derivatives with regard to the three components of the vector basis functions. Each such combination corresponds to a single entry of the local stiffness matrix. The information about the indices thereof is already stored in the aforementioned mapping operators Id_ξ, Id_η and Id_ζ and can be retrieved via indices k and l. It follows that each summand can be inserted into the local stiffness matrix according to the determined indices. This is then repeated for each pass of the double summation, yielding nine new and different locations within the local stiffness matrix. Upon completion, the double summation has passed through all the locations within the local stiffness matrix and added up the corresponding contributions. The components A_pq with p, q = 1, 2, 3 collect the coordinate transformation information and physical properties of the respective element e and can be written as the following matrix: Hence, the local stiffness matrix K_e has to be preallocated and is then filled in as detailed by the double summation in eq. (16), where each pass fills in a part of it.
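A possible realization of the local numbering arrays described above is sketched below. The ordering of the (r, s, t) triples within each component is an arbitrary choice made for illustration; the only requirement is that the three arrays together enumerate all 3N(N + 1)² local DOF exactly once.

```python
# Illustrative sketch (assumed layout, not the paper's code): local numbering
# arrays Id_xi, Id_eta, Id_zeta assigning one local DOF index to every
# (r, s, t) combination of a mixed-order basis of order N. For each component
# the reduced index runs over r = 0..N-1 and s, t = 0..N, giving N(N + 1)^2
# entries per component and 3N(N + 1)^2 local DOF in total.
def local_numbering(N):
    n_comp = N * (N + 1) ** 2
    def make(offset):
        Id = {}
        for t in range(N + 1):
            for s in range(N + 1):
                for r in range(N):
                    Id[(r, s, t)] = offset + r + N * (s + (N + 1) * t)
        return Id
    return make(0), make(n_comp), make(2 * n_comp)
```

An entry of the preallocated local stiffness matrix would then be addressed, for example, as `K_e[Id_xi[(r_k, s_k, t_k)], Id_xi[(r_l, s_l, t_l)]] += contribution`.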
Similarly, the local mass matrix can be expanded as where J⁻¹ is the inverse of the Jacobian matrix, and where the components B_pq with p, q = 1, 2, 3 collect the coordinate transformation information and physical properties of the respective element e and can be written as the following matrix. Again, the local mass matrix M_e is preallocated analogously to the stiffness matrix, and for each pass of the double summation, k and l access the corresponding mapping operators Id_ξ, Id_η and Id_ζ and retrieve the indices for all the terms, which are then added into the local mass matrix at the appropriate locations.
The volume integrals stated in eqs (16) and (18) can only be evaluated numerically. Picking one entry of the local stiffness matrix, here the interaction of all derivatives of ξ_k with all derivatives of ξ_l, as an example, we outline the numerical integration in detail. We point out that the considered interactions appear only four times, as can be easily seen in eq. (16). Inserting the definitions of the basis functions given in eq. (10), and limiting the consideration to one combination of (r_k, s_k, t_k) and (r_l, s_l, t_l), the integration reads as follows. Following the approach introduced by Duruflé (2006), the numerical integration is based on the GLL quadrature rule defined on the corresponding quadrature points, that is, GLL quadrature points. We remark that the choice of the quadrature rule and points constitutes one of the main differences from finite element approaches, which use Gauss quadrature and Gauss quadrature points for integration if required. In classic FEM, quadrature itself can often be avoided altogether due to the simple nature of the basis functions (e.g. piece-wise linear hat functions), which allow analytical integration to be used instead.
We remark that both the stiffness and mass matrices involve products of basis functions of order N and thus polynomials of order 2N. The GLL quadrature of order N integrates polynomials up to order 2N − 1 exactly. Thus, using GLL quadrature of order N + 1 for evaluating the integrals of the stiffness and mass matrices yields exact evaluations of these integrals (Lee et al. 2006). However, in the case of the mass matrix, the (N + 1)th order of the GLL quadrature results in a non-diagonal matrix. To make it diagonal, one has to use the reduced (N − 1, N, N)th order of the GLL quadrature for the terms involving ξ_rst (Lee et al. 2006; Huang et al. 2019), where the orders reflect the three axial directions. For terms involving vector basis functions in the other two axial directions, the orders of quadrature have to be permuted correspondingly. The reduced quadrature orders, and thus approximate evaluations of the integrals, have been shown to yield the same accuracy as the exact evaluations (Lee et al. 2006).
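The quadrature rule in question can be sketched as follows (a minimal implementation for illustration, not the paper's code): the GLL nodes of order N are the endpoints ±1 together with the roots of P_N′, and the weights are w_i = 2 / (N(N + 1) P_N(x_i)²), which integrates polynomials up to degree 2N − 1 exactly on [−1, 1].

```python
import numpy as np
from numpy.polynomial import legendre as L

# Minimal sketch: Gauss-Legendre-Lobatto nodes and weights of order N
# (N + 1 points), exact for polynomials up to degree 2N - 1 on [-1, 1].
def gll(N):
    cN = np.zeros(N + 1)
    cN[-1] = 1.0                              # Legendre coefficients of P_N
    interior = L.legroots(L.legder(cN))       # roots of P_N'
    x = np.concatenate(([-1.0], np.sort(interior), [1.0]))
    w = 2.0 / (N * (N + 1) * L.legval(x, cN) ** 2)
    return x, w
```

For N = 3 this reproduces the nodes ±1, ±1/√5 with weights 1/6 and 5/6, and integrates x⁴ to its exact value 2/5 while x⁵ (degree 2N − 1) integrates exactly to zero.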
Having said all this, the first term in eq. (20) after numerical integration results in an expression with (ξ^GLL_rm, η^GLL_sm, ζ^GLL_tm) and w^GLL_m = w^GLL_rm w^GLL_sm w^GLL_tm being the coordinates of a quadrature point and the associated weights, respectively. Recalling the property φ^GLL_i(η^GLL_j) = δ_ij, we can eliminate the summation over index s_m. By definition of the Lagrange polynomial, a similar relationship does not hold for φ^G evaluated at GLL quadrature points. Furthermore, it becomes evident that the summation is only non-zero if indices s_k and s_l coincide. The remaining summands in eq. (20) can be simplified similarly, and the integrations become summations. From eq. (22) we can see that the numerical integration involves the computation of 1-D Lagrange basis functions associated with Gauss integration points at GLL quadrature points, as well as the evaluation of the derivatives of Lagrange polynomials at GLL quadrature points. The values of the former and the latter are stored in N × (N + 1) and (N + 1) × (N + 1) lookup tables, respectively. We emphasize that both lookup tables need to be computed only once and are then stored for subsequent use. All the other interactions of the basis functions can be inferred by applying the approach outlined above. When the global matrices are assembled, the orientations of the local degrees of freedom must be taken into account with respect to the global degrees of freedom to ensure the continuity of the tangential components of the unknowns, because general hexahedral meshes do not admit consistent global and local orientations. This issue has been detailed in Anjam & Valdman (2015), Fuentes et al. (2015), Agelek et al. (2017) and Kynch & Ledger (2017), and we refer the reader to the aforementioned publications for further information. Our algorithm adopts the implementation by Duruflé (2006). Briefly, the orientation conflict is resolved by evaluating whether the local direction of a degree of freedom coincides with the global direction. If the local and global directions of a degree of freedom are opposite to each other, the rows and columns of the elementary matrices associated with the degree of freedom in question are multiplied by −1.
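The sign-flip fix described above admits a compact sketch (array names are assumed): a boolean flag per local DOF marks disagreement between local and global direction, and the corresponding rows and columns of the elementary matrix are negated.

```python
import numpy as np

# Illustrative sketch of the orientation fix: if a local DOF's direction is
# opposite to the agreed global direction, the corresponding row and column
# of the elementary (local) matrix are multiplied by -1.
def apply_orientation(K_e, flip):
    """flip: boolean array, True where local and global directions disagree."""
    s = np.where(flip, -1.0, 1.0)
    return K_e * np.outer(s, s)  # negates flagged rows and columns at once
```

Note that an entry coupling two flagged DOF is negated twice and therefore keeps its sign, which is consistent with flipping the sign of the associated basis functions themselves.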

Source term
In CSEM modelling, long grounded wire sources or loop sources are commonly decomposed into finite-length current segments which are placed along the edges of elements (Haber 2014; Li et al. 2017; Rochlitz et al. 2019). Each such segment can be viewed as an electric dipole.
For instance, for an electric dipole oriented in the direction of a vector dl, the current density is defined as (Ward & Hohmann 1988) where x_0, y_0 and z_0 indicate the source coordinates, I denotes the current, dl is the dipole (i.e. edge) length and δ is the Dirac delta function; the unit vector along dl points in the direction of the source edge. By inserting this expression into eq. (6), its right-hand side can be evaluated, and it follows that any element of the source vector s in eq. (7) pertaining to a source current segment amounts, after assembly, to an expression in which e_l is the corresponding basis function evaluated at the projected source coordinates on the reference element. Hence, the source contribution of a current segment is distributed over N degrees of freedom, the polynomial order N determining the number of degrees of freedom along an edge. The sign is determined by the orientation of the local direction of the corresponding degree of freedom with respect to the global direction.
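Restricted to a single edge, the source assembly described above can be sketched as follows (the function names and the 1-D restriction are our assumptions): integrating the Dirac delta against the test functions reduces to evaluating, at the projected source coordinate on the reference edge, the N Gauss-based Lagrange polynomials that carry the tangential field along that edge, each scaled by I dl.

```python
import numpy as np

# Hedged sketch (assumed 1-D restriction, not the paper's code): contribution
# of an electric dipole lying on an element edge to the source vector.
def lagrange_all(nodes, x):
    """Values of all Lagrange cardinal polynomials with the given nodes at x."""
    n = len(nodes)
    vals = np.ones(n)
    for i in range(n):
        for j in range(n):
            if j != i:
                vals[i] *= (x - nodes[j]) / (nodes[i] - nodes[j])
    return vals

def edge_source(I, dl, xi0, N):
    """N edge-DOF contributions of a dipole at reference coordinate xi0."""
    gauss_nodes, _ = np.polynomial.legendre.leggauss(N)  # N Gauss points
    return I * dl * lagrange_all(gauss_nodes, xi0)
```

Because Lagrange cardinal polynomials sum to one at any point, the N contributions sum to I·dl, that is, the full source moment of the segment is preserved.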

Mesh generation
Creating meshes for simple half-space or layered-earth models is fairly straightforward. However, it becomes increasingly challenging to generate meshes of sufficient quality for geometrically complicated models. It is also well known that the mesh quality influences the accuracy of the solution (cf. Schwarzbach 2009). Without touching upon details, it is worth mentioning that the development of unstructured hexahedral meshing tools that provide reliable frameworks for automated mesh generation of arbitrary 3-D geometries is an area of ongoing research and remains a non-trivial challenge (Kovalev 2005), in particular in comparison to mesh generation based on tetrahedra (Shepherd & Johnson 2008; Staten et al. 2010). Although various software packages exist, they are usually only applicable to a certain class of problems. Here, in order to carefully set up high-quality hexahedral meshes, we use the relatively simple, flexible and open-source software Gmsh (Geuzaine & Remacle 2009). Gmsh allows the design of complex 2-D models which can be extruded to obtain 3-D models. Arbitrary topography can be implemented by moving the sets of anchor nodes of elements at the air-Earth interface in the z-direction to represent the desired surface topography (Nam et al. 2007; Grayver & Kolev 2015; Kordy et al. 2016; Kamm et al. 2020). To better describe the coordinate transformation of the potentially non-planar element surfaces, 2nd order elements with additional anchor nodes as described in Section 2.3 are used. Higher order elements also allow for curved edges, as used by Grayver et al. (2019) for 3-D MT modelling in a spherical Earth. Gmsh also enables the specification of several domains (known as physical surfaces and volumes). These entities are then used to set markers for each element, providing a simple way to attribute physical properties. Furthermore, Gmsh accommodates the generation of meshes with variable element size, which facilitates mesh refinement around the source, the receivers and the structural boundaries as well as the growth of element size towards the domain boundaries. As to the choice of the domain size to mitigate boundary effects, we take the plane-wave skin depth criterion as a guideline to determine adequate limits. In particular, the plane-wave skin depth is evaluated using the background conductivity σ_b, permeability μ_b and permittivity ε_b and the source frequency f. The domain boundaries are then set approximately five plane-wave skin depths away from the area of interest to ensure sufficient attenuation of the electric field at the boundaries. In case the same mesh is used for multiple source frequencies, the lowest frequency is used to obtain the plane-wave skin depth and the domain boundaries are set accordingly.
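The domain-sizing guideline can be sketched as below, using the quasi-static plane-wave skin depth δ = √(2/(ωμσ)) as an approximation (the paper also accounts for permittivity; the constants and names here are ours):

```python
import math

# Simple sketch of the domain-sizing guideline: plane-wave skin depth
# evaluated with the background properties and the lowest source frequency,
# with boundaries placed about five skin depths from the area of interest.
MU0 = 4e-7 * math.pi  # free-space magnetic permeability [H/m]

def skin_depth(sigma, f, mu=MU0):
    """Quasi-static plane-wave skin depth sqrt(2 / (omega * mu * sigma)) in m."""
    omega = 2 * math.pi * f
    return math.sqrt(2.0 / (omega * mu * sigma))

def boundary_distance(sigma, f_min, n_skin=5, mu=MU0):
    """Distance from the area of interest to the domain boundary."""
    return n_skin * skin_depth(sigma, f_min, mu)
```

For example, σ = 0.01 S m⁻¹ and f = 1000 Hz give δ ≈ 159 m, so boundaries roughly 800 m out, whereas the resistive background of Scenario 1 (10⁻⁴ S m⁻¹) at 1 Hz gives δ ≈ 50 km, explaining the large low-frequency domains in Table 2.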

Implementation
Apart from pre-processing, including mesh generation, and the visualization part of the post-processing step, the numerical procedure described above has been implemented in a stand-alone Fortran code. The implementation features unstructured hexahedral meshes, high-order polynomial approximations and spatially variable constitutive parameters σ, μ and ε. The system of linear equations (eq. 7) is solved directly using the multifrontal massively parallel sparse direct solver MUMPS (Amestoy et al. 2001). MUMPS can use both shared- and distributed-memory parallelism.

N U M E R I C A L E X A M P L E S
To verify the accuracy of our implementation, we select five scenarios. The air conductivity for every test is set to 10⁻⁸ S m⁻¹. Further, note that each simulation takes displacement currents into consideration and, if not stated otherwise, the magnetic permeability is that of free space. Numerical accuracy is assessed using the measure of relative deviation for the various field components at receiver site rs, defined as: where v_rs^ref and v_rs^SEM denote the real part, imaginary part or the amplitude of an individual electric or magnetic field component of the reference solution and the spectral element result, respectively. In the case of 3-D models, which do not have analytical solutions, we assess our solutions using the results of another 3-D modelling code, which serve as the reference for comparison. All the numerical tests have been performed on the computer cluster Rackham operated by Uppsala's Multidisciplinary Center for Advanced Computational Science (UPPMAX). All simulations have been run on a single node containing 20 cores, meaning that the algorithm only utilizes shared-memory parallelism. As to the technical specifications, each compute node consists of two Intel Xeon E5 2630 v4 processors at 2.20 GHz/core (10 cores, 20 threads, 25 MB LLC and a bandwidth of 68.3 GB s⁻¹), and has either 8 × 16384 MB (128 GB), 16 × 16384 MB (256 GB) or 16 × 65536 MB (1 TB) of ECC 2400 MHz DIMM DRAM memory.
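The accuracy measure can be sketched as follows; since the defining equation is not reproduced here, this is our reading of it as a component-wise relative deviation in per cent (function and variable names are assumptions):

```python
import numpy as np

# Sketch (assumed reading of the deviation measure): relative deviation in
# per cent between a reference solution and the SEM result, applied
# component-wise (real part, imaginary part or amplitude) at each receiver.
def relative_deviation(v_ref, v_sem):
    v_ref = np.asarray(v_ref, dtype=float)
    v_sem = np.asarray(v_sem, dtype=float)
    return 100.0 * np.abs(v_sem - v_ref) / np.abs(v_ref)
```

Averaging the returned values over all receivers yields the average relative deviations reported in Tables 3-5 and 8.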

1-D examples
We briefly describe the 1-D models used for some of the numerical experiments. Note that each scenario solely outlines the model layout as well as the source setup. Information concerning the source frequency, receiver layout and mesh discretization is given in the corresponding subsections, as it may vary from simulation to simulation.
Scenario 1. Homogeneous half-space model as depicted in Fig. 2(a) with a half-space conductivity of 10⁻⁴ S m⁻¹, in which the EM field is excited by a 200-m-long grounded cable oriented along the x-axis and centred on the centre point of the computational domain, which is henceforth called the origin and has coordinates (0, 0, 0). The source moment is 100 Am.
Scenario 2. Layered model as illustrated in Fig. 2(b), consisting of a 500-m-thick conductive layer of 0.01 S m⁻¹ starting at a depth of 500 m and embedded in a 10⁻⁴ S m⁻¹ half-space. The source consists of a 200-m-long grounded cable oriented along the x-axis and centred on the origin. The source moment is 100 Am.

Results for Scenario 1
In this subsection, we present a comparison with semi-analytical solutions for Scenario 1. Note that the source frequency is set to 1000 Hz. Furthermore, due to the simple geometry of Scenario 1, the computational domain is discretized using 46 × 46 × 14 non-uniform rectangular elements. The source is subdivided into two 100-m-long segments, thus avoiding extensive mesh refinement around the source itself, as opposed to finite element discretizations based on tetrahedral meshes, which require fine meshing. Having said this, the smallest elements are still located in the vicinity of the source. However, the element size increases steadily outwards to the domain boundaries, and mesh refinement around receivers is not needed. Essential information concerning the domain size, the total number of elements and the resulting number of degrees of freedom for the described model is summarized in Table 2.

Table 2. Discretization information for Scenarios 1 and 2: computational domain size, total number of elements and resulting numbers of degrees of freedom used for the homogeneous half-space (Fig. 2a) and layered (Fig. 2b) models. The different model extensions indicate the domain sizes used for simulations within various frequency ranges, namely 1-100 and 100-10 000 Hz. The smallest cell size for all these models is 100 × 100 × 100 m³. All simulations performed with these domain parameters use 3rd order basis functions.

We compute the electric and magnetic fields for receivers placed at a 45° angle to the source direction, with source-receiver offsets between 1 and 5 km spaced at 250 m increments, using the spectral element method based on 3rd order basis functions. We point out that only a subset of all the solutions is shown. Fig. 3 shows the x-component of the electric field as well as the y-component of the magnetic field for a signal frequency of 1000 Hz. For comparison, it also displays the semi-analytical reference solutions obtained with the 1-D forward modelling module of EMILIA (Kalscheuer et al. 2012, 2015), which computes the solutions using fast Hankel transforms (FHT; Weidelt 1986; Christensen 1990). The spectral element and reference solutions are in good agreement with each other. This is further corroborated when considering the relative deviations as plotted in the bottom row of Fig. 3.

Table 3. Deviations for Scenario 1: average relative deviations of all non-zero electric and magnetic field components at a signal frequency of 1000 Hz for the homogeneous half-space model in Fig. 2(a) across the entire range of source-receiver offsets shown in Fig. 3. Note that the imaginary part of the y-component of the electric field is zero in theory and thus its deviation is not calculated.

As one can see, for the most part the relative deviations lie well below or around 2 per cent. Exceptions are found in connection with sign reversals, which impair the accuracy due to the sharp change of the amplitude over a short distance. In the case of this simulation, the moderate deviation is further enhanced by the edge lengths of the elements, as they are larger than the scales over which sharp field changes and sign reversals occur, that is, a single element has to resolve them. Even third order interpolation struggles to resolve such abrupt and strong variations in such an environment. Nevertheless, the accuracy is still within acceptable margins. We conjecture that smaller elements would improve the accuracy, and so would higher order approximations. In addition, a slight increase in relative deviations can be observed with increasing receiver offset. This could be related to the larger element sizes towards the domain boundaries, which adversely affect the interpolation. It may also indicate an emerging influence of boundary reflections.
The average relative deviations across all the receivers at a frequency of 1000 Hz for all other electric and magnetic field components, with the exception of the vertical electric field, are listed in Table 3. Visualizations of the components not displayed in Fig. 3 are deliberately omitted, as the SEM responses coincide very accurately with the reference solutions. Furthermore, the deviation behaviour described above is also observed for the components not shown, although to a lesser degree. Fig. 3 and Table 3 demonstrate that the spectral element solution is accurate.

Results for Scenarios 1 and 2 over frequency
To demonstrate the behaviour of the solutions over a broad frequency range, we tested our algorithm for 21 frequencies in the range from 1 to 10 000 Hz for Scenarios 1 and 2. This frequency band is motivated by land-based surveys, which typically use sources transmitting within this range. Before describing the discretization of the model, we remark that it is impractical to have a single optimal mesh discretization for all frequencies within the considered range. The reason is that the attenuation and phase behaviour of EM fields depend on the frequency and the physical properties of the media, and so do the skin depth and wavelength. On account of this, the domain boundaries and element size have to be chosen adequately in order to mitigate boundary reflections and effects of undersampling, respectively. We thus choose to perform the simulations on two differently sized computational domains. The domain extensions set for the lower (1-100 Hz) and higher (100-10 000 Hz) frequency ranges are indicated in Table 2. Apart from the domain size, which may vary according to the considered frequency, Scenario 1 is kept as described previously, including the source setup; the volumes of the hexahedra are just scaled using a constant factor. As far as Scenario 2 in Fig. 2(b) is concerned, the model domain is meshed using 29 × 29 × 32 and 32 × 34 × 27 irregularly spaced rectangular elements for the lower and higher frequency ranges, respectively (Table 2). Regarding the vertical distribution of the elements within the subsurface, the uppermost layer consists of 5 equally spaced segments, and the conductive stratum is divided into 10 and 5 elements of equal vertical edge length for the lower and higher frequency ranges, respectively. The underlying half-space is discretized in the vertical direction using 10 segments steadily increasing in thickness with depth.
The spectral element results based on 3rd order basis functions for a cross-line receiver at 2.5 km source-receiver offset are plotted in Fig. 4 for the homogeneous half-space and layered models in Figs 2(a) and (b), respectively. As a reference, the semi-analytical solutions are shown as well. The amplitude comparison exhibits good agreement across the entire frequency range. Table 4 reports the respective average relative deviation for each component. As can be seen, the mean relative deviations are well below 1 per cent for both models. However, the layered model exhibits marginally higher deviations due to the additional conductivity contrast in the subsurface, which contributes to the non-smoothness of the CSEM field and slightly compromises the accuracy of the solution. The results presented in Fig. 4 and Table 4 confirm the validity of the spectral element solutions across the selected frequency range and for layered 1-D models.

3-D examples
We briefly outline the 3-D models devised for the subsequent numerical experiments. Each scenario describes the model layout, source information including source layout, source moment and source frequency, as well as the receiver layout. Discretization information is relegated to the corresponding subsections describing the results of the numerical experiments.

Scenario 3. 3-D model as depicted in Fig. 2(c), comprising a buried conductive body beneath a thin conductive cover. The source is a 400-m-long y-directed grounded cable extending from (500, −300, 0) to (500, 100, 0) m. The source moment is 400 Am and the source frequency is 100 Hz. Receivers are deployed across the target body at intervals of 100 m between the coordinates (−4000, 0, 0) and (0, 0, 0) m, meaning that the observation line is perpendicular to the transmission direction and offset by 500 m.
Scenario 4. 3-D model having a trough and a ridge with a maximal depression and elevation of 400 m, as illustrated in Fig. 2(d). The trough is separated from the ridge by a small plateau of 500 m width, as depicted in Fig. 6. Both features extend all the way to the boundary in the y-direction. The subsurface has a homogeneous conductivity of 10⁻⁴ S m⁻¹. The source is a 200-m-long grounded cable oriented along the x-axis and centred on the origin. The source frequency is 100 Hz and the source moment is 100 Am. On the main profile, four receivers are deployed on the slope of the trough with a spacing of 500 m from 1 km distance to 2.5 km, one receiver is located on the plateau at 3.25 km distance, and three additional receivers are placed on the slope of the ridge from 4 to 5 km distance with a spacing of 500 m. All receivers of this main profile are offset by 1 km in the y-direction and marked by red dots in Fig. 6. In addition, we present results for a cross-line profile with nine receivers located at x = 0 km and y = 1, . . ., 5 km at 0.5 km spacing. These cross-line receivers are marked by blue dots in Fig. 6.

Scenario 5. A conceptual 3-D example involving two conductive subsurface structures and a cover layer of 400-600 m thickness that includes a topographic feature emulating a foothill (see Fig. 7). The dimensions of the two bodies are illustrated in Fig. 8. The conductivities of the air, background, cover layer and the two bodies are 10⁻⁸, 10⁻⁴, 0.01, 1 and 0.1 S m⁻¹, respectively. The source is an x-directed grounded cable extending from (−100, 0, 0) to (100, 0, 0) m. The transmitted frequency is 10 Hz and the source moment is 1 Am. Receivers are spread over the area of interest and are indicated as red and black points in Fig. 7.

Results for Scenario 3 with variable magnetic permeability
In this subsection, results from numerical simulations of Scenario 3 are presented. The simulation includes 90 714 elements, is based on 2nd order basis functions and yields 2 200 218 degrees of freedom on a domain that spans 10 × 10 × 10 km with cell volumes ranging from about 3.3 × 10⁴ to 2.6 × 10¹⁰ m³. Fig. 5 depicts an extract of a vertical slice at y = 0 m of the subsurface part of the mesh used for this example. Superimposed are cuts through the hexahedral elements highlighting the unstructured mesh design. It further shows some level of mesh refinement around the source and the buried body, emphasizing that no mesh optimization is performed for receivers. In the bottom panel, the extracted section presenting the mesh discretization of the buried body accentuates how little refinement is required. In contrast, finite element methods using first-order polynomial shape functions and hexahedral or tetrahedral meshes require fine meshes even around receiver sites (e.g. Ren et al. 2013; Rulff et al. 2021).
Electric and magnetic fields due to a source transmitting at 100 Hz are compared to finite element solutions computed using the open-source software package custEM (Rochlitz et al. 2019). We remark that the finite element solutions are computed using second-order vector basis functions as well. Fig. 9 shows the comparisons between the x-components of the electric and magnetic fields. The relative deviation plots reveal reasonable agreement between the numerical schemes, considering the different discretization procedures based on tetrahedral or hexahedral meshes and thus varying and incomparable mesh quality parameters that influence the numerical accuracy of the solutions for each method. Further, the growing deviations for the imaginary part of the x-component of the magnetic field at offsets >3 km are due to boundary effects. We only show these two components because their deviations are representative of all the field responses; the average relative deviations of the omitted components listed in Table 5 attest to this. To complement the comparison, Table 6 reports the computation times for both methods using the same computing platform with the same number of CPU cores. Note that the total run time is split into t_A, which encompasses the handling of input information, the assembly of the system matrix and the right-hand side vector as well as the writing out of data, and t_S, which entails the time the solver spends analysing, factorizing and solving the assembled system. In addition, it lists the number of elements used to discretize the corresponding meshes and the resulting degrees of freedom associated with them. As the number of degrees of freedom is kept in a comparable range, it is not surprising that the total run times are approximately the same. It is, however, worth noting that the spectral element mesh requires about half as many elements as the finite element mesh to discretize the computational domain for vector basis functions of the same order.
To show that the code can handle variable magnetic permeability distributions, we slightly alter Scenario 3 displayed in Figs 2(c) and 5. Namely, the thin conductive layer is removed and the relative magnetic permeability of the buried body is increased. We recompute the CSEM responses for relative magnetic permeabilities of 1.3, 1.5 and 2. Receivers are deployed across the body from coordinates (−2400, 500, 0) to (200, 500, 0) m. The surface projection of the body extends from −1336 to −663 m in the x-direction and from −1000 to 2000 m in the y-direction. Results for the field components with the largest discrepancies are plotted in Fig. 10. One can see that the change of relative magnetic permeability of the small-scale buried body does influence the CSEM responses, particularly for the receivers located above it. However, the influence is not very strong, and the responses converge again with increasing distance from the anomaly. Overall, we observe that the imaginary parts of the CSEM responses are more affected by the change in magnetic permeability than the real parts.

Results for Scenario 4
Our next example considers the model described in Scenario 4. The domain is discretized using 49 140 elements and spans 30 × 25.48 × 30 km. The cell volumes range from around 7 × 10⁴ to 1.435 × 10¹¹ m³. We used the spectral element method based on 2nd order basis functions, resulting in 1 194 460 degrees of freedom. Our results are compared to two finite element solutions based on tetrahedral mesh discretizations, obtained using the open-source software package custEM with 2nd order vector basis functions (Rochlitz et al. 2019) and a code by Rulff et al. (2021) using 1st order vector basis functions, and to the corresponding half-space responses. Table 6 compares the custEM and spectral element approaches in terms of the numbers of elements used for mesh discretization, the associated system sizes (numbers of DOF) and their run times. For the main profile (red dots in Fig. 6), Fig. 11 displays the components of the electric field tangential to the topography in the x-z plane (E_t) and in the y-direction (E_y) for all three numerical schemes, and also plots the response of a homogeneous half-space with flat topography to show the influence of the topography. One can observe good agreement between the three numerical schemes. One can also visually discern the small differences between the responses of the flat and trough-ridge topographies (cf. Fig. 11). Note that, due to the small amplitude of the elevation change, the trough-ridge topography elicits only a weak topography effect. Nonetheless, the influence of topography is clearly identifiable when considering field components that vanish for the flat topography. From theory, it is known that the y-component of the electric field as well as the x-component of the magnetic field are zero when the receivers are deployed in a cross-line configuration across a homogeneous half-space without topography. For the model with topography, these field components do not vanish, as shown in Fig. 12 for the receivers on the cross-line profile (blue dots in Fig. 6), demonstrating the effect of topography on CSEM responses.

Results for Scenario 5
Given that the models presented hitherto deal with relatively simple structures and surface topographies, we consider the model described in Scenario 5 to examine the ability of our algorithm to model responses of a complex subsurface. In order to represent topography at the air-Earth interface, the vertices of the elements at that interface are moved vertically such that they follow the desired topography, similarly to Nam et al. (2007) and Kordy et al. (2016). However, to better represent the distorted elements, we use 2nd order shape functions and associated anchor nodes as described in Section 2.3. We emphasize that these nodal shape functions and associated anchor nodes, which describe the elements in the mesh and the coordinate transformation from the physical to the reference domain, are not to be confused with the vector basis functions used to represent the electric field and the test functions required for the spectral element discretization. The computational domain spans a volume of 30 × 25.48 × 30 km and is discretized using 182 700 hexahedral elements. The simulation is based on 2nd order basis functions, which yields 4 422 294 degrees of freedom. Our results are compared to a tetrahedral-based finite element algorithm (Rulff et al. 2021), showing overall good agreement. Table 7 contains the key computational parameters involved in the simulations for each method. Figs 13 and 14 present the responses of all components along a receiver line perpendicular to the source at an offset of 2.5 km and another one parallel to the source at an offset of 2.73 km, as displayed by black dots in Fig. 7. It is noted that the x- and y-components of the electric field are rotated in order to follow the slope of the surface topography. The two numerical schemes show good agreement, which is further corroborated by the average relative deviations of the real and imaginary parts for each component across all receivers, as listed in Table 8. With regard to these average deviations for the electric and magnetic fields, at first glance one might say that 5-10 per cent deviation is considerable. However, a closer look reveals that the largest deviations occur towards larger source-receiver offsets as well as for the E_y, H_x and H_z components at in-line receivers (y-coordinate equal to zero). It is important to note that these components at the in-line receivers only record 3-D effects, because they would be zero in a 1-D case, and are thus of considerably lower amplitude than the E_x and H_y components (Fig. 13). Due to these lower amplitudes, it is likely that differences in the meshing and interpolation schemes of the two methods have more pronounced effects at such receiver locations. In addition, the representation of the topography with hexahedral and tetrahedral meshes is not identical, yielding surface topographies that are only approximately the same. In particular, the tetrahedral mesh consists solely of elements with planar faces, as opposed to the hexahedral mesh that has non-planar faces. Evidently, this influences the interpolation of the electric and magnetic fields at receiver locations and constitutes an additional source of deviation. Having said all this, we argue that 5-10 per cent deviation is acceptable.
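For reference, the average relative deviation used throughout this section as an error proxy can be computed as sketched below. This is a minimal illustration, not the authors' implementation; the exact definition (e.g. whether real and imaginary parts are treated separately, as in Table 8) is an assumption here.

```python
import numpy as np

def avg_relative_deviation(f_num, f_ref):
    """Average relative deviation (in per cent) between a numerical
    solution f_num and a reference solution f_ref across all receivers."""
    f_num = np.asarray(f_num, dtype=float)
    f_ref = np.asarray(f_ref, dtype=float)
    return 100.0 * np.mean(np.abs(f_num - f_ref) / np.abs(f_ref))

# Toy example: two receivers, 5 and 10 per cent off the reference
print(avg_relative_deviation([1.05, 0.90], [1.00, 1.00]))  # → 7.5
```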

Convergence study for Scenarios 1 and 2
Having verified our numerical scheme, we have yet to investigate whether high-order polynomial approximations are advantageous with respect to the discretization error. In the following, we illuminate the convergence behaviour of the discretization error of the spectral element method for CSEM applications. Note that the relative deviation in this subsection can be understood as a proxy for the discretization error.
First, we address the plain strategy of simply increasing the polynomial degree for the same mesh discretization, commonly referred to as p-refinement. To do so, we set up a fairly coarse mesh for Scenarios 1 and 2, respectively, and compute solutions for polynomial orders 1 to 5. In all cases, the mesh is composed of 14 × 14 × 14 elements and the domain dimensions remain unchanged. The vertical discretization of the subsurface of the layered model (Scenario 2) uses two and three evenly spaced edge lengths for the top and conductive layers, respectively. The underlying half-space consists of two irregularly spaced edges along the vertical axis. The source setups are described in Section 3.1 and the transmitted source frequency is 1000 Hz. Receivers are deployed in cross-line configuration, spaced at 250 m, and range from 1 to 5 km offset from the source. It is worth noting that all receivers are encompassed by just two elements, meaning that the receivers are located on a common face of these elements.
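To give a sense of how the system size grows under p-refinement, the per-element count of degrees of freedom can be sketched as follows. This assumes an edge-based (curl-conforming, first-kind Nédélec-type) hexahedral element with 3N(N + 1)² degrees of freedom at order N; the exact counts underlying Table 1 may differ depending on the basis construction.

```python
def dofs_per_hexahedron(N):
    # Edge-based (curl-conforming) DOF per hexahedral element of order N,
    # assuming a first-kind Nedelec-type construction: 3 * N * (N + 1)**2.
    return 3 * N * (N + 1) ** 2

# For N = 1 this gives 12, i.e. one DOF per edge of the hexahedron.
for N in range(1, 6):
    print(N, dofs_per_hexahedron(N))
```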
Fig. 15 shows the convergence curves. It depicts the average deviations relative to the semi-analytical 1-D solutions for the different electric and magnetic field components as a function of the polynomial order and the number of degrees of freedom. The dashed horizontal line indicates the 5 per cent deviation threshold as a reference. It is evident that the first-order approximation yields inaccurate solutions, which presumably is due to the inadequate mesh discretization. The results for most components improve considerably with each subsequent increase of the polynomial order of the approximation. This improvement in accuracy is ascribed to the high-order polynomials, which assume an Nth order dependency of the electric field within an element and an (N − 1)th order dependency of the magnetic field. Hence, higher degree polynomials capture the spatial variations of the EM field more accurately. Simulations based on 4th and 5th order basis functions achieve sufficiently accurate solutions even in view of the rather poorly designed coarse meshes. Speaking of the crude mesh, it is worth recalling that all receivers are encompassed by only two elements. Thus, these results give evidence that mesh refinement around receivers has limited influence on the accuracy of the responses at receiver sites. Consequently, mesh design can be focused on interfaces characterized by conductivity contrasts, where the solution is non-smooth and mesh refinement is better suited to enhance accuracy than increasing the approximation order.
Apart from the error convergence, other numerical quantities of interest are listed in Table 9. For each polynomial order, it details the resulting number of degrees of freedom, the total run time and the memory used to perform the factorization, obtained from the diagnostics of MUMPS. The reported total run times as well as the memory consumption are given for the layered model. However, they are almost identical for the homogeneous half-space model, as it has the same number of elements as the layered model. The numbers show a substantial increase in all quantities as the polynomial order is incremented. It becomes apparent that the enhanced accuracy at low mesh resolution for high-order approximations comes at the price of high memory demands and prolonged simulation times. Regarding the run times, the breakdown into assembly and solver times given in Table 9 reveals that the solving stage takes up around 70 per cent of the total time. Furthermore, it indicates that the ratio between the assembly and solving phases is not influenced by the polynomial order of the spectral element approximation and is roughly 1 : 2.8 in this example. Extrapolating the memory requirement to higher order approximations (i.e. polynomial degree ≥ 4) based on finely meshed models reveals that such simulations are hardly feasible without relying on high performance computing facilities. From a practical point of view and in the context of memory consumption, this could possibly restrict the use of high order polynomial approximations for complex geometries that require extensive meshes, resulting in an excessive number of degrees of freedom. Nonetheless, in the case of moderately finely meshed models, 2nd and 3rd order approximations become computationally affordable and yield sufficiently accurate results, as seen for the presented 3-D example.
The presented convergence curves, which plot the deviation (representative of the discretization error) against the number of degrees of freedom, disclose no information about the time needed to attain a certain error threshold. From a practical point of view, we require a fast and accurate numerical method. Hence, we need to find the best suited strategy that minimizes the total run time while ensuring an accurate solution. An appropriate approach to accomplish this is to plot the relative deviations for simulations with different polynomial orders and several mesh discretizations as a function of total run time.

Table 9. Computational information for Scenario 2: Computational parameters recorded for the layered model depicted in Fig. 2(b) relating to the polynomial degree (cf. right-hand panel of Fig. 15). The mesh is discretized using 14 × 14 × 14 elements. The run time t is further split into assembly (t_A) and solver times (t_S).
The initial mesh for the layered model as described above is common to all simulations. Then, we progressively introduce global mesh refinement. For the three-layer model, the results are displayed in Fig. 16 and show the mean relative deviations in terms of numbers of degrees of freedom, CPU time, number of elements and memory requirements for the simulations. The following observations can be made. First, all convergence curves exhibit similar behaviour of the deviation as the mesh is globally refined. Namely, the addition of elements and, hence, degrees of freedom reduces the discretization error rapidly. Then, the convergence rates begin to level off and, at that stage, further refinement ceases to improve the accuracy significantly. Secondly, overall, higher degree polynomials yield slightly more accurate solutions for a comparable number of degrees of freedom. Looking at the number of elements involved reveals that substantially fewer elements are required to do so. In terms of run time, the curves indicate that it takes less time to obtain similar error levels with high order approximations. From a computational point of view, they also suggest the use of 2nd and 3rd order polynomials, as these approximations perform comparably to higher order ones but benefit from allowing for more densely meshed models, which is crucial for complicated geometries with distinct conductivity contrasts that cause discontinuities in the normal component of the electric field. Thirdly, the aforementioned levelling off may be due to the global refinement technique, which mainly concentrates on increasing the global mesh density and, thus, on decreasing the global discretization error, which may not translate to a decrease in the relative deviation at receiver sites. Whereas higher polynomial degrees are efficient in regions with smooth solutions (e.g. in the air), increasing the global mesh density proves not particularly effective. Instead, localized refinement around conductivity contrasts would be better suited to decrease the error at measuring sites. Penultimately and remarkably, the 4th order approximation gives quite accurate solutions even for the initial coarse mesh. Finally, further numerical investigation and/or theoretical considerations are needed to better understand the behaviour of the discretization error of the spectral element method.
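The convergence rates discussed above can be estimated from deviation-versus-DOF data by a least-squares fit in log-log space, as sketched below. The data here are synthetic and purely illustrative, not values taken from Fig. 16.

```python
import numpy as np

def convergence_rate(n_dof, err):
    """Estimate the rate r in err ~ C * n_dof**(-r) from a
    least-squares fit in log-log space."""
    slope, _intercept = np.polyfit(np.log(n_dof), np.log(err), 1)
    return -slope

# Synthetic data following err = 100 * n_dof**(-2/3) exactly
n_dof = np.array([1.0e4, 8.0e4, 6.4e5])
err = 100.0 * n_dof ** (-2.0 / 3.0)
print(convergence_rate(n_dof, err))  # ≈ 0.667
```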
To put the convergence behaviour in context, Fig. 16 includes the theoretical quasi-optimal slope of the convergence rate for each polynomial order, as expected of global and non-goal-oriented adaptive mesh refinement techniques for problems with singularities (Chen et al. 2007). This quasi-optimality property states that the discretization error e converges according to |e| ≤ C · N_DOF^(−p/3), where C is a coefficient independent of mesh size and shape function order, N_DOF denotes the number of degrees of freedom and p is the polynomial order.

By using 2nd order shape functions to describe the transformation mapping, the current implementation allows for elements with non-planar faces, similar to previous works, and with curved edges. Higher order shape functions, however, potentially affect the accuracy of the quadrature, and thus of the evaluated matrix entries, adversely. Strategies to maintain adequate accuracy of the integration exist, for example using higher order quadrature or applying Chebyshev approximation (e.g. Kordy et al. 2016; Kamm et al. 2020). At this point, we recall that fully automated hexahedral mesh generation is challenging in itself. Depending on the meshing software, its inherent limitations can impose restrictions on the complexity of geophysical models or make the meshing process cumbersome and challenging. However, we emphasize that the flexibility for geophysical applications might be vastly improved by future hexahedral meshing software.
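As an illustration of the 2nd order nodal shape functions used for the coordinate mapping, a minimal 1-D sketch is given below; the actual mapping uses their tensor products in three dimensions, and the node placement at ξ = −1, 0, 1 is the standard choice assumed here.

```python
def lagrange_quadratic(xi):
    """2nd order (quadratic) 1-D Lagrange shape functions on the
    reference interval [-1, 1] with nodes at xi = -1, 0 and 1."""
    return (0.5 * xi * (xi - 1.0),    # equals 1 at xi = -1, 0 at the other nodes
            (1.0 - xi) * (1.0 + xi),  # equals 1 at xi =  0
            0.5 * xi * (xi + 1.0))    # equals 1 at xi = +1

# The three functions sum to 1 everywhere (partition of unity),
# so affine element geometries are reproduced exactly.
print(sum(lagrange_quadratic(0.3)))  # ≈ 1.0
```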
All simulations clearly indicate that the use of high order polynomials mitigates the dependency of the solution on the mesh discretization, particularly in the proximity of the transmitter and the receivers. For equivalent mesh resolutions, they approximate the varying field behaviour far better than low order interpolation functions. Additionally, higher polynomial degrees have been shown to be particularly efficient in decreasing the solution error in areas without conductivity contrasts. This offers a certain degree of flexibility concerning the mesh design, that is, mesh refinement can be focused on regions characterized by conductivity contrasts. It is conceded here that this type of refinement should not be confused with any kind of adaptive mesh refinement, as our approach simply allows us to define areas of higher element density when generating the mesh. Still, it is worth mentioning that this results in meshes with fewer elements when compared to the finite element method.
The performed convergence study shows that increasing the approximation order, also known as p-refinement, improves the accuracy. This provides an additional mechanism besides mesh refinement. However, higher polynomial orders are inherently more resource intensive, which might render them unsuitable for complex models demanding finer meshes. Hence, the question concerning the optimal balance between accuracy and computational effort remains open. In other words, which polynomial degree and underlying mesh discretization deliver accurate results whilst requiring minimal resources and time? For the limited case studies shown here, the use of 2nd and 3rd order polynomial spectral element approximations together with reasonably refined yet coarse meshes proves most beneficial. Although polynomials of 4th or even higher degree provide the best accuracy with respect to degrees of freedom and would allow for even coarser meshes, they turn out to be computationally less attractive due to their resource requirements. Two remarks are due. First, this finding concerning the most advantageous polynomial order has also been reported in publications studying finite element methods (cf. Schwarzbach et al. 2011; Grayver & Kolev 2015; Rochlitz et al. 2019). Secondly, further investigation into the optimal combination of mesh discretization and approximation order for the spectral element method is needed to transfer this statement to a wider range of relevant problems. Furthermore, the convergence rates of our spectral element simulations are slightly superoptimal for 1st and 2nd order vector basis functions when compared to the postulated theoretical quasi-optimal slope attainable by global and non-goal-oriented adaptive mesh refinement techniques (Chen et al. 2007). This favourable behaviour, however, could not be observed for higher-order basis functions, which can be explained by the irregularity of the right-hand side and the solution and the discontinuous distribution of conductivity. Recovering optimal to superoptimal convergence rates may be achieved by adaptive mesh refinement and a posteriori error estimation techniques as used in finite element approaches (e.g. Key & Weiss 2006; Chen et al. 2007; Ren et al. 2013; Grayver & Kolev 2015).
From a computational point of view, the use of the multifrontal solver MUMPS is computationally expensive and memory intensive, and thus relies on large computing capabilities. In terms of requirements, machines with 128-256 GB RAM are sufficient for medium-sized realistic models. Expanding the feasible scope of the method to large and complex models involves extending the current implementation to incorporate distributed memory parallelism and requires the availability of high performance computing clusters. Having said this, it is of future interest to add this type of parallelism to the algorithm. Another approach to mitigate the resource issue is to apply iterative solution methods in combination with preconditioners.
In contrast to previous spectral element studies in the area of microwave engineering (e.g. Lee et al. 2006; Lee & Liu 2007; Lee et al. 2009), which involve dielectric but non-conducting and non-magnetic anomalies, our examples investigate challenges associated with field propagation through magnetically permeable and electrically conductive anomalies, which is of highest relevance for geophysical applications. Further, the microwave-engineering examples do not consider source terms, which is an integral difference to our test cases. Another difference concerns the orders of magnitude of the spatial and frequency scales of microwave applications, which are not comparable to those of geophysical applications. In comparison to previous publications in EM geophysics, which solely deal with air-borne implementations based on the primary-secondary field decomposition approach, our implementation presents the first land-based algorithm using a total-field formulation. Our 3-D and topography examples also demonstrate the use of unstructured hexahedral meshes with more comprehensive models; previously, only simple examples with lightly distorted elements had been presented. In addition, our implementation accounts for spatially variable magnetic permeability and dielectric permittivity distributions, which has not been considered in previous spectral element studies in a geophysical context.
In conclusion, we have developed the first numerical scheme for 3-D land-based controlled-source EM forward modelling using the spectral element method. The algorithm adopts a total field formulation, handles arbitrary conductivity, magnetic permeability and dielectric permittivity distributions, and accommodates complex-shaped structures and topography. The numerical experiments yield accurate EM responses at the receiver sites and indicate reduced dependency on mesh discretization. They also prove the use of higher order polynomial spectral element approximations to be advantageous. Furthermore, they encourage simulations based on 2nd and 3rd order polynomials, as these orders provide balanced trade-offs between mesh size, accuracy and computational resources. However, even higher order polynomial

Figure 1. Standard cubic reference element in the ξηζ-coordinate system (left-hand panel) projected to the xyz-coordinate system (right-hand panel).

Figure 2. Schematic side views of the four models used to test the spectral element implementation in Scenarios 1-4. None of the illustrations is true to scale. The illustrations contain the conductivities of the various physical entities.

Figure 3. Comparison for Scenario 1: Spectral element results and semi-analytical solutions for the homogeneous half-space model [Fig. 2(a)] with a source frequency of 1000 Hz. Absolute values of the real and imaginary parts of the x-component of the electric field (top left-hand panel) and the y-component of the magnetic field (top right-hand panel) are displayed for receivers located along a profile at a 45° angle to the direction of the source and offset by 1-5 km from the source. Relative deviations are shown in the two bottom panels.

Figure 4. Comparisons for Scenarios 1 and 2: Spectral element results and the semi-analytical solutions for the homogeneous half-space model (left-hand panel) and the layered model (right-hand panel) depicted in Figs 2(a) and (b), respectively. Shown are the amplitudes for a cross-line receiver offset by 2.5 km as a function of frequency.

Figure 5. Visualization of Scenario 3: Vertical slice at y = 0 m showing an extract of the subsurface part of the buried body model in Fig. 2(c). Colours indicate the distinct geological entities. Quadrilateral elements are superimposed in white to show the unstructured mesh design. For better visualization, the buried body is displayed in the bottom panel.

Figure 6. Visualization of Scenario 4: Air-Earth interface showing the trough and ridge topography of the model in Fig. 2(d) as well as the surface mesh. The vertical scale is exaggerated. The green, red and blue dots indicate the locations of the transmitter, the receivers of the main profile and the receivers of the cross-line profile, respectively.

Figure 7. Visualization of Scenario 5: Surface topography and surface mesh. The vertical scale is exaggerated. The green and red dots indicate transmitter and receiver locations. Black dots display the two receiver lines (one perpendicular and the other parallel to the transmitter) for which comparisons of the electric and magnetic field components to the finite element solutions are shown.

Figure 8. Visualization of Scenario 5: 3-D depiction of the conductive subsurface structures for the model with two conductive anomalies and topography in Fig. 7.

Figure 9. Comparison for Scenario 3: Our spectral element solutions and the finite element solutions obtained using custEM (Rochlitz et al. 2019) for the model in Figs 2(c) and 5, showing the absolute values of the real and imaginary parts of the x-components of the electric and magnetic fields. Relative deviations are shown in the two bottom panels.

Figure 10. Comparison for Scenario 3: Responses for various relative magnetic permeability values (μ_r) of the buried body shown in Figs 2(c) and 5 for a source transmitting at 100 Hz. The plots display the imaginary parts of the x- and z-components of the electric and magnetic fields, respectively, for four different values of the relative magnetic permeability μ_r of the buried body. The results indicate that a variable magnetic permeability distribution does influence the CSEM responses and should be taken into account when modelling CSEM data.

Figure 11.

Figure 12. Comparison for Scenario 4: SEM responses for the y- and x-components of the electric and magnetic fields, respectively, for the cross-line configuration of the ridge-trough topography model [see Fig. 2(d) and blue dots in Fig. 6]. For comparison, the corresponding spectral element solutions for a flat surface are plotted. According to semi-analytical solutions, these components ought to be zero. However, in the case of numerical computations, they cannot be expected to be identically zero. Nonetheless, they should be significantly smaller than the non-vanishing components. As can be seen, the responses for the model with topography are not vanishing, clearly showing the influence of the topography on the CSEM responses.

Figure 14. Comparison for Scenario 5: Our spectral element solutions and the finite element solutions by Rulff et al. (2021) for the real and imaginary parts of the electric and magnetic field components at receivers with a constant y-offset of 2.73 km parallel to the source cable for the example with two anomalies and topography (see Figs 7 and 8).

Figure 15. Convergence study for Scenarios 1 and 2: Convergence curves of the electric and magnetic field components, split into real and imaginary parts, for increasing polynomial order (p = 1, . . ., 5) and, hence, increasing number of DOF. The mesh consists of 14 × 14 × 14 hexahedra. Results are shown on the left for the half-space model and on the right for the layered model (Fig. 2). Deviations are relative to semi-analytical 1-D solutions.

Table 1. Number of degrees of freedom per element depending on polynomial order N.

Table 5. Deviations for Scenario 3: Average relative deviations of the electric and magnetic field components of the spectral and finite element results for the 3-D model shown in Figs 2(c) and 5.

Table 6. Computational information for Scenarios 3 and 4: The computational times [split into assembly tasks (t_A) and solver-related tasks (t_S)] obtained on the same computing platform with the same number of CPU cores for Scenario 3 shown in Figs 2(c) and 5 and for Scenario 4 displayed in Figs 2(d) and 6. The numbers of elements used for the mesh discretization were chosen such that the resulting numbers of degrees of freedom (DOF) for the spectral and finite element methods are comparable.

Table 7. Computational information for Scenario 5: The computational times [split into assembly tasks (t_A) and solver-related tasks (t_S)] obtained on the same computing platform with the same number of CPU cores for the 3-D model with two conductive anomalies and topography depicted in Figs 7 and 8. The numbers of elements used for the mesh discretization are chosen such that the resulting numbers of degrees of freedom (DOF) for the spectral and finite element methods are comparable.

Table 8. Deviations for Scenario 5: Average relative deviations of the real and imaginary parts of the electric and magnetic field components of the spectral and finite element results across all receivers for the 3-D model with two conductive anomalies and topography illustrated in Figs 7 and 8.