\chapter{Two-Dimensional FDTD Simulations\label{chap:multi}}
%\setcounter{page}{1}
\renewcommand{\thefootnote}{\fnsymbol{footnote}}
\footnotetext[2]{Lecture notes by John Schneider. {\tt
fdtd-multidimensional.tex}}
\section{Introduction}
One of the truly compelling features of the FDTD method is that the
simplicity the method enjoys in one dimension is largely maintained in
higher dimensions. The complexity of other numerical techniques often
increases substantially as the number of dimensions increases. With
the FDTD method, if you understand the algorithm in one dimension, you
should have no problem understanding the basic algorithm in two or
three dimensions. Nevertheless, introducing additional dimensions has
to come at the price of some additional complexity.
This chapter provides details concerning the implementation of
two-dimensional simulations with either the magnetic field or the
electric field orthogonal to the normal to the plane of propagation,
i.e., TM$^z$ or TE$^z$ polarization, respectively. Multidimensional
simulations require that we think in terms of a multidimensional grid
and thus we require multidimensional arrays to store the fields.
Since we are primarily interested in writing programs in C, we begin
with a discussion of multidimensional arrays in the C language.
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{Multidimensional Arrays \label{sec:multiArrays}}
Whether an array is one-, two-, or three-dimensional, it ultimately
occupies a block of contiguous memory where each element has a single
address. The distinction between the different dimensions is
primarily a question of how we want to think about, and access, array
elements. For higher-dimensional arrays, we want to specify two, or
more, indices to dictate the element of interest. However, regardless
of the number of indices, ultimately a single address dictates which
memory is being accessed. The translation of multiple indices to a
single address can either be left to the compiler or be done
ourselves.
The code shown in Fragment \ref{frag:twoDDynamic} illustrates how one
can think and work with what is effectively a two-dimensional array
even though the memory allocation is essentially the same as was used
with the one-dimensional array considered in Fragment
\ref{frag:oneDDynamic}. In Fragment \ref{frag:twoDDynamic} the user
is prompted to enter the desired number of rows and columns which are
stored in {\tt num\_rows} and {\tt num\_columns}, respectively. In
line \ref{twoDDynamicB} the pointer {\tt ez} is set to the start of a
block of memory which can hold {\tt num\_rows} $\times$ {\tt
num\_columns} doubles. Thus sufficient space is available to store
the desired number of rows and columns, but this pointer is
dereferenced with a single index (or offset).
\begin{fragment}
Fragment of code demonstrating the construction of a two-dimensional
array. \label{frag:twoDDynamic}
\codemiddle
\begin{lstlisting}
#define Ez(I, J) ez[(I) * num_columns + (J)] /*@ \label{twoDDynamicA} @*/
\end{lstlisting}
\mbox{}\hspace{.5in}$\vdots$
\begin{lstlisting}[firstnumber=2]
double *ez;
int num_rows, num_columns, m, n;
printf("Enter the number of row and columns: ");
scanf("%d %d", &num_rows, &num_columns);
ez = calloc(num_rows * num_columns, sizeof(double)); /*@ \label{twoDDynamicB} @*/
for (m=0; m < num_rows; m++) /*@ \label{twoDDynamicD} @*/
for (n=0; n < num_columns; n++)
Ez(m, n) = m * n; /*@ \label{twoDDynamicC} @*/
\end{lstlisting}
\end{fragment}
In this code the trick to thinking and working in two dimensions,
i.e., working with two indices instead of one, is the macro which is
shown in line \ref{twoDDynamicA}. This macro tells the preprocessor
that every time the string {\tt Ez} appears with two arguments---here
the first argument is labeled {\tt I} and the second argument is
labeled {\tt J}---it should translate that to {\tt
ez[(I) * num\_columns + (J)]}. Note the uppercase {\tt E} in {\tt Ez}
distinguishes this from the pointer {\tt ez}. {\tt I} and {\tt J} in
the macro are just dummy arguments. They merely specify how the first
and second arguments should be used in the translated code. Thus, if
{\tt Ez(2, 3)} appears in the source code, the macro tells the
preprocessor to translate that to {\tt ez[(2) * num\_columns + (3)]}.
In this way we have specified two indices, but the macro translates
those indices into an expression which evaluates to a single number
which represents the offset into a one-dimensional array (the array
{\tt ez}). Even though {\tt Ez} is technically a macro, we will refer
to it as an array. Note that, as seen in line \ref{twoDDynamicC},
{\tt Ez}'s indices are enclosed in parentheses, not brackets.
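As a quick check that the macro and the underlying one-dimensional
array do indeed reference the same memory, consider the following
fragment (a standalone illustration using a hypothetical $3\times 4$
array; it assumes the macro of line \ref{twoDDynamicA} is in effect
and is not part of any listing used later):
\begin{code}
double *ez;
int num_rows = 3, num_columns = 4;

ez = calloc(num_rows * num_columns, sizeof(double));
Ez(2, 3) = 6.0;          /* expands to ez[(2) * num_columns + (3)] */
printf("%g\n", ez[11]);  /* prints 6 since 2 * 4 + 3 is 11 */
\end{code}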
To illustrate this further, assume the user specified that there are
$4$ columns ({\tt num\_columns} is $4$) and $3$ rows. When the row
number increments by one, that corresponds to moving $4$ locations
forward in memory. The following shows the arrangement of the
two-dimensional array {\tt Ez(m, n)} where {\tt m} is used for the row
index and {\tt n} is used for the column index.
\begin{center}
\begin{tabular}{l|llll}
&n=0 & n=1 & n=2 & n=3 \\ \hline
m=0 &Ez(0, 0) & Ez(0, 1) & Ez(0, 2) & Ez(0, 3) \\
m=1 &Ez(1, 0) & Ez(1, 1) & Ez(1, 2) & Ez(1, 3) \\
m=2 &Ez(2, 0) & Ez(2, 1) & Ez(2, 2) & Ez(2, 3)
\end{tabular}
\end{center}
The {\tt Ez} array really corresponds to the one-dimensional {\tt ez}
array. The macro calculates the index for the {\tt ez} array based on
the arguments (i.e., indices) of the {\tt Ez} array. The following
shows the same table of values, but in terms of the {\tt ez} array.
\begin{center}
\begin{tabular}{l|llll}
&n=0 & n=1 & n=2 & n=3 \\ \hline
m=0 &ez[0] & ez[1] & ez[2] & ez[3] \\
m=1 &ez[4] & ez[5] & ez[6] & ez[7] \\
m=2 &ez[8] & ez[9] & ez[10] & ez[11]
\end{tabular}
\end{center}
Again, in this example, when the row is incremented by one, the array
index is incremented by $4$ (which is the number of columns). This is
due to the fact that we are storing the elements by rows. An entire
row of values is stored, then the next row, and so on. Each row
contains {\tt num\_columns} elements.
Instead of storing things by rows, we could have easily employed
``column-centric storage'' where an entire column is stored contiguously.
This would be accomplished by using the macro
\begin{code}
#define Ez(I, J) ez[(I) + (J) * num_rows]
\end{code}
This would be used instead of line \ref{twoDDynamicA} in Fragment
\ref{frag:twoDDynamic}. If this were used, each time the row is
incremented by one the index of {\tt ez} is incremented by one. If
the column is incremented by one, the index of {\tt ez} would be
incremented by {\tt num\_rows}. In this case the elements of the {\tt
ez} array would correspond to elements of the {\tt Ez} array as shown
below:
\begin{center}
\begin{tabular}{l|llll}
&n=0 & n=1 & n=2 & n=3 \\ \hline
m=0 &ez[0] & ez[3] & ez[6] & ez[9] \\
m=1 &ez[1] & ez[4] & ez[7] & ez[10] \\
m=2 &ez[2] & ez[5] & ez[8] & ez[11]
\end{tabular}
\end{center}
Note that when the row index is incremented by one, the index of {\tt
ez} is also incremented by one. However, when the column is
incremented by one, the index of {\tt ez} is incremented by $3$, which
is the number of rows. This type of column-centric storage is used in
FORTRAN. However, multidimensional arrays in C are generally assumed
to be stored in row-order. Thus column-centric storage will {\em not}
be considered further and we will use row-centric macros similar to
the one presented in Fragment \ref{frag:twoDDynamic}.
When an array is stored by rows, it is most efficient to access the
array one row at a time---not one column at a time. Lines
\ref{twoDDynamicD} through \ref{twoDDynamicC} of Fragment
\ref{frag:twoDDynamic} demonstrate this by using two loops to set the
elements of {\tt Ez} to the product of the row and column indices.
The outer loop is over the rows while the inner loop is over the columns.
This is more efficient than if these loops were interchanged (although
there is likely to be no difference for small arrays). This is a
consequence of the way data moves between main memory and the
CPU cache.
Memory is moved into the cache in small chunks. If the CPU needs
access to a variable that is not already in the cache, a cache miss
occurs (and servicing a miss takes more time than if the variable
were already in the cache). When the corresponding chunk of memory
arrives in the cache it contains more than just the variable the CPU
wanted---it also contains other variables which were stored in memory
adjacent to the variable of interest (a single chunk may contain many
variables). If subsequent CPU operations involve a variable that is
already in the cache, those operations can be done very quickly. A
variable is most likely to be in the cache, so that the CPU does not
incur another miss, when it is adjacent in memory to the one used in
the previous operation. Thus, assuming row-centric storage, working
with arrays row-by-row is the best way to avoid needless cache
misses.
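To make this concrete, the following fragment (an illustration only,
assuming the row-centric {\tt Ez} macro of Fragment
\ref{frag:twoDDynamic}) traverses the same array with both loop
orderings. The first ordering accesses consecutive memory locations;
the second jumps {\tt num\_columns} elements between consecutive
accesses and, for large arrays, is typically noticeably slower:
\begin{code}
/* row-by-row: inner loop varies the second index, so
   consecutive iterations touch adjacent memory */
for (m = 0; m < num_rows; m++)
  for (n = 0; n < num_columns; n++)
    Ez(m, n) = 0.0;

/* column-by-column: consecutive iterations are
   num_columns elements apart in memory */
for (n = 0; n < num_columns; n++)
  for (m = 0; m < num_rows; m++)
    Ez(m, n) = 0.0;
\end{code}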
It is important to note that we should not feel constrained to
visualize our arrays in terms of the standard way in which arrays are
displayed! Typically two-dimensional arrays are displayed in a table
with the first element in the upper, left-hand corner of the table.
The first index gives the row number and the second index gives the
column number. FDTD simulations are modeling a physical space, not a
table of numbers. In two dimensions we will be concerned with spatial
dimensions $x$ and $y$. It is convention to give the $x$ location as
the first argument and the $y$ location as the second argument, i.e.,
$E_z(x,y)$. It is also often convenient to think of the lower
left-hand corner of some finite rectangular region of space as the
origin. It is perfectly acceptable to use the array mechanics which
have been discussed so far but to imagine the elements arranged in
space according to our usual notion of variation in the $x$ and $y$
directions. So, for
example, in the case of a $3$ by $4$ array, one can visualize the
nodes as being arranged in the following way:
\begin{center}
\begin{tabular}{l|lllll|lll}
n=3 &Ez(0,3) & Ez(1,3) & Ez(2,3) &&
n=3 &ez[3]&ez[7]&ez[11]\\
n=2 &Ez(0,2) & Ez(1,2) & Ez(2,2) &&
n=2 &ez[2]&ez[6]&ez[10]\\
n=1 &Ez(0,1) & Ez(1,1) & Ez(2,1) &
\hspace{.2in}$\Longleftrightarrow$\hspace{.2in}\mbox{}&
n=1 &ez[1]&ez[5]&ez[9]\\
n=0 &Ez(0,0) & Ez(1,0) & Ez(2,0) &&
n=0 &ez[0]&ez[4]&ez[8]\\\cline{1-4}\cline{6-9}
&m=0 & m=1 & m=2 &&
&m=0 &m=1 &m=2 \\
\end{tabular}
\end{center}
Nothing has changed in terms of the implementation of the macro to
realize this two-dimensional array---the only difference is the way
the elements have been displayed. The depiction here is natural when
thinking in terms of variations in $x$ and $y$, where the first index
corresponds to $x$ and the second index corresponds to $y$. The
previous depiction was natural to the way most people discuss tables
of data. Regardless of how we think of the arrays, it is still true
that the second index is the one that should vary most rapidly in the
case of nested loops (i.e., one should strive to have consecutive
operations access consecutive elements in memory).
As mentioned in Sec.\ \ref{sec:memAllocation}, it is always best to
check that an allocation of memory was successful. If {\tt calloc()}
is unable to allocate the requested memory, it will return {\tt
NULL}. After every allocation we could add code to check that the
request was successful. However, as we did in one dimension, a better
approach is again offered with the use of macros. Fragment
\ref{frag:allocMacro2D} shows a macro that can be used to allocate
memory for a two-dimensional array.
\begin{fragment}
Macro for allocating memory for a two-dimensional array.
\label{frag:allocMacro2D}
\codemiddle
\begin{lstlisting}
#define ALLOC_2D(PNTR, NUMX, NUMY, TYPE) \
PNTR = (TYPE *)calloc((NUMX) * (NUMY), sizeof(TYPE)); \
if (!PNTR) { \
perror("ALLOC_2D"); \
fprintf(stderr, \
"Allocation failed for " #PNTR ". Terminating...\n");\
exit(-1); \
}
\end{lstlisting}
\end{fragment}
The macro {\tt ALLOC\_2D()} is similar to {\tt ALLOC\_1D()}, which was
presented in Fragment\ \ref{frag:allocMacro}, except it takes four
arguments instead of three. The first argument is a pointer, the
second is the number of rows, the third is the number of columns, and
the fourth is the data type. As an example of how this macro could be
used, line \ref{twoDDynamicB} of Fragment
\ref{frag:twoDDynamic} could be replaced with
\begin{code}
ALLOC_2D(ez, num_rows, num_columns, double);
\end{code}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{Two Dimensions: TM$^z$ Polarization \label{sec:tmzPolarization}}
The one-dimensional problems considered thus far assumed a non-zero
$z$ component of the electric field and variation only in the $x$
direction. This required the existence of a non-zero $y$ component of
the magnetic field. Here the field is assumed to vary in both the $x$
and $y$ directions, but not the $z$ direction. From the outset we
will include the possibility of a magnetic conductivity $\sigma_m$.
With these assumptions Faraday's law becomes
\begin{equation}
-\sigma_m\Hvec -\mu \frac{\partial \Hvec}{\partial t} =
\nabla \times \Evec =
\left|
\begin{array}{ccc}
\unitvec{x} & \unitvec{y} & \unitvec{z} \\
\frac{\partial}{\partial x} & \frac{\partial}{\partial y} & 0 \\
0 & 0 & E_z
\end{array}
\right|
= \unitvec{x}\frac{\partial E_z}{\partial y}
-\unitvec{y}\frac{\partial E_z}{\partial x}.
\label{eq:faradayTMz}
\end{equation}
Since the right-hand side only has non-zero components in the $x$ and
$y$ directions, the time-varying components of the magnetic field can
only have non-zero $x$ and $y$ components (we are not concerned with
static fields nor a rather pathological case where the magnetic
current $\sigma_m\Hvec$ cancels the time-varying field
$\partial\Hvec/\partial t$). The magnetic field is transverse to the
$z$ direction and hence this is designated the TM$^z$ case. Ampere's
law becomes
\begin{equation}
\sigma \Evec + \epsilon \frac{\partial \Evec}{\partial t} =
\nabla \times \Hvec =
\left|
\begin{array}{ccc}
\unitvec{x} & \unitvec{y} & \unitvec{z} \\
\frac{\partial}{\partial x} & \frac{\partial}{\partial y} & 0 \\
H_x & H_y & 0
\end{array}
\right|
= \unitvec{z}\left(\frac{\partial H_y}{\partial x}
-\frac{\partial H_x}{\partial y}\right).
\label{eq:ampereTMz}
\end{equation}
The scalar equations obtained from \refeq{eq:faradayTMz} and
\refeq{eq:ampereTMz} are
\begin{eqnarray}
-\sigma_m H_x - \mu\frac{\partial H_x}{\partial t} &=&
\frac{\partial E_z}{\partial y}, \label{eq:faradayTMzX}
\\
\sigma_m H_y + \mu\frac{\partial H_y}{\partial t} &=&
\frac{\partial E_z}{\partial x}, \label{eq:faradayTMzY}
\\
\sigma E_z + \epsilon\frac{\partial E_z}{\partial t} &=&
\frac{\partial H_y}{\partial x}
-\frac{\partial H_x}{\partial y}. \label{eq:ampereTMzZ}
\end{eqnarray}
Note that, ignoring the conduction terms for a moment, the temporal
derivative of the magnetic field is related to the spatial derivative
of the electric field and vice versa. The only difference from the
one-dimensional case is the additional field component $H_x$ and
the derivatives in the $y$ direction.
Space-time is now discretized so that
\refeq{eq:faradayTMzX}--\refeq{eq:ampereTMzZ} can be expressed in
terms of finite-differences. From these difference equations the future
fields can be expressed in terms of past fields. Following the
notation used in Sec.\ \ref{sec:update}, the following notation will be
used:
\begin{eqnarray}
H_x(x,y,t) \!&=&\! H_x(m\Delx, n\Dely, q\Delt) =
\fdtd{H_x}{m,n}{q}, \\
H_y(x,y,t) \!&=&\! H_y(m\Delx, n\Dely, q\Delt) =
\fdtd{H_y}{m,n}{q}, \\
E_z(x,y,t) \!&=&\! E_z(m\Delx, n\Dely, q\Delt) = \fdtd{E_z}{m,n}{q}.
\end{eqnarray}
As before, the index $m$ corresponds to the spatial step in the $x$
direction while the index $q$ corresponds to the temporal step.
Additionally the index $n$ represents the spatial step in the $y$
direction. The spatial step sizes in the $x$ and $y$ directions are
$\Delx$ and $\Dely$, respectively (these need not be equal).
In order to obtain the necessary update-equations, each of the field
components must be staggered in space. However, it is not necessary
to stagger all the field components in time. The electric field must
be offset from the magnetic field, but the magnetic field components
do not need to be staggered relative to each other---all the magnetic
field components can exist at the same time. A suitable spatial
staggering of the electric and magnetic field components is shown in
Fig.\ \ref{fig:tmzGrid}.
\begin{figure}
\begin{center}
\epsfig{width=5in,file=Figures/Fdtd-multidimensional/tmz-fdtd-grid.eps}
\end{center} \caption{Spatial arrangement of electric- and
magnetic-field nodes for TM$^z$ polarization. The electric-field
nodes are shown as circles and the magnetic-field nodes as squares
with a line that indicates the orientation of the field component.
The somewhat triangularly shaped dashed lines indicate groupings of
nodes that have the same array indices. For example, in the lower
left corner of the grid all the nodes would have indices in a
computer program of $(m=0,n=0)$. In this case the spatial offset of
the fields is implicitly understood. This grouping is repeated
throughout the grid. However, groups at the top of the grid lack an
$H_x$ node and groups at the right edge lack an $H_y$ node. The
diagram at the bottom left of the figure indicates nodes with their
offsets given explicitly in the spatial arguments whereas the
diagram at the bottom right indicates how the same nodes would be
specified in a computer program where the offsets are understood
implicitly.} \label{fig:tmzGrid}
\end{figure}
When we say the dimensions of a TM$^z$ grid are $M\times N$, that
corresponds to the dimensions of the $E_z$ array. We will ensure the
grid is terminated such that there are electric-field nodes on the
edge of the grid. Thus, the $H_x$ array would be $M\times (N-1)$
while the $H_y$ array would be $(M-1)\times N$.
In the arrangement of nodes shown in Fig.\ \ref{fig:tmzGrid} we will
assume the electric field nodes fall at integer spatial steps and the
magnetic field nodes are offset a half spatial step in either the $x$
or $y$ direction. As with one dimension, the electric field is
assumed to exist at integer multiples of the temporal step while both
magnetic-field components are offset a half time-step from the
electric fields. With this arrangement in mind,
the finite difference approximation of \refeq{eq:faradayTMzX} expanded
about the space-time point $(m\Delx,(n+1/2)\Dely,q\Delt)$ is
\begin{eqnarray}
\lefteqn{\hspace{-.4in}
-\sigma_m \frac{\fdtdh{H_x}{m,n+\half}{q+\half} +
\fdtdh{H_x}{m,n+\half}{q-\half}}{2}
-\mu\frac{\fdtdh{H_x}{m,n+\half}{q+\half} -
\fdtdh{H_x}{m,n+\half}{q-\half}}{\Delt}
=} \nonumber \\
& & \hspace{3.3in}
\frac{\fdtd{E_z}{m,n+1}{q}-\fdtd{E_z}{m,n}{q}}{\Dely}.
\label{eq:fdtdTMzX}
\end{eqnarray}
This can be solved for the future value
$\fdtdh{H_x}{m,n+\half}{q+\half}$ in terms of the ``past''
values. The resulting update equation is
\begin{equation}
\fdtdh{H_x}{m,n+\half}{q+\half} =
\frac{1-\frac{\sigma_m\Delt}{2\mu}}{1+\frac{\sigma_m\Delt}{2\mu}}
\fdtdh{H_x}{m,n+\half}{q-\half} -
\frac{1}{1+\frac{\sigma_m\Delt}{2\mu}}
\frac{\Delt}{\mu\Dely}\left(\fdtd{E_z}{m,n+1}{q}-
\fdtd{E_z}{m,n}{q}\right).
\label{eq:updateHxTMz}
\end{equation}
As was the case in one dimension, the material parameters $\mu$ and
$\sigma_m$ are those which pertain at the given evaluation point.
The update equation for the $y$ component of the magnetic field is
obtained by the finite-difference approximation of
\refeq{eq:faradayTMzY} expanded about the space-time point
$((m+1/2)\Delx,n\Dely,q\Delt)$. The resulting equation is
\begin{equation}
\fdtdh{H_y}{m+\half,n}{q+\half} =
\frac{1-\frac{\sigma_m\Delt}{2\mu}}{1+\frac{\sigma_m\Delt}{2\mu}}
\fdtdh{H_y}{m+\half,n}{q-\half} +
\frac{1}{1+\frac{\sigma_m\Delt}{2\mu}}
\frac{\Delt}{\mu\Delx}\left(\fdtd{E_z}{m+1,n}{q}-
\fdtd{E_z}{m,n}{q}\right).
\label{eq:updateHyTMz}
\end{equation}
Again, the material parameters $\mu$ and $\sigma_m$ are those which
pertain at the given evaluation point. Note that $H_y$ nodes are
offset in space from $H_x$ nodes. Hence the $\mu$ and $\sigma_m$
appearing in \refeq{eq:updateHxTMz} and \refeq{eq:updateHyTMz} are not
necessarily the same even when $m$ and $n$ are the same.
The electric-field update equation is obtained via the finite-difference
approximation of \refeq{eq:ampereTMzZ} expanded about
$(m\Delx,n\Dely,(q+1/2)\Delt)$:
\begin{eqnarray}
\fdtd{E_z}{m,n}{q+1} &=&
\frac{1-\frac{\sigma\Delt}{2\epsilon}}{1+\frac{\sigma\Delt}{2\epsilon}}
\fdtd{E_z}{m,n}{q} +
\frac{1}{1+\frac{\sigma\Delt}{2\epsilon}}
\left(
\frac{\Delt}{\epsilon\Delx}
\left\{
\fdtdh{H_y}{m+\half,n}{q+\half} - \fdtdh{H_y}{m-\half,n}{q+\half}
\right\}\right.
\nonumber\\
&& \hspace{1.2in} \mbox{} -
\left.
\frac{\Delt}{\epsilon\Dely}
\left\{
\fdtdh{H_x}{m,n+\half}{q+\half} - \fdtdh{H_x}{m,n-\half}{q+\half}
\right\}
\right).
\end{eqnarray}
A uniform grid is one in which the spatial step size is the same in
all directions. Assuming a uniform grid such that
$\Delx=\Dely=\delta$, we define the following quantities
\begin{eqnarray}
\chxh(m,n+1/2) &=&
\left.
\frac{1-\frac{\sigma_m\Delt}{2\mu}}{1+\frac{\sigma_m\Delt}{2\mu}}
\right|_{m\delta,(n+1/2)\delta}, \\
\chxe(m,n+1/2) &=&
\left.
\frac{1}{1+\frac{\sigma_m\Delt}{2\mu}}\frac{\Delt}{\mu\delta}
\right|_{m\delta,(n+1/2)\delta}, \\
\chyh(m+1/2,n) &=&
\left.
\frac{1-\frac{\sigma_m\Delt}{2\mu}}{1+\frac{\sigma_m\Delt}{2\mu}}
\right|_{(m+1/2)\delta,n\delta}, \\
\chye(m+1/2,n) &=&
\left.
\frac{1}{1+\frac{\sigma_m\Delt}{2\mu}}\frac{\Delt}{\mu\delta}
\right|_{(m+1/2)\delta,n\delta}, \\
\ceze(m,n) &=&
\left.
\frac{1-\frac{\sigma\Delt}{2\epsilon}}{1+\frac{\sigma\Delt}{2\epsilon}}
\right|_{m\delta,n\delta}, \\
\cezh(m,n) &=&
\left.
\frac{1}{1+\frac{\sigma\Delt}{2\epsilon}}
\frac{\Delt}{\epsilon\delta}
\right|_{m\delta,n\delta}.
\end{eqnarray}
These quantities appear in the update equations and employ the
following naming convention: the first letter identifies the quantity
as a constant which does not vary in time (one can also think of this
$C$ as representing the word coefficient), the next two letters
indicate the field being updated, and the last letter indicates the
type of field this quantity multiplies. For example, $\chxh$
appears in the update equation for $H_x$ and it multiplies the previous
value of the magnetic field. On the other hand, $\chxe$, which
also appears in the update equation for $H_x$, multiplies the electric
fields.
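Written in C, and assuming scalar variables {\tt sigmaM}, {\tt mu},
{\tt delT}, and {\tt del} hold the values of $\sigma_m$, $\mu$,
$\Delt$, and $\delta$ that pertain at a given $H_x$ node (these
variable names are merely illustrative), the two $H_x$ coefficients
could be computed as
\begin{code}
double lossTerm = sigmaM * delT / (2.0 * mu);

Chxh(m, n) = (1.0 - lossTerm) / (1.0 + lossTerm);
Chxe(m, n) = delT / (mu * del) / (1.0 + lossTerm);
\end{code}
In a lossless region $\sigma_m$ is zero, {\tt lossTerm} vanishes, and
these coefficients reduce to $1$ and $\Delt/(\mu\delta)$,
respectively.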
To translate these update equations into a form that is suitable for
use in a computer program, we follow the approach that was used in
one dimension: explicit references to the time step are dropped and
the spatial offsets are understood. As illustrated in Fig.\
\ref{fig:tmzGrid}, an $H_y$ node is assumed to be a half spatial step
further in the $x$ direction than the corresponding $E_z$ node with
the same indices. Similarly, an $H_x$ node is assumed to be a half
spatial step further in the $y$ direction than the corresponding $E_z$
node with the same indices. Thus, in C, the update equations could be
written
\begin{code}
Hx(m, n) = Chxh(m, n) * Hx(m, n) -
Chxe(m, n) * (Ez(m, n + 1) - Ez(m, n));
Hy(m, n) = Chyh(m, n) * Hy(m, n) +
Chye(m, n) * (Ez(m + 1, n) - Ez(m, n));
Ez(m, n) = Ceze(m, n) * Ez(m, n) +
Cezh(m, n) * ((Hy(m, n) - Hy(m - 1, n)) - (Hx(m, n) - Hx(m, n - 1)));
\end{code}
The reason that the ``arrays'' appearing in these equations start with
an uppercase letter and use parentheses (instead of two pairs of
brackets that would be used with traditional two-dimensional arrays
in C) is that these terms are actually macros consistent with the
usage described in Sec.\ \ref{sec:multiArrays}. In order for these
equations to be useful, they have to be contained within loops that
cycle over the spatial indices and these loops must themselves be
contained within a time-stepping loop. Additional considerations are
initialization of the arrays, the introduction of energy, and
termination of the grid. These issues are covered in the following sections.
\section{TM$^z$ Example}
To illustrate the simplicity of the FDTD method in two dimensions, let
us consider a simulation of a TM$^z$ grid which is $101$ nodes by $81$
nodes and filled with free space. The grid will be terminated on
electric field nodes which will be left at zero (so that the
simulation is effectively of a rectangular resonator with PEC walls).
A Ricker wavelet with $20$ points per wavelength at its most energetic
frequency is hardwired to the electric-field node at the center of the
grid.
Before we get to the core of the code, we are now at a point where it
is convenient to split the main header file into multiple header
files: one defining the {\tt Grid} structure, one defining various
macros, one giving the allocation macros, and one providing the
function prototypes. Not all the ``{\tt .c}'' files need to include
each of these header files.
The arrangement of the code is shown in Fig.\ \ref{fig:filesTmZ}. In
this figure the header files {\tt fdtd-grid1.h}, {\tt
fdtd-alloc1.h}, {\tt fdtd-macro-tmz.h}, and {\tt fdtd-proto1.h} are
shown in a single box but they exist as four separate files (as will
be shown below).
\begin{figure}
\begin{center}
\epsfig{width=4.1in,file=Figures/Fdtd-multidimensional/tmzdemo1-files.eps}
\end{center} \caption{The files associated with a simple TM$^z$
simulation with a hard source at the center of the grid. The four
header files with an {\tt fdtd-} prefix are lumped into a single
box. Not all these files are included in each of the files to which
this box is linked. See the code for the specifics related to the
inclusion of these files. }
\label{fig:filesTmZ}
\end{figure}
The contents of {\tt fdtd-grid1.h} are shown in Program
\ref{pro:fdtdgrid1h}. The {\tt Grid} structure, which begins on line
\ref{fdtdgrid1hB}, now has elements for any of the possible electric
or magnetic field components as well as their associated coefficient
arrays. Note that although all the pointers are declared, they need
not all be used or point to anything useful. The {\tt Grid}
structure shown here could be used for a 1D simulation---it provides
elements for everything that was needed to do a 1D simulation---but
most of the pointers would be unused, i.e., those elements that
pertain to anything other than a 1D simulation would be ignored.
The way we will distinguish what different grids are being
used for is by setting the ``{\tt type}'' field of the grid. Note
that line \ref{fdtdgrid1hA} creates a {\tt GRIDTYPE} enumeration.
This command merely serves to set the value of {\tt oneDGrid} to zero,
the value of {\tt teZGrid} to one, and the value of {\tt tmZGrid} to
two. (The value of {\tt threeDGrid} would be three, but we are not
yet concerned with three-dimensional grids.) A {\tt Grid} will have
its {\tt type} set to one of these values. Functions can then check
the {\tt type} and act accordingly.
\begin{program}
{\tt fdtd-grid1.h} Contents of the header file that defines the {\tt
Grid} structure. This structure now contains pointers for each of the
possible field values. However, not all these pointers would be used
for any particular grid. The pointers that are meaningful would be
determined by the ``{\tt type}'' of the grid. The {\tt type} takes on
one of the values of the {\tt GRIDTYPE} enumeration.
\label{pro:fdtdgrid1h}
\codemiddle
\begin{lstlisting}
#ifndef _FDTD_GRID1_H
#define _FDTD_GRID1_H
enum GRIDTYPE {oneDGrid, teZGrid, tmZGrid, threeDGrid}; /*@ \label{fdtdgrid1hA} @*/
struct Grid { /*@ \label{fdtdgrid1hB} @*/
double *hx, *chxh, *chxe;
double *hy, *chyh, *chye;
double *hz, *chzh, *chze;
double *ex, *cexe, *cexh;
double *ey, *ceye, *ceyh;
double *ez, *ceze, *cezh;
int sizeX, sizeY, sizeZ;
int time, maxTime;
int type;
double cdtds;
};
typedef struct Grid Grid;
#endif
\end{lstlisting}
\end{program}
The contents of {\tt fdtd-alloc1.h} are shown in Program
\ref{pro:fdtdalloc1h}. This header file merely provides the
memory-allocation macros that have been discussed previously.
\begin{program}
{\tt fdtd-alloc1.h} Contents of the header file that defines the
memory allocation macros suitable for 1D and 2D arrays.
\label{pro:fdtdalloc1h}
\codemiddle
\begin{lstlisting}
#ifndef _FDTD_ALLOC1_H
#define _FDTD_ALLOC1_H
#include <stdio.h>
#include <stdlib.h>
/* memory allocation macros */
#define ALLOC_1D(PNTR, NUM, TYPE) \
PNTR = (TYPE *)calloc(NUM, sizeof(TYPE)); \
if (!PNTR) { \
perror("ALLOC_1D"); \
fprintf(stderr, \
"Allocation failed for " #PNTR ". Terminating...\n");\
exit(-1); \
}
#define ALLOC_2D(PNTR, NUMX, NUMY, TYPE) \
PNTR = (TYPE *)calloc((NUMX) * (NUMY), sizeof(TYPE)); \
if (!PNTR) { \
perror("ALLOC_2D"); \
fprintf(stderr, \
"Allocation failed for " #PNTR ". Terminating...\n");\
exit(-1); \
}
#endif
\end{lstlisting}
\end{program}
The contents of {\tt fdtd-macro-tmz.h} are shown in Program
\ref{pro:fdtdmacrotmzh}. This file provides the macros used to access
the field arrays and elements of a pointer to a {\tt Grid} structure.
Thus far all the macros we have used assumed the {\tt Grid} pointer
was called {\tt g}. The macros provided in lines
\ref{fdtdmacrotmzHA}--\ref{fdtdmacrotmzHB} no longer make this
assumption. Instead, one specifies the name of the pointer as the
first argument. To this point in our code there is no need for this
added degree of freedom. We only considered code that has one pointer
to a {\tt Grid} and we have consistently named it {\tt g}. However,
as we will see when we discuss the TFSF boundary, it is convenient to
have the ability to refer to different grids.
The macros in lines \ref{fdtdmacrotmzHC}--\ref{fdtdmacrotmzHD} do
assume the {\tt Grid} pointer is named {\tt g}. These macros are
actually defined in terms of the first set of macros where the first
argument has been set to {\tt g}. Note that although we are
discussing a 2D TM$^z$ problem, this file still provides macros that
can be used for a 1D array. Again, as we will see when we implement a
2D TFSF boundary, there are valid reasons for doing this. Since
any function that is using these macros will also need to know about a
{\tt Grid} structure, line \ref{fdtdmacrotmzHE} ensures that the {\tt
fdtd-grid1.h} header file is also included.
\begin{program}
{\tt fdtd-macro-tmz.h} Header file providing macros suitable for
accessing the elements and arrays of either a 1D or 2D {\tt Grid}.
There are two distinct sets of macros. The first set takes an
argument that specifies the name of the pointer to the {\tt Grid}
structure. The second set assumes the name of the pointer is {\tt g}.
\label{pro:fdtdmacrotmzh}
\codemiddle
\begin{lstlisting}
#ifndef _FDTD_MACRO_TMZ_H
#define _FDTD_MACRO_TMZ_H
#include "fdtd-grid1.h" /*@ \label{fdtdmacrotmzHE} @*/
/* macros that permit the "Grid" to be specified */
/* one-dimensional grid */
#define Hy1G(G, M) G->hy[M] /*@ \label{fdtdmacrotmzHA} @*/
#define Chyh1G(G, M) G->chyh[M]
#define Chye1G(G, M) G->chye[M]
#define Ez1G(G, M) G->ez[M]
#define Ceze1G(G, M) G->ceze[M]
#define Cezh1G(G, M) G->cezh[M]
/* TMz grid */
#define HxG(G, M, N) G->hx[(M) * (SizeYG(G)-1) + (N)]
#define ChxhG(G, M, N) G->chxh[(M) * (SizeYG(G)-1) + (N)]
#define ChxeG(G, M, N) G->chxe[(M) * (SizeYG(G)-1) + (N)]
#define HyG(G, M, N) G->hy[(M) * SizeYG(G) + (N)]
#define ChyhG(G, M, N) G->chyh[(M) * SizeYG(G) + (N)]
#define ChyeG(G, M, N) G->chye[(M) * SizeYG(G) + (N)]
#define EzG(G, M, N) G->ez[(M) * SizeYG(G) + (N)]
#define CezeG(G, M, N) G->ceze[(M) * SizeYG(G) + (N)]
#define CezhG(G, M, N) G->cezh[(M) * SizeYG(G) + (N)]
#define SizeXG(G) G->sizeX
#define SizeYG(G) G->sizeY
#define SizeZG(G) G->sizeZ
#define TimeG(G) G->time
#define MaxTimeG(G) G->maxTime
#define CdtdsG(G) G->cdtds
#define TypeG(G) G->type /*@ \label{fdtdmacrotmzHB} @*/
/* macros that assume the "Grid" is "g" */
/* one-dimensional grid */
#define Hy1(M) Hy1G(g, M) /*@ \label{fdtdmacrotmzHC} @*/
#define Chyh1(M) Chyh1G(g, M)
#define Chye1(M) Chye1G(g, M)
#define Ez1(M) Ez1G(g, M)
#define Ceze1(M) Ceze1G(g, M)
#define Cezh1(M) Cezh1G(g, M)
/* TMz grid */
#define Hx(M, N) HxG(g, M, N)
#define Chxh(M, N) ChxhG(g, M, N)
#define Chxe(M, N) ChxeG(g, M, N)
#define Hy(M, N) HyG(g, M, N)
#define Chyh(M, N) ChyhG(g, M, N)
#define Chye(M, N) ChyeG(g, M, N)
#define Ez(M, N) EzG(g, M, N)
#define Ceze(M, N) CezeG(g, M, N)
#define Cezh(M, N) CezhG(g, M, N)
#define SizeX SizeXG(g)
#define SizeY SizeYG(g)
#define SizeZ SizeZG(g)
#define Time TimeG(g)
#define MaxTime MaxTimeG(g)
#define Cdtds CdtdsG(g)
#define Type TypeG(g) /*@ \label{fdtdmacrotmzHD} @*/
#endif /* matches #ifndef _FDTD_MACRO_TMZ_H */
\end{lstlisting}
\end{program}
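As a simple illustration of how the two sets of macros can be mixed,
the following hypothetical function (not part of any program in this
chapter) copies the $E_z$ field from one grid to another, assuming
the two grids have the same dimensions:
\begin{code}
#include "fdtd-macro-tmz.h"

/* copy the Ez field of grid "src" into grid "g" (the grids
   are assumed to have the same sizeX and sizeY) */
void copyEz(Grid *g, Grid *src) {
  int mm, nn;

  for (mm = 0; mm < SizeX; mm++)      /* SizeX expands to g->sizeX */
    for (nn = 0; nn < SizeY; nn++)
      Ez(mm, nn) = EzG(src, mm, nn);  /* plain macro vs. "G" macro */
}
\end{code}
The plain macros {\tt SizeX}, {\tt SizeY}, and {\tt Ez()} implicitly
reference the pointer {\tt g}, while {\tt EzG()} names the source
grid explicitly.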
Finally, the contents of {\tt fdtd-proto1.h} are shown in Program
\ref{pro:fdtdproto1}. This file provides the prototypes for the
various functions associated with the simulation. Since a pointer to
a {\tt Grid} appears as an argument to these functions, any file that
includes this header will also need to include {\tt fdtd-grid1.h} as
is done in line \ref{fdtdproto1A}.
\begin{program}
{\tt fdtd-proto1.h} Header file providing the function prototypes.
\label{pro:fdtdproto1}
\codemiddle
\begin{lstlisting}
#ifndef _FDTD_PROTO1_H
#define _FDTD_PROTO1_H
#include "fdtd-grid1.h" /*@ \label{fdtdproto1A} @*/
/* Function prototypes */
void gridInit(Grid *g);
void snapshotInit2d(Grid *g);
void snapshot2d(Grid *g);
void updateE2d(Grid *g);
void updateH2d(Grid *g);
#endif
\end{lstlisting}
\end{program}
The file {\tt tmzdemo1.c}, which contains the {\tt main()} function,
is shown in Program \ref{pro:tmzdemo1}. The program begins with the
inclusion of the necessary header files. Note that only three of the
four {\tt fdtd-} header files are explicitly included. However, both
the header files {\tt fdtd-macro-tmz.h} and {\tt fdtd-proto1.h} ensure
that the ``missing'' file, {\tt fdtd-grid1.h}, is included.
Fields are introduced into the grid by hardwiring the value of an
electric-field node as shown in line \ref{tmzdemo1A}. Because the
source function is used in {\tt main()}, the header file {\tt ezinc.h}
had to be included in this file. Other than those small changes, this
program looks similar to many of the 1D programs which we have
previously considered.
\begin{program}
{\tt tmzdemo1.c}
FDTD implementation of a TM$^z$ grid with a Ricker wavelet source at
the center of the grid. No ABC has been implemented, so the
simulation is effectively of a resonator.
\label{pro:tmzdemo1}
\codemiddle
\begin{lstlisting}
/* TMz simulation with Ricker source at center of grid. */
#include "fdtd-alloc1.h"
#include "fdtd-macro-tmz.h"
#include "fdtd-proto1.h"
#include "ezinc.h"
int main()
{
Grid *g;
ALLOC_1D(g, 1, Grid); // allocate memory for Grid
gridInit(g); // initialize the grid
ezIncInit(g);
snapshotInit2d(g); // initialize snapshots
/* do time stepping */
for (Time = 0; Time < MaxTime; Time++) {
updateH2d(g); // update magnetic field
updateE2d(g); // update electric field
Ez(SizeX / 2, SizeY / 2) = ezInc(Time, 0.0); // add a source /*@ \label{tmzdemo1A} @*/
snapshot2d(g); // take a snapshot (if appropriate)
} // end of time-stepping
return 0;
}
\end{lstlisting}
\end{program}
The contents of {\tt gridtmz.c}, which contains the grid
initialization function {\tt gridInit()}, are shown in Program
\ref{pro:gridtmz}. On line \ref{gridtmzA} the type of grid is
defined. This is followed by statements which set the size of the
grid in both the $x$ and $y$ directions, the duration of the
simulation, and the Courant number. Then, on lines \ref{gridtmzB}
through \ref{gridtmzC}, space is allocated for the field arrays and
their associated coefficient arrays. Note that although the $E_z$
array is {\tt SizeX}$\times${\tt SizeY}, $H_x$ is {\tt
SizeX}$\times(\mbox{\tt SizeY}-1)$, and $H_y$ is $(\mbox{\tt
SizeX}-1)\times${\tt SizeY}. The remainder of the program merely
sets the coefficient arrays. Here there is no need to include the
header file {\tt fdtd-proto1.h} since this function does not call any
of the functions listed in that file.
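It is worth noting how the free-space coefficient values set below
follow from the definitions given in Sec.\ \ref{sec:tmzPolarization}.
With $\sigma=0$, $\epsilon=\epsilon_0$, and $\mu=\mu_0$, and writing
$S_c$ for the Courant number $c\Delt/\delta$ (stored in {\tt Cdtds})
and $\eta_0$ for the characteristic impedance of free space (stored
in {\tt imp0}),
\begin{displaymath}
\cezh = \frac{\Delt}{\epsilon_0\delta}
= \frac{c\Delt}{\delta}\frac{1}{c\epsilon_0}
= S_c\eta_0,
\qquad
\chxe = \chye = \frac{\Delt}{\mu_0\delta}
= \frac{c\Delt}{\delta}\frac{1}{c\mu_0}
= \frac{S_c}{\eta_0},
\end{displaymath}
which is why the code sets {\tt Cezh} to {\tt Cdtds * imp0} and the
magnetic-field coefficients to {\tt Cdtds / imp0}.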
\begin{program}
{\tt gridtmz.c} Grid initialization function for a TM$^z$ simulation.
Here the grid is simply homogeneous free space.
\label{pro:gridtmz}
\codemiddle
\begin{lstlisting}
#include "fdtd-macro-tmz.h"
#include "fdtd-alloc1.h"
#include <math.h>
void gridInit(Grid *g) {
double imp0 = 377.0;
int mm, nn;
Type = tmZGrid; /*@ \label{gridtmzA} @*/
SizeX = 101; // x size of domain
SizeY = 81; // y size of domain
MaxTime = 300; // duration of simulation
Cdtds = 1.0 / sqrt(2.0); // Courant number
ALLOC_2D(g->hx, SizeX, SizeY - 1, double); /*@ \label{gridtmzB} @*/
ALLOC_2D(g->chxh, SizeX, SizeY - 1, double);
ALLOC_2D(g->chxe, SizeX, SizeY - 1, double);
ALLOC_2D(g->hy, SizeX - 1, SizeY, double);
ALLOC_2D(g->chyh, SizeX - 1, SizeY, double);
ALLOC_2D(g->chye, SizeX - 1, SizeY, double);
ALLOC_2D(g->ez, SizeX, SizeY, double);
ALLOC_2D(g->ceze, SizeX, SizeY, double);
ALLOC_2D(g->cezh, SizeX, SizeY, double); /*@ \label{gridtmzC} @*/
/* set electric-field update coefficients */
for (mm = 0; mm < SizeX; mm++)
for (nn = 0; nn < SizeY; nn++) {
Ceze(mm, nn) = 1.0;
Cezh(mm, nn) = Cdtds * imp0;
}
/* set magnetic-field update coefficients */
for (mm = 0; mm < SizeX; mm++)
for (nn = 0; nn < SizeY - 1; nn++) {
Chxh(mm, nn) = 1.0;
Chxe(mm, nn) = Cdtds / imp0;
}
for (mm = 0; mm < SizeX - 1; mm++)
for (nn = 0; nn < SizeY; nn++) {
Chyh(mm, nn) = 1.0;
Chye(mm, nn) = Cdtds / imp0;
}
return;
}
\end{lstlisting}
\end{program}
The functions for updating the fields are contained in the file {\tt
updatetmz.c} which is shown in Program \ref{pro:updatetmz}. In line
\ref{updatetmzA} the {\tt Type} is checked (i.e., {\tt g->type} is
checked). If it is {\tt oneDGrid} then only the $H_y$ field is
updated and it only has a single spatial index. If the grid is not a
1D grid, it is assumed to be a TM$^z$ grid. Thus, starting on line
\ref{updatetmzB}, $H_x$ and $H_y$ are updated and they now have two
spatial indices.
\begin{program}
{\tt updatetmz.c} Functions to update the fields. Depending on the
type of grid, the fields can be treated as either one- or
two-dimensional.
\label{pro:updatetmz}
\codemiddle
\begin{lstlisting}
#include "fdtd-macro-tmz.h"
/* update magnetic field */
void updateH2d(Grid *g) {
int mm, nn;
if (Type == oneDGrid) { /*@ \label{updatetmzA} @*/
for (mm = 0; mm < SizeX - 1; mm++)
Hy1(mm) = Chyh1(mm) * Hy1(mm)
+ Chye1(mm) * (Ez1(mm + 1) - Ez1(mm));
} else {
for (mm = 0; mm < SizeX; mm++) /*@ \label{updatetmzB} @*/
for (nn = 0; nn < SizeY - 1; nn++)
Hx(mm, nn) = Chxh(mm, nn) * Hx(mm, nn)
- Chxe(mm, nn) * (Ez(mm, nn + 1) - Ez(mm, nn));
for (mm = 0; mm < SizeX - 1; mm++)
for (nn = 0; nn < SizeY; nn++)
Hy(mm, nn) = Chyh(mm, nn) * Hy(mm, nn)
+ Chye(mm, nn) * (Ez(mm + 1, nn) - Ez(mm, nn));
}
return;
}
/* update electric field */
void updateE2d(Grid *g) {
int mm, nn;
if (Type == oneDGrid) { /*@ \label{updatetmzC} @*/
for (mm = 1; mm < SizeX - 1; mm++)
Ez1(mm) = Ceze1(mm) * Ez1(mm)
+ Cezh1(mm) * (Hy1(mm) - Hy1(mm - 1));
} else {
for (mm = 1; mm < SizeX - 1; mm++)
for (nn = 1; nn < SizeY - 1; nn++)
Ez(mm, nn) = Ceze(mm, nn) * Ez(mm, nn) +
Cezh(mm, nn) * ((Hy(mm, nn) - Hy(mm - 1, nn)) -
(Hx(mm, nn) - Hx(mm, nn - 1)));
}
return;
}
\end{lstlisting}
\end{program}
The function for updating the electric field, {\tt updateE2d()}, is
only responsible for updating the $E_z$ field. However, as shown in
line \ref{updatetmzC}, it still must check the grid type. If this is
a 1D grid, $E_z$ only has a single spatial index and only depends on
$H_y$. If it is not a 1D grid, it is assumed to be a TM$^z$ grid and
$E_z$ now depends on both $H_x$ and $H_y$.
The function to implement the Ricker wavelet is shown in Program
\ref{pro:ricker}. The header file {\tt ezinc.h} is virtually
unchanged from Program \ref{pro:ezincH}. The one minor change is that
instead of including {\tt fdtd2.h}, now the file {\tt
fdtd-macro-tmz.h} is included. Thus {\tt ezinc.h} is not shown.
The initialization function {\tt ezIncInit()} prompts the user to
enter the points per wavelength at which the Ricker wavelet has
maximum energy. In line \ref{ezinc2DA} it also makes a local copy of
the Courant number (since the {\tt Grid} is not passed to the {\tt
ezInc()} function, which would not otherwise know this value).
\begin{program}
{\tt ricker.c}
Function to implement a Ricker wavelet. This is a traveling-wave