\chapter{Homotopy type theory}
\label{cha:basics}
The central new idea in homotopy type theory is that types can be regarded as
spaces in homotopy theory, or higher-dimensional groupoids in category
theory.
\index{classical!homotopy theory|(}
\index{higher category theory|(}
We begin with a brief summary of the connection between homotopy theory
and higher-dimensional category theory.
In classical homotopy theory, a space $X$ is a set of points equipped
with a topology,
\indexsee{space!topological}{topological space}
\index{topological!space}
and a path between points $x$ and $y$ is represented by
a continuous map $p : [0,1] \to X$, where $p(0) = x$ and $p(1) = y$.
\index{path!topological}
\index{topological!path}
This function can be thought of as giving a point in $X$ at each
``moment in time''. For many purposes, strict equality of paths
(meaning, pointwise equal functions) is too fine a notion. For example,
one can define operations of path concatenation (if $p$ is a path from
$x$ to $y$ and $q$ is a path from $y$ to $z$, then the concatenation $p
\ct q$ is a path from $x$ to $z$) and inverses ($\opp p$ is a path
from $y$ to $x$). However, there are natural equations between these
operations that do not hold for strict equality: for example, the path
$p \ct \opp p$ (which walks from $x$ to $y$, and then back along the
same route, as time goes from $0$ to $1$) is not strictly equal to the
identity path (which stays still at $x$ at all times).
The remedy is to consider a coarser notion of equality of paths called
\emph{homotopy}.
\index{homotopy!topological}
A homotopy between a pair of continuous maps $f :
X_1 \to X_2$ and $g : X_1\to X_2$ is a continuous map $H : X_1
\times [0, 1] \to X_2$ satisfying $H(x, 0) = f (x)$ and $H(x, 1) =
g(x)$. In the specific case of paths $p$ and $q$ from $x$ to $y$, a homotopy is a
continuous map $H : [0,1] \times [0,1] \rightarrow X$
such that $H(s,0) = p(s)$ and $H(s,1) = q(s)$ for all $s\in [0,1]$.
In this case we require also that $H(0,t) = x$ and $H(1,t)=y$ for all $t\in [0,1]$,
so that for each $t$ the function $H(\blank,t)$ is again a path from $x$ to $y$;
a homotopy of this sort is said to be \emph{endpoint-preserving} or \emph{rel endpoints}.
Such a homotopy is the
image in $X$ of a square that fills in the space between $p$ and $q$,
which can be thought of as a ``continuous deformation'' between $p$ and
$q$, or a 2-dimensional \emph{path between paths}.\index{path!2-}
For example, because
$p \ct \opp p$ walks out and back along the same route, you know that
you can continuously shrink $p \ct \opp p$ down to the identity
path---it won't, for example, get snagged around a hole in the space.
Homotopy is an equivalence relation, and operations such as
concatenation, inverses, etc., respect it. Moreover, the homotopy
equivalence classes of loops\index{loop} at some point $x_0$ (where two loops $p$
and $q$ are equated when there is a \emph{based} homotopy between them,
which is a homotopy $H$ as above that additionally satisfies $H(0,t) =
H(1,t) = x_0$ for all $t$) form a group called the \emph{fundamental
group}.\index{fundamental!group} This group is an \emph{algebraic invariant} of a space, which
can be used to investigate whether two spaces are \emph{homotopy
equivalent} (there are continuous maps back and forth whose composites
are homotopic to the identity), because equivalent spaces have
isomorphic fundamental groups.
Because homotopies are themselves a kind of 2-dimensional path, there is
a natural notion of 3-dimensional \emph{homotopy between homotopies},\index{path!3-}
and then \emph{homotopy between homotopies between homotopies}, and so
on. This infinite tower of points, paths, homotopies, homotopies between
homotopies, \ldots, equipped with algebraic operations such as the
fundamental group, is an instance of an algebraic structure called a
(weak) \emph{$\infty$-groupoid}. An $\infty$-groupoid\index{.infinity-groupoid@$\infty$-groupoid} consists of a
collection of objects, and then a collection of \emph{morphisms}\indexdef{morphism!in an .infinity-groupoid@in an $\infty$-groupoid} between
objects, and then \emph{morphisms between morphisms}, and so on,
equipped with some complex algebraic structure; a morphism at level $k$ is called a \define{$k$-morphism}\indexdef{k-morphism@$k$-morphism}. Morphisms at each level
have identity, composition, and inverse operations, which are weak in
the sense that they satisfy the groupoid laws (associativity of
composition, identity is a unit for composition, inverses cancel) only
up to morphisms at the next level, and this weakness gives rise to
further structure. For example, because associativity of composition of
morphisms $p \ct (q \ct r) = (p \ct q) \ct r$ is itself a
higher-dimensional morphism, one needs an additional operation relating
various proofs of associativity: the various ways to reassociate $p \ct
(q \ct (r \ct s))$ into $((p \ct q) \ct r) \ct s$ give rise to Mac
Lane's pentagon\index{pentagon, Mac Lane}. Weakness also creates non-trivial interactions between
levels.
Every topological space $X$ has a \emph{fundamental $\infty$-groupoid}
\index{.infinity-groupoid@$\infty$-groupoid!fundamental}
\index{fundamental!.infinity-groupoid@$\infty$-groupoid}
whose
$k$-mor\-ph\-isms are the $k$-dimen\-sional paths in $X$. The weakness of the
$\infty$-group\-oid corresponds directly to the fact that paths form a
group only up to homotopy, with the $(k+1)$-paths serving as the
homotopies between the $k$-paths. Moreover, the view of a space as an
$\infty$-groupoid preserves enough aspects of the space to do homotopy theory:
the fundamental $\infty$-groupoid construction is adjoint\index{adjoint!functor} to the
geometric\index{geometric realization} realization of an $\infty$-groupoid as a space, and this
adjunction preserves homotopy theory (this is called the \emph{homotopy
hypothesis/theorem},
\index{hypothesis!homotopy}
\index{homotopy!hypothesis}
because whether it is a hypothesis or theorem
depends on how you define $\infty$-groupoid). For example, you can
easily define the fundamental group of an $\infty$-groupoid, and if you
calculate the fundamental group of the fundamental $\infty$-groupoid of
a space, it will agree with the classical definition of fundamental
group of that space. Because of this correspondence, homotopy theory
and higher-dimensional category theory are intimately related.
\index{classical!homotopy theory|)}%
\index{higher category theory|)}%
\mentalpause
Now, in homotopy type theory each type can be seen to have the structure
of an $\infty$-groupoid. Recall that for any type $A$, and any $x,y:A$,
we have an identity type $\id[A]{x}{y}$, also written $\idtype[A]{x}{y}$
or just $x=y$. Logically, we may think of elements of $x=y$ as evidence
that $x$ and $y$ are equal, or as identifications of $x$ with
$y$. Furthermore, type theory (unlike, say, first-order logic) allows us
to consider such elements of $\id[A]{x}{y}$ also as individuals which
may be the subjects of further propositions. Therefore, we can
\emph{iterate} the identity type: we can form the type
$\id[{(\id[A]{x}{y})}]{p}{q}$ of identifications between
identifications $p,q$, and the type
$\id[{(\id[{(\id[A]{x}{y})}]{p}{q})}]{r}{s}$, and so on. The structure
of this tower of identity types corresponds precisely to that of the
continuous paths and (higher) homotopies between them in a space, or an
$\infty$-groupoid.\index{.infinity-groupoid@$\infty$-groupoid}
Thus, we will frequently refer to an element $p : \id[A]{x}{y}$ as
a \define{path}
\index{path}
from $x$ to $y$; we call $x$ its \define{start point}
\indexdef{start point of a path}
\indexdef{path!start point of}
and $y$ its \define{end point}.
\indexdef{end point of a path}
\indexdef{path!end point of}
Two paths $p,q : \id[A]{x}{y}$ with the same start and end point are said to be \define{parallel},
\indexdef{parallel paths}
\indexdef{path!parallel}
in which case an element $r : \id[{(\id[A]{x}{y})}]{p}{q}$ can
be thought of as a homotopy, or a morphism between morphisms;
we will often refer to it as a \define{2-path}
\indexdef{path!2-}\indexsee{2-path}{path, 2-}%
or a \define{2-dimensional path}.
\index{dimension!of paths}%
\indexsee{2-dimensional path}{path, 2-}\indexsee{path!2-dimensional}{path, 2-}%
Similarly, $\id[{(\id[{(\id[A]{x}{y})}]{p}{q})}]{r}{s}$ is the type of
\define{3-dimensional paths}
\indexdef{path!3-}\indexsee{3-path}{path, 3-}\indexsee{3-dimensional path}{path, 3-}\indexsee{path!3-dimensional}{path, 3-}%
between two parallel 2-dimensional paths, and so on. If the
type $A$ is ``set-like'', such as \nat, these iterated identity types
will be uninteresting (see \cref{sec:basics-sets}), but in the
general case they can model non-trivial homotopy types.
%% (Obviously, the
%% notation ``$\id[A]{x}{y}$'' has its limitations here. The style
%% $\idtype[A]{x}{y}$ is only slightly better in iterations:
%% $\idtype[{\idtype[{\idtype[A]{x}{y}}]{p}{q}}]{r}{s}$.)
An important difference between homotopy type theory and classical homotopy theory is that homotopy type theory provides a \emph{synthetic}
\index{synthetic mathematics}%
\index{geometry, synthetic}%
\index{Euclid of Alexandria}%
description of spaces, in the following sense. Synthetic geometry is geometry in the style of Euclid~\cite{Euclid}: one starts from some basic notions (points and lines), constructions (a line connecting any two points), and axioms
(all right angles are equal), and deduces consequences logically. This is in contrast with analytic
\index{analytic mathematics}%
geometry, where notions such as points and lines are represented concretely using cartesian coordinates in $\R^n$---lines are sets of points---and the basic constructions and axioms are derived from this representation. While classical homotopy theory is analytic (spaces and paths are made of points), homotopy type theory is synthetic: points, paths, and paths between paths are basic, indivisible, primitive notions.
Moreover, one of the amazing things about homotopy type theory is that all of the basic constructions and axioms---all of the
higher groupoid structure---arises automatically from the induction
principle for identity types.
Recall from \cref{sec:identity-types} that this says that if
\begin{itemize}
\item for every $x,y:A$ and every $p:\id[A]xy$ we have a type $D(x,y,p)$, and
\item for every $a:A$ we have an element $d(a):D(a,a,\refl a)$,
\end{itemize}
then
\begin{itemize}
\item there exists an element $\indid{A}(D,d,x,y,p):D(x,y,p)$ for \emph{every} two elements $x,y:A$ and $p:\id[A]xy$, such that $\indid{A}(D,d,a,a,\refl a) \jdeq d(a)$.
\end{itemize}
In other words, given dependent functions
\begin{align*}
D & :\prd{x,y:A} (\id{x}{y}) \to \type\\
d & :\prd{a:A} D(a,a,\refl{a})
\end{align*}
there is a dependent function
\[\indid{A}(D,d):\prd{x,y:A}{p:\id{x}{y}} D(x,y,p)\]
such that
\begin{equation}\label{eq:Jconv}
\indid{A}(D,d,a,a,\refl{a})\jdeq d(a)
\end{equation}
for every $a:A$.
Usually, every time we apply this induction rule we will either not care about the specific function being defined, or we will immediately give it a different name.
Informally, the induction principle for identity types says that if we want to construct an object (or prove a statement) which depends on an inhabitant $p:\id[A]xy$ of an identity type, then it suffices to perform the construction (or the proof) in the special case when $x$ and $y$ are the same (judgmentally) and $p$ is the reflexivity element $\refl{x}:x=x$ (judgmentally).
When writing informally, we may express this with a phrase such as ``by induction, it suffices to assume\dots''.
This reduction to the ``reflexivity case'' is analogous to the reduction to the ``base case'' and ``inductive step'' in an ordinary proof by induction on the natural numbers, and also to the ``left case'' and ``right case'' in a proof by case analysis on a disjoint union or disjunction.\index{induction principle!for identity type}%
The ``conversion rule''~\eqref{eq:Jconv} is less familiar in the context of proof by induction on natural numbers, but there is an analogous notion in the related concept of definition by recursion.
If a sequence\index{sequence} $(a_n)_{n\in \mathbb{N}}$ is defined by giving $a_0$ and specifying $a_{n+1}$ in terms of $a_n$, then in fact the $0^{\mathrm{th}}$ term of the resulting sequence \emph{is} the given one, and the given recurrence relation relating $a_{n+1}$ to $a_n$ holds for the resulting sequence.
(This may seem so obvious as to not be worth saying, but if we view a definition by recursion as an algorithm\index{algorithm} for calculating values of a sequence, then it is precisely the process of executing that algorithm.)
The rule~\eqref{eq:Jconv} is analogous: it says that if we define an object $f(p)$ for all $p:x=y$ by specifying what the value should be when $p$ is $\refl{x}:x=x$, then the value we specified is in fact the value of $f(\refl{x})$.
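For readers who like to follow along in a proof assistant, the induction principle can be transcribed more or less verbatim.
The following sketch is in Lean; the name \texttt{idInd} is our own choice, and Lean's identity type validates further principles (such as proof irrelevance) that we do not assume here, so the snippet only records the shape of the rule and of its conversion rule.
\begin{verbatim}
universe u v

-- One possible rendering of the induction principle for identity types:
-- D is the family of goals and d the value on reflexivity.  The body uses
-- Lean's primitive eliminator Eq.rec.
def idInd {A : Type u} (D : (x y : A) -> x = y -> Sort v)
    (d : (a : A) -> D a a rfl) (x y : A) (p : x = y) : D x y p :=
  @Eq.rec A x (D x) (d x) y p

-- The conversion rule holds by definition (iota-reduction of Eq.rec):
example {A : Type u} (D : (x y : A) -> x = y -> Sort v)
    (d : (a : A) -> D a a rfl) (a : A) :
    idInd D d a a rfl = d a :=
  rfl
\end{verbatim}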
This induction principle endows each type with the structure of an $\infty$-groupoid\index{.infinity-groupoid@$\infty$-groupoid}, and each function between two types the structure of an $\infty$-functor\index{.infinity-functor@$\infty$-functor} between two such groupoids. This is interesting from a mathematical point of view, because it gives a new way to work with
$\infty$-groupoids. It is interesting from a type-theoretic point of view, because it reveals new operations that are associated with each type and function. In the remainder of this chapter, we begin to explore this structure.
\section{Types are higher groupoids}
\label{sec:equality}
\index{type!identity|(}%
\index{path|(}%
\index{.infinity-groupoid@$\infty$-groupoid!structure of a type|(}%
We now derive from the induction principle the beginnings of the structure of a higher groupoid.
We begin with symmetry of equality, which, in topological language, means that ``paths can be reversed''.
\begin{lem}\label{lem:opp}
For every type $A$ and every $x,y:A$ there is a function
\begin{equation*}
(x= y)\to(y= x)
\end{equation*}
denoted $p\mapsto \opp{p}$, such that $\opp{\refl{x}}\jdeq\refl{x}$ for each $x:A$.
We call $\opp{p}$ the \define{inverse} of $p$.
\indexdef{path!inverse}%
\indexdef{inverse!of path}%
\index{equality!symmetry of}%
\index{symmetry!of equality}%
\end{lem}
Since this is our first time stating something as a ``Lemma'' or ``Theorem'', let us pause to consider what that means.
Recall that propositions (statements susceptible to proof) are identified with types, whereas lemmas and theorems (statements that have been proven) are identified with \emph{inhabited} types.
Thus, the statement of a lemma or theorem should be translated into a type, as in \cref{sec:pat}, and its proof translated into an inhabitant of that type.
According to the interpretation of the universal quantifier ``for every'', the type corresponding to \cref{lem:opp} is
\[ \prd{A:\UU}{x,y:A} (x= y)\to(y= x). \]
The proof of \cref{lem:opp} will consist of constructing an element of this type, i.e.\ deriving the judgment $f:\prd{A:\UU}{x,y:A} (x= y)\to(y= x)$ for some $f$.
We then introduce the notation $\opp{(\blank)}$ for this element $f$, in which the arguments $A$, $x$, and $y$ are omitted and inferred from context.
(As remarked in \cref{sec:types-vs-sets}, the secondary statement ``$\opp{\refl{x}}\jdeq\refl{x}$ for each $x:A$'' should be regarded as a separate judgment.)
\begin{proof}[First proof]
Assume given $A:\UU$, and
let $D:\prd{x,y:A}(x= y) \to \type$ be the type family defined by $D(x,y,p)\defeq (y= x)$.
In other words, $D$ is a function assigning to any $x,y:A$ and $p:x=y$ a type, namely the type $y=x$.
Then we have an element
\begin{equation*}
d\defeq \lam{x} \refl{x}:\prd{x:A} D(x,x,\refl{x}).
\end{equation*}
Thus, the induction principle for identity types gives us an element
\narrowequation{ \indid{A}(D,d,x,y,p): (y= x)}
for each $p:(x= y)$.
We can now define the desired function $\opp{(\blank)}$ to be $\lam{p} \indid{A}(D,d,x,y,p)$, i.e.\ we set $\opp{p} \defeq \indid{A}(D,d,x,y,p)$.
The conversion rule~\eqref{eq:Jconv} gives $\opp{\refl{x}}\jdeq \refl{x}$, as required.
\end{proof}
We have written out this proof in a very formal style, which may be helpful while the induction rule on identity types is unfamiliar.
To be even more formal, we could say that \cref{lem:opp} and its proof together consist of the judgment
\begin{narrowmultline*}
\lam{A}{x}{y}{p} \indid{A}((\lam{x}{y}{p} (y=x)), (\lam{x} \refl{x}), x, y, p)
\narrowbreak : \prd{A:\UU}{x,y:A} (x= y)\to(y= x)
\end{narrowmultline*}
(along with an additional equality judgment).
However, eventually we prefer to use more natural language, such as in the following equivalent proof.
\begin{proof}[Second proof]
We want to construct, for each $x,y:A$ and $p:x=y$, an element $\opp{p}:y=x$.
By induction, it suffices to do this in the case when $y$ is $x$ and $p$ is $\refl{x}$.
But in this case, the type $x=y$ of $p$ and the type $y=x$ in which we are trying to construct $\opp{p}$ are both simply $x=x$.
Thus, in the ``reflexivity case'', we can define $\opp{\refl{x}}$ to be simply $\refl{x}$.
The general case then follows by the induction principle, and the conversion rule $\opp{\refl{x}}\jdeq\refl{x}$ is precisely the proof in the reflexivity case that we gave.
\end{proof}
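In a proof assistant, the second proof becomes a one-line definition.
Here is a sketch in Lean (the name \texttt{inv} is ours; Lean's library calls this operation \texttt{Eq.symm}), in which matching the path against \texttt{rfl} plays the role of path induction:
\begin{verbatim}
universe u

-- Inverse of a path by path induction: it suffices to give the value on
-- reflexivity, and the computation rule for the reflexivity case holds by
-- definition.
def inv {A : Type u} {x y : A} : x = y -> y = x
  | rfl => rfl
\end{verbatim}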
We will write out the next few proofs in both styles, to help the reader become accustomed to the latter one.
Next we prove the transitivity of equality, or equivalently we ``concatenate paths''.
\begin{lem}\label{lem:concat}
For every type $A$ and every $x,y,z:A$ there is a function
\begin{equation*}
(x= y) \to (y= z)\to (x= z)
\end{equation*}
written $p \mapsto q \mapsto p\ct q$, such that $\refl{x}\ct \refl{x}\jdeq \refl{x}$ for any $x:A$.
We call $p\ct q$ the \define{concatenation} or \define{composite} of $p$ and $q$.
\indexdef{path!concatenation}%
\indexdef{path!composite}%
\indexdef{concatenation of paths}%
\indexdef{composition!of paths}%
\index{equality!transitivity of}%
\index{transitivity!of equality}%
\end{lem}
\begin{proof}[First proof]
Let $D:\prd{x,y:A} (x=y) \to \type$ be the type family
\begin{equation*}
D(x,y,p)\defeq \prd{z:A}{q:y=z} (x=z).
\end{equation*}
Note that $D(x,x,\refl x) \jdeq \prd{z:A}{q:x=z} (x=z)$.
Thus, in order to apply the induction principle for identity types to this $D$, we need a function of type
\begin{equation}\label{eq:concatD}
\prd{x:A} D(x,x,\refl{x})
\end{equation}
which is to say, of type
\[ \prd{x,z:A}{q:x=z} (x=z). \]
Now let $E:\prd{x,z:A}{q:x=z}\type$ be the type family $E(x,z,q)\defeq (x=z)$.
Note that $E(x,x,\refl x) \jdeq (x=x)$.
Thus, we have the function
\begin{equation*}
e(x) \defeq \refl{x} : E(x,x,\refl{x}).
\end{equation*}
By the induction principle for identity types applied to $E$, we obtain a function
\begin{equation*}
d : \prd{x,z:A}{q:x=z} E(x,z,q).
\end{equation*}
But $E(x,z,q)\jdeq (x=z)$, so the type of $d$ is~\eqref{eq:concatD}.
Thus, we can use this function $d$ and apply the induction principle for identity types to $D$, to obtain our desired function of type
\begin{equation*}
\prd{x,y,z:A} (x=y) \to (y=z) \to (x=z).
\end{equation*}
The conversion rules for the two induction principles give us $\refl{x}\ct \refl{x}\jdeq \refl{x}$ for any $x:A$.
\end{proof}
\begin{proof}[Second proof]
We want to construct, for every $x,y,z:A$ and every $p:x=y$ and $q:y=z$, an element of $x=z$.
By induction on $p$, it suffices to assume that $y$ is $x$ and $p$ is $\refl{x}$.
In this case, the type $y=z$ of $q$ is $x=z$.
Now by induction on $q$, it suffices to assume also that $z$ is $x$ and $q$ is $\refl{x}$.
But in this case, $x=z$ is $x=x$, and we have $\refl{x}:(x=x)$.
\end{proof}
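The second proof can be transcribed in the same style; the following Lean sketch (our name \texttt{concat}; Lean's own \texttt{Eq.trans} is set up slightly differently) performs the double induction by matching both paths against reflexivity:
\begin{verbatim}
universe u

-- Concatenation by induction on both paths, mirroring the second proof:
-- both p and q are reduced to reflexivity, and the value there is rfl.
def concat {A : Type u} {x y z : A} : x = y -> y = z -> x = z
  | rfl, rfl => rfl
\end{verbatim}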
The reader may well feel that we have given an overly convoluted proof of this lemma.
In fact, we could stop after the induction on $p$, since at that point what we want to produce is an equality $x=z$, and we already have such an equality, namely $q$.
Why do we go on to do another induction on $q$?
The answer is that, as described in the introduction, we are doing \emph{proof-relevant} mathematics.
\index{mathematics!proof-relevant}%
When we prove a lemma, we are defining an inhabitant of some type, and it can matter what \emph{specific} element we defined in the course of the proof, not merely the type inhabited by that element (that is, the \emph{statement} of the lemma).
\cref{lem:concat} has three obvious proofs: we could do induction over $p$, induction over $q$, or induction over both of them.
If we proved it three different ways, we would have three different elements of the same type.
It's not hard to show that these three elements are equal (see \cref{ex:basics:concat}), but as they are not \emph{definitionally} equal, there can still be reasons to prefer one over another.
In the case of \cref{lem:concat}, the difference hinges on the computation rule.
If we proved the lemma using a single induction over $p$, then we would end up with a computation rule of the form $\refl{y} \ct q \jdeq q$.
If we proved it with a single induction over $q$, we would have instead $p\ct\refl{y}\jdeq p$, while proving it with a double induction (as we did) gives only $\refl{x}\ct\refl{x} \jdeq \refl{x}$.
\index{mathematics!formalized}%
The asymmetrical computation rules can sometimes be convenient when doing formalized mathematics, as they allow the computer to simplify more things automatically.
However, in informal mathematics, and arguably even in the formalized case, it can be confusing to have a concatenation operation which behaves asymmetrically and to have to remember which side is the ``special'' one.
Treating both sides symmetrically makes for more robust proofs; this is why we have given the proof that we did.
(However, this is admittedly a stylistic choice.)
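To make the comparison concrete, here are the two single-induction variants in the same Lean style (the names are ours).
Lean's identity type is proof-irrelevant, so within Lean the three resulting elements cannot actually be told apart; the sketch only records the shape of each induction, and the computation rules referred to in the comments are those of the type theory of this book.
\begin{verbatim}
universe u

-- Induction on the first path only: in the setting of this book, this
-- definition satisfies the judgmental rule that refl concatenated with q
-- is q.
def concatL {A : Type u} {x y z : A} : x = y -> y = z -> x = z
  | rfl, q => q

-- Induction on the second path only: here p concatenated with refl is
-- judgmentally p.
def concatR {A : Type u} {x y z : A} : x = y -> y = z -> x = z
  | p, rfl => p
\end{verbatim}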
The table below summarizes the ``equality'', ``homotopical'', and ``higher-groupoid'' points of view on what we have done so far.
\begin{center}
\medskip
\begin{tabular}{ccc}
\toprule
Equality & Homotopy & $\infty$-Groupoid\\
\midrule
reflexivity\index{equality!reflexivity of} & constant path & identity morphism\\
symmetry\index{equality!symmetry of} & inversion of paths & inverse morphism\\
transitivity\index{equality!transitivity of} & concatenation of paths & composition of morphisms\\
\bottomrule
\end{tabular}
\medskip
\end{center}
In practice, transitivity is often applied to prove an equality by a chain of intermediate steps.
We will use the common notation for this, such as $a=b=c=d$.
If the intermediate expressions are long, or we want to specify the witness of each equality, we may write
\begin{align*}
a &= b & \text{(by $p$)}\\ &= c &\text{(by $q$)} \\ &= d &\text{(by $r$)}.
\end{align*}
In either case, the notation indicates construction of the element $(p\ct q)\ct r: (a=d)$.
(We choose left-associativity for concreteness, although in view of \cref{thm:omg}\ref{item:omg4} below it makes little difference.)
If it should happen that $b$ and $c$, say, are judgmentally equal, then we may write
\begin{align*}
a &= b & \text{(by $p$)}\\ &\jdeq c \\ &= d &\text{(by $r$)}
\end{align*}
to indicate construction of $p\ct r : (a=d)$.
We also follow common mathematical practice in not requiring the justifications in this notation (``by $p$'' and ``by $r$'') to supply the exact witness needed; instead we allow them to simply mention the most important (or least obvious) ingredient in constructing that witness.
For instance, if ``Lemma A'' states that for all $x$ and $y$ we have $f(x)=g(y)$, then we may write ``by Lemma A'' as a justification for the step $f(a) = g(b)$, trusting the reader to deduce that we apply Lemma A with $x\defeq a$ and $y\defeq b$.
We may also omit a justification entirely if we trust the reader to be able to guess it.
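This style of chained reasoning is supported directly by proof assistants; for instance, Lean's \texttt{calc} blocks play much the same role as the displays above.
A small hypothetical example, with assumed witnesses \texttt{p}, \texttt{q}, \texttt{r}:
\begin{verbatim}
universe u

-- A chain a = b = c = d with an explicit witness for each step; the calc
-- block assembles the concatenation of p, q and r behind the scenes.
example {A : Type u} {a b c d : A} (p : a = b) (q : b = c) (r : c = d) :
    a = d :=
  calc a = b := p
       _ = c := q
       _ = d := r
\end{verbatim}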
Now, because of proof-relevance, we can't stop after proving ``symmetry'' and ``transitivity'' of equality: we need to know that these \emph{operations} on equalities are well-behaved.
(This issue is invisible in set theory, where symmetry and transitivity are mere \emph{properties} of equality, rather than structure on
paths.)
From the homotopy-theoretic point of view, concatenation and inversion are just the ``first level'' of higher groupoid structure --- we also need coherence\index{coherence} laws on these operations, and analogous operations at higher dimensions.
For instance, we need to know that concatenation is \emph{associative}, and that inversion provides \emph{inverses} with respect to concatenation.
\begin{lem}\label{thm:omg}%[The $\omega$-groupoid structure of types]
\index{associativity!of path concatenation}%
\index{unit!law for path concatenation}%
Suppose $A:\type$, that $x,y,z,w:A$ and that $p:x= y$ and $q:y = z$ and $r:z=w$.
We have the following:
\begin{enumerate}
\item $p= p\ct \refl{y}$ and $p = \refl{x} \ct p$.\label{item:omg1}
\item $\opp{p}\ct p= \refl{y}$ and $p\ct \opp{p}= \refl{x}$.
\item $\opp{(\opp{p})}= p$.
\item $p\ct (q\ct r)= (p\ct q)\ct r$.\label{item:omg4}
\end{enumerate}
\end{lem}
Note, in particular, that \ref{item:omg1}--\ref{item:omg4} are themselves propositional equalities, living in the identity types \emph{of} identity types, such as $p=_{x=y}q$ for $p,q:x=y$.
Topologically, they are \emph{paths of paths}, i.e.\ homotopies.
It is a familiar fact in topology that when we concatenate a path $p$ with the reversed path $\opp p$, we don't literally obtain a constant path (which corresponds to the equality $\refl{}$ in type theory) --- instead we have a homotopy, or higher path, from $p\ct\opp p$ to the constant path.
\begin{proof}[Proof of~\cref{thm:omg}]
All the proofs use the induction principle for equalities.
\begin{enumerate}
\item \emph{First proof:} let $D:\prd{x,y:A} (x=y) \to \type$ be the type family given by
\begin{equation*}
D(x,y,p)\defeq (p= p\ct \refl{y}).
\end{equation*}
Then $D(x,x,\refl{x})$ is $\refl{x}=\refl{x}\ct\refl{x}$.
Since $\refl{x}\ct\refl{x}\jdeq\refl{x}$, it follows that $D(x,x,\refl{x})\jdeq (\refl{x}=\refl{x})$.
Thus, there is a function
\begin{equation*}
d\defeq\lam{x} \refl{\refl{x}}:\prd{x:A} D(x,x,\refl{x}).
\end{equation*}
Now the induction principle for identity types gives an element $\indid{A}(D,d,p):(p= p\ct\refl{y})$ for each $p:x= y$.
The other equality is proven similarly.
\mentalpause
\noindent
\emph{Second proof:} by induction on $p$, it suffices to assume that $y$ is $x$ and that $p$ is $\refl x$.
But in this case, we have $\refl{x}\ct\refl{x}\jdeq\refl{x}$.
\item \emph{First proof:} let $D:\prd{x,y:A} (x=y) \to \type$ be the type family given by
\begin{equation*}
D(x,y,p)\defeq (\opp{p}\ct p= \refl{y}).
\end{equation*}
Then $D(x,x,\refl{x})$ is $\opp{\refl{x}}\ct\refl{x}=\refl{x}$.
Since $\opp{\refl{x}}\jdeq\refl{x}$ and $\refl{x}\ct\refl{x}\jdeq\refl{x}$, we get that $D(x,x,\refl{x})\jdeq (\refl{x}=\refl{x})$.
Hence we find the function
\begin{equation*}
d\defeq\lam{x} \refl{\refl{x}}:\prd{x:A} D(x,x,\refl{x}).
\end{equation*}
Now path induction gives an element $\indid{A}(D,d,p):\opp{p}\ct p=\refl{y}$ for each $p:x= y$ in $A$.
The other equality is similar.
\mentalpause
\noindent \emph{Second proof:} by induction, it suffices to assume $p$ is $\refl x$.
But in this case, we have $\opp{p} \ct p \jdeq \opp{\refl x} \ct \refl x \jdeq \refl x$.
\item \emph{First proof:} let $D:\prd{x,y:A} (x=y) \to \type$ be the type family given by
\begin{equation*}
D(x,y,p)\defeq (\opp{\opp{p}}= p).
\end{equation*}
Then $D(x,x,\refl{x})$ is the type $(\opp{\opp{\refl x}}=\refl{x})$.
But since $\opp{\refl{x}}\jdeq \refl{x}$ for each $x:A$, we have $\opp{\opp{\refl{x}}}\jdeq \opp{\refl{x}} \jdeq\refl{x}$, and thus $D(x,x,\refl{x})\jdeq(\refl{x}=\refl{x})$.
Hence we find the function
\begin{equation*}
d\defeq\lam{x} \refl{\refl{x}}:\prd{x:A} D(x,x,\refl{x}).
\end{equation*}
Now path induction gives an element $\indid{A}(D,d,p):\opp{\opp{p}}= p$ for each $p:x= y$.
\mentalpause
\noindent \emph{Second proof:} by induction, it suffices to assume $p$ is $\refl x$.
But in this case, we have $\opp{\opp{p}}\jdeq \opp{\opp{\refl x}} \jdeq \refl x$.
\item \emph{First proof:} let $D_1:\prd{x,y:A} (x=y) \to \type$ be the type family given by
\begin{equation*}
D_1(x,y,p)\defeq\prd{z,w:A}{q:y= z}{r:z= w} \big(p\ct (q\ct r)= (p\ct q)\ct r\big).
\end{equation*}
Then $D_1(x,x,\refl{x})$ is
\begin{equation*}
\prd{z,w:A}{q:x= z}{r:z= w} \big(\refl{x}\ct(q\ct r)= (\refl{x}\ct q)\ct r\big).
\end{equation*}
To construct an element of this type, let $D_2:\prd{x,z:A}{q:x=z} \type$ be the type family
\begin{equation*}
D_2 (x,z,q) \defeq \prd{w:A}{r:z=w} \big(\refl{x}\ct(q\ct r)= (\refl{x}\ct q)\ct r\big).
\end{equation*}
Then $D_2(x,x,\refl{x})$ is
\begin{equation*}
\prd{w:A}{r:x=w} \big(\refl{x}\ct(\refl{x}\ct r)= (\refl{x}\ct \refl{x})\ct r\big).
\end{equation*}
To construct an element of \emph{this} type, let $D_3:\prd{x,w:A}{r:x=w} \type$ be the type family
\begin{equation*}
D_3(x,w,r) \defeq \big(\refl{x}\ct(\refl{x}\ct r)= (\refl{x}\ct \refl{x})\ct r\big).
\end{equation*}
Then $D_3(x,x,\refl{x})$ is
\begin{equation*}
\big(\refl{x}\ct(\refl{x}\ct \refl{x})= (\refl{x}\ct \refl{x})\ct \refl{x}\big)
\end{equation*}
which is definitionally equal to the type $(\refl{x} = \refl{x})$, and is therefore inhabited by $\refl{\refl{x}}$.
Applying the path induction rule three times, therefore, we obtain an element of the overall desired type.
\mentalpause
\noindent \emph{Second proof:} by induction, it suffices to assume $p$, $q$, and $r$ are all $\refl x$.
But in this case, we have
\begin{align*}
p\ct (q\ct r)
&\jdeq \refl{x}\ct(\refl{x}\ct \refl{x})\\
&\jdeq \refl{x}\\
&\jdeq (\refl{x}\ct \refl x)\ct \refl x\\
&\jdeq (p\ct q)\ct r.
\end{align*}
Thus, we have $\refl{\refl{x}}$ inhabiting this type. \qedhere
\end{enumerate}
\end{proof}
\begin{rmk}
There are other ways to define these higher paths.
For instance, in \cref{thm:omg}\ref{item:omg4} we might do induction only over one or two paths rather than all three.
Each possibility will produce a \emph{definitionally} different proof, but they will all be equal to each other.
Such an equality between any two particular proofs can, again, be proven by induction, reducing all the paths in question to reflexivities and then observing that both proofs reduce themselves to reflexivities.
\end{rmk}
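Each clause of \cref{thm:omg} can likewise be checked mechanically by reducing all the paths involved to reflexivity.
The following Lean examples record one such choice of inductions, using the library operations \texttt{Eq.trans} and \texttt{Eq.symm} for concatenation and inverse; since Lean's identity type is proof-irrelevant, these equations are in any case trivial there, and the sketch only exhibits the induction pattern used in the proofs above.
\begin{verbatim}
universe u
variable {A : Type u} {x y z w : A}

-- unit laws
example (p : x = y) : p = p.trans rfl := by cases p; rfl
example (p : x = y) : p = Eq.trans rfl p := by cases p; rfl
-- inverse laws
example (p : x = y) : p.symm.trans p = rfl := by cases p; rfl
example (p : x = y) : p.trans p.symm = rfl := by cases p; rfl
-- inversion is an involution
example (p : x = y) : p.symm.symm = p := by cases p; rfl
-- associativity
example (p : x = y) (q : y = z) (r : z = w) :
    p.trans (q.trans r) = (p.trans q).trans r := by
  cases p; cases q; cases r; rfl
\end{verbatim}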
In view of \cref{thm:omg}\ref{item:omg4}, we will often write $p\ct q\ct r$ for $(p\ct q)\ct r$, and similarly $p\ct q\ct r \ct s$ for $((p\ct q)\ct r)\ct s$ and so on.
We choose left-associativity for definiteness, but it makes no real difference.
We generally trust the reader to insert instances of \cref{thm:omg}\ref{item:omg4} to reassociate such expressions as necessary.
We are still not really done with the higher groupoid structure: the paths~\ref{item:omg1}--\ref{item:omg4} must also satisfy their own higher coherence\index{coherence} laws, which are themselves higher paths,
\index{associativity!of path concatenation!coherence of}%
\index{globular operad}%
\index{operad}%
\index{groupoid!higher}%
and so on ``all the way up to infinity'' (this can be made precise using e.g.\ the notion of a globular operad).
However, for most purposes it is unnecessary to make the whole infinite-dimensional structure explicit.
One of the nice things about homotopy type theory is that all of this structure can be \emph{proven} starting from only the inductive property of identity types, so we can make explicit as much or as little of it as we need.
In particular, in this book we will not need any of the complicated combinatorics involved in making precise notions such as ``coherent structure at all higher levels''.
In addition to ordinary paths, we will use paths of paths (i.e.\ elements of a type $p =_{x=_A y} q$), which as remarked previously we call \emph{2-paths}\index{path!2-} or \emph{2-dimensional paths}, and perhaps occasionally paths of paths of paths (i.e.\ elements of a type $r = _{p =_{x=_A y} q} s$), which we call \emph{3-paths}\index{path!3-} or \emph{3-dimensional paths}.
It is possible to define a general notion of \emph{$n$-dimensional path}
\indexdef{path!n-@$n$-}%
\indexsee{n-path@$n$-path}{path, $n$-}%
\indexsee{n-dimensional path@$n$-dimensional path}{path, $n$-}%
\indexsee{path!n-dimensional@$n$-dimensional}{path, $n$-}%
(see \cref{ex:npaths}), but we will not need it.
We will, however, use one particularly important and simple case of higher paths, which is when the start and end points are the same.
In set theory, the proposition $a=a$ is entirely uninteresting, but in homotopy theory, paths from a point to itself are called \emph{loops}\index{loop} and carry lots of interesting higher structure.
Thus, given a type $A$ with a point $a:A$, we define its \define{loop space}
\index{loop space}%
$\Omega(A,a)$ to be the type $\id[A]{a}{a}$.
We may sometimes write simply $\Omega A$ if the point $a$ is understood from context.
Since any two elements of $\Omega A$ are paths with the same start and end points, they can be concatenated;
thus we have an operation $\Omega A\times \Omega A\to \Omega A$.
More generally, the higher groupoid structure of $A$ gives $\Omega A$ the analogous structure of a ``higher group''.
It can also be useful to consider the loop space\index{loop space!iterated}\index{iterated loop space} \emph{of} the loop space of $A$, which is the space of 2-dimensional loops on the identity loop at $a$.
This is written $\Omega^2(A,a)$ and represented in type theory by the type $\id[({\id[A]{a}{a}})]{\refl{a}}{\refl{a}}$.
While $\Omega^2(A,a)$, as a loop space, is again a ``higher group'', it now also has some additional structure resulting from the fact that its elements are 2-dimensional loops between 1-dimensional loops.
\begin{thm}[Eckmann--Hilton]\label{thm:EckmannHilton}
The composition operation on the second loop space
%
\begin{equation*}
\Omega^2(A)\times \Omega^2(A)\to \Omega^2(A)
\end{equation*}
is commutative: $\alpha\ct\beta = \beta\ct\alpha$, for any $\alpha, \beta:\Omega^2(A)$.
\index{Eckmann--Hilton argument}%
\end{thm}
\begin{proof}
First, observe that the composition of $1$-loops $\Omega A\times \Omega A\to \Omega A$ induces an operation
\[
\star : \Omega^2(A)\times \Omega^2(A)\to \Omega^2(A)
\]
as follows: consider elements $a, b, c : A$ and 1- and 2-paths,
%
\begin{align*}
p &: a = b, & r &: b = c \\
q &: a = b, & s &: b = c \\
\alpha &: p = q, & \beta &: r = s
\end{align*}
%
as depicted in the following diagram (with paths drawn as arrows).
% Changed this to xymatrix in the name of having uniform source code,
% maybe the original using xy looked better (I think it was too big).
% It is commented out below in case you want to reinstate it.
\[
\xymatrix@+5em{
{a} \rtwocell<10>^p_q{\alpha}
&
{b} \rtwocell<10>^r_s{\beta}
&
{c}
}
\]
Composing the upper and lower 1-paths, respectively, we get two paths $p\ct r,\ q\ct s : a = c$, and there is then a ``horizontal composition''
%
\begin{equation*}
\alpha\hct\beta : p\ct r = q\ct s
\end{equation*}
%
between them, defined as follows.
First, we define $\alpha \rightwhisker r : p\ct r = q\ct r$ by path induction on $r$, so that
\[ \alpha \rightwhisker \refl{b} \jdeq \opp{\mathsf{ru}_p} \ct \alpha \ct \mathsf{ru}_q \]
where $\mathsf{ru}_p : p = p \ct \refl{b}$ is the right unit law from \cref{thm:omg}\ref{item:omg1}.
We could similarly define $\rightwhisker$ by induction on $\alpha$, or on all paths in sight, resulting in different judgmental equalities, but for present purposes the definition by induction on $r$ will make things simpler.
Similarly, we define $q\leftwhisker \beta : q\ct r = q\ct s$ by induction on $q$, so that
\[ \refl{b} \leftwhisker \beta \jdeq \opp{\mathsf{lu}_r} \ct \beta \ct \mathsf{lu}_s \]
where $\mathsf{lu}_r$ denotes the left unit law.
The operations $\leftwhisker$ and $\rightwhisker$ are called \define{whiskering}\indexdef{whiskering}.
Next, since $\alpha \rightwhisker r$ and $q\leftwhisker \beta$ are composable 2-paths, we can define the \define{horizontal composition}
\indexdef{horizontal composition!of paths}%
\indexdef{composition!of paths!horizontal}%
by:
\[
\alpha\hct\beta\ \defeq\ (\alpha\rightwhisker r) \ct (q\leftwhisker \beta).
\]
Now suppose that $a \jdeq b \jdeq c$, so that all the 1-paths $p$, $q$, $r$, and $s$ are elements of $\Omega(A,a)$, and assume moreover that $p\jdeq q \jdeq r \jdeq s\jdeq \refl{a}$, so that $\alpha:\refl{a} = \refl{a}$ and $\beta:\refl{a} = \refl{a}$ are composable in both orders.
In that case, we have
\begin{align*}
\alpha\hct\beta
&\jdeq (\alpha\rightwhisker\refl{a}) \ct (\refl{a}\leftwhisker \beta)\\
&= \opp{\mathsf{ru}_{\refl{a}}} \ct \alpha \ct \mathsf{ru}_{\refl{a}} \ct \opp{\mathsf{lu}_{\refl a}} \ct \beta \ct \mathsf{lu}_{\refl{a}}\\
&\jdeq \opp{\refl{\refl{a}}} \ct \alpha \ct \refl{\refl{a}} \ct \opp{\refl{\refl a}} \ct \beta \ct \refl{\refl{a}}\\
&= \alpha \ct \beta.
\end{align*}
(Recall that $\mathsf{ru}_{\refl{a}} \jdeq \mathsf{lu}_{\refl{a}} \jdeq \refl{\refl{a}}$, by the computation rule for path induction.)
On the other hand, we can define another horizontal composition analogously by
\[
\alpha\hct'\beta\ \defeq\ (p\leftwhisker \beta)\ct (\alpha\rightwhisker s)
\]
and we similarly learn that
\[
\alpha\hct'\beta\ =\ (\refl{a}\leftwhisker \beta)\ct (\alpha\rightwhisker \refl{a}) = \beta\ct\alpha.
\]
\index{interchange law}%
But, in general, the two ways of defining horizontal composition agree, $\alpha\hct\beta = \alpha\hct'\beta$, as we can see by induction on $\alpha$ and $\beta$ and then on the two remaining 1-paths, to reduce everything to reflexivity.
Thus we have
\[\alpha \ct \beta = \alpha\hct\beta = \alpha\hct'\beta = \beta\ct\alpha.
\qedhere
\]
\end{proof}
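The whiskering operations and horizontal composition used in this proof can also be written down directly.
The Lean sketch below uses our own names (\texttt{whiskerR}, \texttt{whiskerL}, \texttt{hcomp}); the left whisker is defined here by an additional induction on the 2-path, slightly differently from the induction on $q$ chosen above, and since Lean's identity type is proof-irrelevant the Eckmann--Hilton statement itself is trivial there, so only the constructions are shown.
\begin{verbatim}
universe u

-- Right whiskering, by induction on r.  In Lean, p.trans rfl reduces to p,
-- so alpha already has the required type in the reflexivity case.
def whiskerR {A : Type u} {a b c : A} {p q : a = b}
    (alpha : p = q) (r : b = c) : p.trans r = q.trans r := by
  cases r; exact alpha

-- Left whiskering, here by induction on q and on the 2-path beta.
def whiskerL {A : Type u} {a b c : A} (q : a = b) {r s : b = c}
    (beta : r = s) : q.trans r = q.trans s := by
  cases q; cases beta; rfl

-- Horizontal composition, as in the proof above.
def hcomp {A : Type u} {a b c : A} {p q : a = b} {r s : b = c}
    (alpha : p = q) (beta : r = s) : p.trans r = q.trans s :=
  (whiskerR alpha r).trans (whiskerL q beta)
\end{verbatim}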
The foregoing fact, which is known as the \emph{Eckmann--Hilton argument}, comes from classical homotopy theory, and indeed it is used in \cref{cha:homotopy} below to show that the higher homotopy groups of a type are always abelian\index{group!abelian} groups.
The whiskering and horizontal composition operations defined in the proof are also a general part of the $\infty$-groupoid structure of types.
They satisfy their own laws (up to higher homotopy), such as
\[ \alpha \rightwhisker (p\ct q) = (\alpha \rightwhisker p) \rightwhisker q \]
and so on.
From now on, we trust the reader to apply path induction whenever needed to define further operations of this sort and verify their properties.
As this example suggests, the algebra of higher path types is much more intricate than just the groupoid-like structure at each level; the levels interact to give many further operations and laws, as in the study of iterated loop spaces in homotopy theory.
Indeed, as in classical homotopy theory, we can make the following general definitions:
\begin{defn} \label{def:pointedtype}
A \define{pointed type}
\indexsee{pointed!type}{type, pointed}%
\indexdef{type!pointed}%
$(A,a)$ is a type $A:\type$ together with a point $a:A$, called its \define{basepoint}.
\indexdef{basepoint}%
We write $\pointed{\type} \defeq \sm{A:\type} A$ for the type of pointed types in the universe $\type$.
\end{defn}
\begin{defn} \label{def:loopspace}
Given a pointed type $(A,a)$, we define the \define{loop space}
\indexdef{loop space}%
of $(A,a)$ to be the following pointed type:
\[\Omega(A,a)\defeq ((\id[A]aa),\refl a).\]
An element of it will be called a \define{loop}\indexdef{loop} at $a$.
For $n:\N$, the \define{$n$-fold iterated loop space} $\Omega^{n}(A,a)$
\indexdef{loop space!iterated}%
\indexsee{loop space!n-fold@$n$-fold}{loop space, iterated}%
of a pointed type $(A,a)$ is defined recursively by:
\begin{align*}
\Omega^0(A,a)&\defeq(A,a)\\
\Omega^{n+1}(A,a)&\defeq\Omega^n(\Omega(A,a)).
\end{align*}
An element of it will be called an \define{$n$-loop}
\indexdef{loop!n-@$n$-}%
\indexsee{n-loop@$n$-loop}{loop, $n$-}%
or an \define{$n$-dimensional loop}
\indexsee{loop!n-dimensional@$n$-dimensional}{loop, $n$-}%
\indexsee{n-dimensional loop@$n$-dimensional loop}{loop, $n$-}%
at $a$.
\end{defn}
We will return to iterated loop spaces in \cref{cha:hlevels,cha:hits,cha:homotopy}.
\index{.infinity-groupoid@$\infty$-groupoid!structure of a type|)}%
\index{type!identity|)}
\index{path|)}%
\section{Functions are functors}
\label{sec:functors}
\index{function|(}%
\index{functoriality of functions in type theory@``functoriality'' of functions in type theory}%
Now we wish to establish that functions $f:A\to B$ behave functorially on paths.
In traditional type theory, this is equivalently the statement that functions respect equality.
\index{continuity of functions in type theory@``continuity'' of functions in type theory}%
Topologically, this corresponds to saying that every function is ``continuous'', i.e.\ preserves paths.
\begin{lem}\label{lem:map}
Suppose that $f:A\to B$ is a function.
Then for any $x,y:A$ there is an operation
\begin{equation*}
\apfunc f : (\id[A] x y) \to (\id[B] {f(x)} {f(y)}).
\end{equation*}
Moreover, for each $x:A$ we have $\apfunc{f}(\refl{x})\jdeq \refl{f(x)}$.
\indexdef{application!of function to a path}%
\indexdef{path!application of a function to}%
\indexdef{function!application to a path of}%
\indexdef{action!of a function on a path}%
\end{lem}
The notation $\apfunc f$ can be read either as the \underline{ap}plication of $f$ to a path, or as the \underline{a}ction on \underline{p}aths of $f$.
\begin{proof}[First proof]
Let $D:\prd{x,y:A} (x=y) \to \type$ be the type family defined by
\[D(x,y,p)\defeq (f(x)= f(y)).\]
Then we have
\begin{equation*}
d\defeq\lam{x} \refl{f(x)}:\prd{x:A} D(x,x,\refl{x}).
\end{equation*}
By path induction, we obtain $\apfunc f : \prd{x,y:A} (x=y) \to (f(x)=f(y))$.
The computation rule implies $\apfunc f({\refl{x}})\jdeq\refl{f(x)}$ for each $x:A$.
\end{proof}
\begin{proof}[Second proof]
By induction, it suffices to assume $p$ is $\refl{x}$.
In this case, we may define $\apfunc f(p) \defeq \refl{f(x)}:f(x)= f(x)$.
\end{proof}
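In Lean the corresponding operation is essentially the library function \texttt{congrArg}; a direct transcription of the second proof, with our own name \texttt{ap}, looks as follows:
\begin{verbatim}
universe u v

-- Action on paths of f, by path induction; on reflexivity the value is
-- rfl, so the computation rule of the lemma holds by definition.
def ap {A : Type u} {B : Type v} (f : A -> B) {x y : A} : x = y -> f x = f y
  | rfl => rfl
\end{verbatim}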
We will often write $\apfunc f (p)$ as simply $\ap f p$.
This is strictly speaking ambiguous, but generally no confusion arises.
It matches the common convention in category theory of using the same symbol for the application of a functor to objects and to morphisms.
We note that $\apfunc{}$ behaves functorially, in all the ways that one might expect.
\begin{lem}\label{lem:ap-functor}
For functions $f:A\to B$ and $g:B\to C$ and paths $p:\id[A]xy$ and $q:\id[A]yz$, we have:
\begin{enumerate}
\item $\apfunc f(p\ct q) = \apfunc f(p) \ct \apfunc f(q)$.\label{item:apfunctor-ct}
\item $\apfunc f(\opp p) = \opp{\apfunc f (p)}$.\label{item:apfunctor-opp}
\item $\apfunc g (\apfunc f(p)) = \apfunc{g\circ f} (p)$.\label{item:apfunctor-compose}
\item $\apfunc {\idfunc[A]} (p) = p$.
\end{enumerate}
\end{lem}
\begin{proof}
Left to the reader.
\end{proof}
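Since the proof is left to the reader, we record one possible mechanical verification, once again by reducing the paths to reflexivity.
The Lean examples below use the library function \texttt{congrArg} in the role of $\apfunc{}$; as before, in Lean these equations are also immediate for the unrelated reason of proof irrelevance, so they serve only to display the inductions.
\begin{verbatim}
universe u v w
variable {A : Type u} {B : Type v} {C : Type w} {x y z : A}

example (f : A -> B) (p : x = y) (q : y = z) :
    congrArg f (p.trans q) = (congrArg f p).trans (congrArg f q) := by
  cases p; cases q; rfl
example (f : A -> B) (p : x = y) :
    congrArg f p.symm = (congrArg f p).symm := by
  cases p; rfl
example (f : A -> B) (g : B -> C) (p : x = y) :
    congrArg g (congrArg f p) = congrArg (fun a => g (f a)) p := by
  cases p; rfl
example (p : x = y) : congrArg (fun a : A => a) p = p := by
  cases p; rfl
\end{verbatim}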
\index{function|)}%
As was the case for the equalities in \cref{thm:omg}, those in \cref{lem:ap-functor} are themselves paths, which satisfy their own coherence laws (which can be proved in the same way), and so on.
\section{Type families are fibrations}
\label{sec:fibrations}
\index{type!family of|(}%
\index{transport|(defstyle}%
Since \emph{dependently typed} functions are essential in type theory, we will also need a version of \cref{lem:map} for these.
However, this is not quite so simple to state, because if $f:\prd{x:A} B(x)$ and $p:x=y$, then $f(x):B(x)$ and $f(y):B(y)$ are elements of distinct types, so that \emph{a priori} we cannot even ask whether they are equal.
The missing ingredient is that $p$ itself gives us a way to relate the types $B(x)$ and $B(y)$.
\begin{lem}[Transport]\label{lem:transport}
Suppose that $P$ is a type family over $A$ and that $p:\id[A]xy$.
Then there is a function $\transf{p}:P(x)\to P(y)$.
\end{lem}
\begin{proof}[First proof]
Let $D:\prd{x,y:A} (\id{x}{y}) \to \type$ be the type family defined by
\[D(x,y,p)\defeq P(x)\to P(y).\]
Then we have the function
\begin{equation*}
d\defeq\lam{x} \idfunc[P(x)]:\prd{x:A} D(x,x,\refl{x}),
\end{equation*}
so that the induction principle gives us $\indid{A}(D,d,x,y,p):P(x)\to P(y)$ for $p:x= y$, which we define to be $\transf p$.
\end{proof}
\begin{proof}[Second proof]
By induction, it suffices to assume $p$ is $\refl x$.
But in this case, we can take $\transf{(\refl x)}:P(x)\to P(x)$ to be the identity function.
\end{proof}
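A Lean transcription of the second proof might read as follows; the name \texttt{transport} is ours (the standard library expresses the same operation through \texttt{Eq.rec}), and the family is allowed to land in an arbitrary universe.
\begin{verbatim}
universe u v

-- Transport along p in the family P, by path induction; transporting
-- along reflexivity is the identity function, judgmentally.
def transport {A : Type u} (P : A -> Sort v) {x y : A} : x = y -> P x -> P y
  | rfl, u => u
\end{verbatim}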
Sometimes, it is necessary to notate the type family $P$ in which the transport operation happens.
In this case, we may write
\[\transfib P p \blank : P(x) \to P(y).\]
Recall that a type family $P$ over a type $A$ can be seen as a property of elements of $A$, which holds at $x$ in $A$ if $P(x)$ is inhabited.
Then the transportation lemma says that $P$ respects equality, in the sense that if $x$ is equal to $y$, then $P(x)$ holds if and only if $P(y)$ holds.
In fact, we will see later on that if $x=y$ then actually $P(x)$ and $P(y)$ are \emph{equivalent}.
Topologically, the transportation lemma can be viewed as a ``path lifting'' operation in a fibration.
\index{fibration}%
\indexdef{total!space}%
We think of a type family $P:A\to \type$ as a \emph{fibration} with base space $A$, with $P(x)$ being the fiber over $x$, and with $\sm{x:A}P(x)$ being the \define{total space} of the fibration, with first projection $\sm{x:A}P(x)\to A$.
The defining property of a fibration is that given a path $p:x=y$ in the base space $A$ and a point $u:P(x)$ in the fiber over $x$, we may lift the path $p$ to a path in the total space starting at $u$ (and this lifting can be done continuously).
The point $\trans p u$ can be thought of as the other endpoint of this lifted path.
We can also define the path itself in type theory:
\begin{lem}[Path lifting property]\label{thm:path-lifting}
\indexdef{path!lifting}%
\indexdef{lifting!path}%
Let $P:A\to\type$ be a type family over $A$ and assume we have $u:P(x)$ for some $x:A$.
Then for any $p:x=y$, we have
\begin{equation*}
\mathsf{lift}(u,p):(x,u)=(y,\trans{p}{u})
\end{equation*}
in $\sm{x:A}P(x)$, such that $\ap{\proj1}{\mathsf{lift}(u,p)} = p$.
\end{lem}
\begin{proof}
Left to the reader.
We will prove a more general theorem in \cref{sec:compute-sigma}.
\end{proof}
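For the curious reader, here is one way the lifting could be written down in Lean, with \texttt{Sigma} playing the role of the total space; the \texttt{transport} sketch from above is repeated so that the snippet stands alone, and the accompanying statement about the image of the lift under $\proj1$ is omitted.
\begin{verbatim}
universe u v

def transport {A : Type u} (P : A -> Sort v) {x y : A} : x = y -> P x -> P y
  | rfl, u => u

-- The canonical lift of p starting at u, as a path in the total space
-- from (x, u) to (y, transport P p u).
def lift {A : Type u} {P : A -> Type v} {x y : A} (u : P x) :
    (p : x = y) -> (Sigma.mk x u : Sigma P) = Sigma.mk y (transport P p u)
  | rfl => rfl
\end{verbatim}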
In classical homotopy theory, a fibration is defined as a map for which there \emph{exist} liftings of paths; while in contrast, we have just shown that in type theory, every type family comes with a \emph{specified} ``path-lifting function''.
This accords with the philosophy of constructive mathematics, according to which we cannot show that something exists except by exhibiting it.
\index{continuity of functions in type theory@``continuity'' of functions in type theory}%
It also ensures automatically that the path liftings are chosen ``continuously'', since as we have seen, all functions in type theory are ``continuous''.
\begin{rmk}
Although we may think of a type family $P:A\to \type$ as like a fibration, it is generally not a good idea to say things like ``the fibration $P:A\to\type$'', since this sounds like we are talking about a fibration with base $\type$ and total space $A$.
To repeat, when a type family $P:A\to \type$ is regarded as a fibration, the base is $A$ and the total space is $\sm{x:A} P(x)$.
We may also occasionally use other topological terminology when speaking about type families.
For instance, we may refer to a dependent function $f:\prd{x:A} P(x)$ as a \define{section}
\indexdef{section!of a type family}%
of the fibration $P$, and we may say that something happens \define{fiberwise}
\indexdef{fiberwise}%
if it happens for each $P(x)$.
For instance, a section $f:\prd{x:A} P(x)$ shows that $P$ is ``fiberwise inhabited''.
\end{rmk}
\index{function!dependent|(}
Now we can prove the dependent version of \cref{lem:map}.
The topological intuition is that given $f:\prd{x:A} P(x)$ and a path $p:\id[A]xy$, we ought to be able to apply $f$ to $p$ and obtain a path in the total space of $P$ which ``lies over'' $p$, as shown below.
\begin{center}
\begin{tikzpicture}[yscale=.5,xscale=2]
\draw (0,0) arc (-90:170:8ex) node[anchor=south east] {$A$} arc (170:270:8ex);
\draw (0,6) arc (-90:170:8ex) node[anchor=south east] {$\sm{x:A} P(x)$} arc (170:270:8ex);
\draw[->] (0,5.8) -- node[auto] {$\proj1$} (0,3.2);
\node[circle,fill,inner sep=1pt,label=left:{$x$}] (b1) at (-.5,1.4) {};
\node[circle,fill,inner sep=1pt,label=right:{$y$}] (b2) at (.5,1.4) {};
\draw[decorate,decoration={snake,amplitude=1}] (b1) -- node[auto,swap] {$p$} (b2);
\node[circle,fill,inner sep=1pt,label=left:{$f(x)$}] (b1) at (-.5,7.2) {};
\node[circle,fill,inner sep=1pt,label=right:{$f(y)$}] (b2) at (.5,7.2) {};
\draw[decorate,decoration={snake,amplitude=1}] (b1) -- node[auto] {$f(p)$} (b2);
\end{tikzpicture}
\end{center}
We \emph{can} obtain such a thing from \cref{lem:map}.
Given $f:\prd{x:A} P(x)$, we can define a non-dependent function $f':A\to \sm{x:A} P(x)$ by setting $f'(x)\defeq (x,f(x))$, and then consider $\ap{f'}{p} : f'(x) = f'(y)$.
Since $\proj1 \circ f' \jdeq \idfunc[A]$, by \cref{lem:ap-functor} we have $\ap{\proj1}{\ap{f'}{p}} = p$; thus $\ap{f'}{p}$ does ``lie over'' $p$ in this sense.
However, it is not obvious from the \emph{type} of $\ap{f'}{p}$ that it lies over any specific path in $A$ (in this case, $p$), which is sometimes important.
The solution is to use the transport lemma.
By \cref{thm:path-lifting} we have a canonical path $\mathsf{lift}(u,p)$ from $(x,u)$ to $(y,\trans p u)$ which lies over $p$.
Thus, any path from $u:P(x)$ to $v:P(y)$ lying over $p$ should factor through $\mathsf{lift}(u,p)$, essentially uniquely, by a path from $\trans p u$ to $v$ lying entirely in the fiber $P(y)$.
Thus, up to equivalence, it makes sense to define ``a path from $u$ to $v$ lying over $p:x=y$'' to mean a path $\trans p u = v$ in $P(y)$.
And, indeed, we can show that dependent functions produce such paths.
\begin{lem}[Dependent map]\label{lem:mapdep}
\indexdef{application!of dependent function to a path}%
\indexdef{path!application of a dependent function to}%
\indexdef{function!dependent!application to a path of}%
\indexdef{action!of a dependent function on a path}%
Suppose $f:\prd{x: A} P(x)$; then we have a map
\[\apdfunc f : \prd{p:x=y}\big(\id[P(y)]{\trans p{f(x)}}{f(y)}\big).\]
\end{lem}
\begin{proof}[First proof]
Let $D:\prd{x,y:A} (\id{x}{y}) \to \type$ be the type family defined by
\begin{equation*}
D(x,y,p)\defeq \trans p {f(x)}= f(y).
\end{equation*}
Then $D(x,x,\refl{x})$ is $\trans{(\refl{x})}{f(x)}= f(x)$.
But since $\trans{(\refl{x})}{f(x)}\jdeq f(x)$, we get that $D(x,x,\refl{x})\jdeq (f(x)= f(x))$.
Thus, we find the function
\begin{equation*}
d\defeq\lam{x} \refl{f(x)}:\prd{x:A} D(x,x,\refl{x})
\end{equation*}
and now path induction gives us $\apdfunc f(p):\trans p{f(x)}= f(y)$ for each $p:x= y$.
\end{proof}
\begin{proof}[Second proof]
By induction, it suffices to assume $p$ is $\refl x$.
But in this case, the desired equation is $\trans{(\refl{x})}{f(x)}\jdeq f(x)$, which holds judgmentally.
\end{proof}
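A Lean version of the second proof is equally short; we repeat the \texttt{transport} sketch so that the snippet is self-contained, and the name \texttt{apd} is again our own.
\begin{verbatim}
universe u v

def transport {A : Type u} (P : A -> Sort v) {x y : A} : x = y -> P x -> P y
  | rfl, u => u

-- Dependent action on paths: apd f p is a path in P y from the transported
-- value of f x to f y; on reflexivity it is rfl, judgmentally.
def apd {A : Type u} {P : A -> Sort v} (f : (x : A) -> P x) {x y : A} :
    (p : x = y) -> transport P p (f x) = f y
  | rfl => rfl
\end{verbatim}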
We will refer generally to paths which ``lie over other paths'' in this sense as \emph{dependent paths}.
\indexsee{dependent!path}{path, dependent}%
\index{path!dependent}%
They will play an increasingly important role starting in \cref{cha:hits}.
In \cref{sec:computational} we will see that for a few particular kinds of type families, there are equivalent ways to represent the notion of dependent paths that are sometimes more convenient.
Now recall from \cref{sec:pi-types} that a non-dependently typed function $f:A\to B$ is just the special case of a dependently typed function $f:\prd{x:A} P(x)$ when $P$ is a constant type family, $P(x) \defeq B$.
In this case, $\apdfunc{f}$ and $\apfunc{f}$ are closely related, because of the following lemma:
\begin{lem}\label{thm:trans-trivial}
If $P:A\to\type$ is defined by $P(x) \defeq B$ for a fixed $B:\type$, then for any $x,y:A$ and $p:x=y$ and $b:B$ we have a path
\[ \transconst Bpb : \transfib P p b = b. \]
\end{lem}
\begin{proof}[First proof]
Fix a $b:B$, and let $D:\prd{x,y:A} (\id{x}{y}) \to \type$ be the type family defined by
\[ D(x,y,p) \defeq (\transfib P p b = b). \]
Then $D(x,x,\refl x)$ is $(\transfib P{\refl{x}}{b} = b)$, which is judgmentally equal to $(b=b)$ by the computation rule for transporting.
Thus, we have the function
\[ d \defeq \lam{x} \refl{b} : \prd{x:A} D(x,x,\refl x). \]
Now path induction gives us an element of
\narrowequation{
\prd{x,y:A}{p:x=y}(\transfib P p b = b),}
as desired.
\end{proof}
\begin{proof}[Second proof]
By induction, it suffices to assume $y$ is $x$ and $p$ is $\refl x$.
But $\transfib P {\refl x} b \jdeq b$, so in this case what we have to prove is $b=b$, and we have $\refl{b}$ for this.
\end{proof}
Thus, for any $x,y:A$ and $p:x=y$ and $f:A\to B$, by concatenating with $\transconst Bp{f(x)}$ and its inverse, respectively, we obtain functions
\begin{align}
\big(f(x) = f(y)\big) &\to \big(\trans{p}{f(x)} = f(y)\big)\label{eq:ap-to-apd}
\qquad\text{and} \\
\big(\trans{p}{f(x)} = f(y)\big) &\to \big(f(x) = f(y)\big).\label{eq:apd-to-ap}
\end{align}
In fact, these functions are inverse equivalences (in the sense to be introduced in \cref{sec:basics-equivalences}), and they relate $\apfunc f (p)$ to $\apdfunc f (p)$.
\begin{lem}\label{thm:apd-const}
For $f:A\to B$ and $p:\id[A]xy$, we have
\[ \apdfunc f(p) = \transconst B p{f(x)} \ct \apfunc f (p). \]
\end{lem}
\begin{proof}[First proof]
Let $D:\prd{x,y:A} (\id xy) \to \type$ be the type family defined by
\[ D(x,y,p) \defeq \big(\apdfunc f (p) = \transconst Bp{f(x)} \ct \apfunc f (p)\big). \]
Thus, we have
\[D(x,x,\refl x) \jdeq \big(\apdfunc f (\refl x) = \transconst B{\refl x}{f(x)} \ct \apfunc f ({\refl x})\big).\]
But by definition, all three paths appearing in this type are $\refl{f(x)}$, so we have
\[ \refl{\refl{f(x)}} : D(x,x,\refl x). \]
Thus, path induction gives us an element of $\prd{x,y:A}{p:x=y} D(x,y,p)$, which is what we wanted.
\end{proof}
\begin{proof}[Second proof]
By induction, it suffices to assume $y$ is $x$ and $p$ is $\refl x$.
In this case, what we have to prove is $\refl{f(x)} = \refl{f(x)} \ct \refl{f(x)}$, which is true judgmentally.
\end{proof}
Because the types of $\apdfunc{f}$ and $\apfunc{f}$ are different, it is often clearer to use different notations for them.
% We may sometimes use a notation $\apd f p$ for $\apdfunc{f}(p)$, which is similar to the notation $\ap f p$ for $\apfunc{f}(p)$.
\index{function!dependent|)}%
At this point, we hope the reader is starting to get a feel for proofs by induction on identity types.
From now on we stop giving both styles of proofs, allowing ourselves to use whatever is most clear and convenient (and often the second, more concise one).
Here are a few other useful lemmas about transport; we leave it to the reader to give the proofs (in either style).
\begin{lem}\label{thm:transport-concat}
Given $P:A\to\type$ with $p:\id[A]xy$ and $q:\id[A]yz$ while $u:P(x)$, we have
\[ \trans{q}{\trans{p}{u}} = \trans{(p\ct q)}{u}. \]
\end{lem}
\begin{lem}\label{thm:transport-compose}
For a function $f:A\to B$ and a type family $P:B\to\type$, and any $p:\id[A]xy$ and $u:P(f(x))$, we have
\[ \transfib{P\circ f}{p}{u} = \transfib{P}{\apfunc f(p)}{u}. \]
\end{lem}
\begin{lem}\label{thm:ap-transport}
For $P,Q:A\to \type$ and a family of functions $f:\prd{x:A} P(x)\to Q(x)$, and any $p:\id[A]xy$ and $u:P(x)$, we have
\[ \transfib{Q}{p}{f_x(u)} = f_y(\transfib{P}{p}{u}). \]
\end{lem}
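As with the previous lemmas, each of these can be verified by reducing all the paths in sight to reflexivity.
One possible Lean transcription, reusing the \texttt{transport} sketch from before and \texttt{congrArg} in the role of $\apfunc{}$, is:
\begin{verbatim}
universe u v w

def transport {A : Type u} (P : A -> Sort v) {x y : A} : x = y -> P x -> P y
  | rfl, u => u

-- Transporting along p and then along q is transporting along p
-- concatenated with q.
example {A : Type u} (P : A -> Sort v) {x y z : A}
    (p : x = y) (q : y = z) (u : P x) :
    transport P q (transport P p u) = transport P (p.trans q) u := by
  cases p; cases q; rfl

-- Transport in a family pulled back along f.
example {A : Type u} {B : Type v} (f : A -> B) (P : B -> Sort w)
    {x y : A} (p : x = y) (u : P (f x)) :
    transport (fun a => P (f a)) p u = transport P (congrArg f p) u := by
  cases p; rfl

-- Transport commutes with fiberwise functions.
example {A : Type u} (P Q : A -> Sort v) (f : (x : A) -> P x -> Q x)
    {x y : A} (p : x = y) (u : P x) :
    transport Q p (f x u) = f y (transport P p u) := by
  cases p; rfl
\end{verbatim}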
\index{type!family of|)}%
\index{transport|)}
\section{Homotopies and equivalences}
\label{sec:basics-equivalences}
\index{homotopy|(defstyle}%
So far, we have seen how the identity type $\id[A]xy$ can be regarded as a type of \emph{identifications}, \emph{paths}, or \emph{equivalences} between two elements $x$ and $y$ of a type $A$.
Now we investigate the appropriate notions of ``identification'' or ``sameness'' between \emph{functions} and between \emph{types}.
In \cref{sec:compute-pi,sec:compute-universe}, we will see that homotopy type theory allows us to identify these with instances of the identity type, but before we can do that we need to understand them in their own right.
Traditionally, we regard two functions as the same if they take equal values on all inputs.
Under the propositions-as-types interpretation, this suggests that two functions $f$ and $g$ (perhaps dependently typed) should be the same if the type $\prd{x:A} (f(x)=g(x))$ is inhabited.
Under the homotopical interpretation, this dependent function type consists of \emph{continuous} paths or \emph{functorial} equivalences, and thus may be regarded as the type of \emph{homotopies} or of \emph{natural isomorphisms}.\index{isomorphism!natural}%
We will adopt the topological terminology for this.
\begin{defn} \label{defn:homotopy}
Let $f,g:\prd{x:A} P(x)$ be two sections of a type family $P:A\to\type$.
A \define{homotopy}
from $f$ to $g$ is a dependent function of type
\begin{equation*}
(f\htpy g) \defeq \prd{x:A} (f(x)=g(x)).
\end{equation*}
\end{defn}
Note that a homotopy is not the same as an identification $(f=g)$.
However, in \cref{sec:compute-pi} we will introduce an axiom making homotopies and identifications ``equivalent''.
The following proofs are left to the reader.
\begin{lem}\label{lem:homotopy-props}
Homotopy is an equivalence relation on each function type $A\to B$.
That is, we have elements of the types
\begin{gather*}
\prd{f:A\to B} (f\htpy f)\\
\prd{f,g:A\to B} (f\htpy g) \to (g\htpy f)\\
\prd{f,g,h:A\to B} (f\htpy g) \to (g\htpy h) \to (f\htpy h).
\end{gather*}
\end{lem}
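All three closure properties follow by applying the corresponding operations on paths pointwise; the following Lean sketch, stated for sections of an arbitrary family (so that the non-dependent case is the special case of a constant family) and with names of our own choosing, indicates the idea.
\begin{verbatim}
universe u v

-- Pointwise equality of sections of a family P, i.e. homotopy.
def Htpy {A : Type u} {P : A -> Sort v} (f g : (x : A) -> P x) : Prop :=
  (x : A) -> f x = g x

theorem Htpy.refl {A : Type u} {P : A -> Sort v} (f : (x : A) -> P x) :
    Htpy f f :=
  fun _ => rfl

theorem Htpy.symm {A : Type u} {P : A -> Sort v} {f g : (x : A) -> P x}
    (H : Htpy f g) : Htpy g f :=
  fun x => (H x).symm

theorem Htpy.trans {A : Type u} {P : A -> Sort v} {f g h : (x : A) -> P x}
    (H : Htpy f g) (K : Htpy g h) : Htpy f h :=
  fun x => (H x).trans (K x)
\end{verbatim}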
% This is judgmental and is \cref{ex:composition}.
% \begin{lem}