\documentclass{article}
% 4200 words
% Reference checking
% \usepackage{refcheck}
% Margins and page size
\usepackage[letterpaper, margin=1in]{geometry}
% Babel
\usepackage[english]{babel}
% Smart quotes
\usepackage[autostyle, english = american]{csquotes}
\MakeOuterQuote{"}
% Package for matrices
\usepackage{amsmath}
% References
\usepackage{cite}
% Indent the first paragraph of each section
\usepackage{indentfirst}
% Links
\usepackage{hyperref}
% Set fonts
\usepackage{fontspec}
\setmainfont[Ligatures=TeX]{Times New Roman}
\setmonofont{Fira Code}[Scale=MatchLowercase]
% Figure captions
\usepackage{subcaption}
\usepackage[font=small,labelfont=bf]{caption}
% Image figures
\usepackage{graphicx}
\graphicspath{ {./img/} }
% Colors for code figures
\usepackage{color}
\definecolor{codegreen}{rgb}{0, 0.6, 0}
\definecolor{codegray}{rgb}{0.5, 0.5, 0.5}
\definecolor{codepurple}{rgb}{0.58, 0, 0.82}
\definecolor{backcolour}{rgb}{0.97, 0.97, 0.97}
% Define code figures
\usepackage{listings}
\lstdefinestyle{codefigure}{
backgroundcolor=\color{backcolour},
basicstyle=\ttfamily,
commentstyle=\color{codegreen},
keywordstyle=\color{magenta},
numberstyle=\ttfamily\small\color{codegray},
stringstyle=\color{codepurple},
breakatwhitespace=false,
breaklines=true,
keepspaces=true,
numbers=left,
numbersep=5pt,
showspaces=false,
showstringspaces=false,
showtabs=false,
tabsize=2
}
% Use code format
\lstset{
style=codefigure
}
% Set IEEE header and footer
\usepackage{fancyhdr}
\pagestyle{fancy}
\fancyhf{}
\renewcommand{\headrulewidth}{0pt}
\fancyfoot[R]{\thepage}
% Create title
\usepackage{titling}
\title{\Large\textbf{3D Graphics Rendering with OpenGL}}
\author{Jerred Shepherd}
% Document start
\begin{document}
% Title Page
\begin{titlingpage}
\maketitle
\begin{abstract}
Computer graphics are an essential component of any consumer-facing computer. Rendering computer graphics efficiently requires specialized hardware in the form of graphics processing units (GPUs). GPUs work differently from the CPUs that most programmers are familiar with, largely because of their approach to parallelization. Graphics APIs have been created to help programmers write performant and portable code for GPUs. This paper introduces the core concepts of 3D graphics rendering and OpenGL, a popular and widely supported graphics API.
\end{abstract}
\end{titlingpage}
% Body
\section{Introduction}
Computers have become widespread and touch many aspects of modern life. From new forms of entertainment such as video games to artificial intelligence, they have revolutionized the way we work, learn, and play. Computer graphics in particular have been a driving force behind the adoption of computers because they make computers easy to interact with. Programmers use computer graphics to display user interfaces, render video games, and even animate entire films. Every day millions of people interact with interfaces on phones, laptops, and other devices that let them focus on the work they are doing rather than on how they communicate with their device.
Plenty of tools for manipulating 2D and 3D graphics already exist, such as 3D object editors, image-processing applications, UI libraries, and game engines. These tools let programmers get work done quickly, but they hide how that work is actually accomplished. Without this understanding it is harder for a programmer to debug or optimize their code. By learning the lower-level concepts of computer graphics, one can make better use of higher-level abstractions and tools.
This paper intends to introduce the reader to the broad concepts of graphics processing: the purpose of graphics processing units, why graphics APIs exist, and an introduction to a commonly used cross-platform graphics API --- OpenGL. OpenGL is discussed in enough depth that the reader gains an overview of its concepts and can create a simple OpenGL program. This paper focuses on OpenGL version 3.2, which is widely supported and very similar to the most recent version of OpenGL, 4.6.
\section{Background}
Computer graphics is predisposed towards parallel processing due to the repetitive and independent nature of the calculations it requires \cite[p.~4]{sellers2016}. Graphics processing units (GPUs) were created to meet the unique hardware requirements of computer graphics and are fundamentally different from traditional central processing units (CPUs). While CPUs generally have a few very fast cores, GPUs take the opposite approach and have a large number of slower cores. Figure \ref{fig:architecture} illustrates this difference in architecture, showing the massive disparity in the number of arithmetic logic units (ALUs) between CPUs and GPUs. These ALUs give GPUs an edge over CPUs when performing mathematical operations and are a core part of the parallel processing capability that GPUs possess.
\begin{figure}[h]
\centering
\begin{subfigure}[h]{0.39\textwidth}
\includegraphics[width=\textwidth]{cpu}
\caption{CPU Architecture.}
\label{fig:cpu}
\end{subfigure}
\begin{subfigure}[h]{0.39\textwidth}
\includegraphics[width=\textwidth]{gpu}
\caption{GPU Architecture.}
\label{fig:gpu}
\end{subfigure}
\caption{A comparison of typical CPU and GPU architectures \cite{larkin2016}.}
\label{fig:architecture}
\end{figure}
A difference also exists between the two in their approach to threading. An application running on a CPU generally has only a few threads due to the high cost of thread creation and context switches. On a GPU, thread creation and context switching are much cheaper, and tens of thousands of threads are often required to fully reach the hardware's processing potential \cite{larkin2016}. Despite the slower performance of an individual GPU core, GPUs easily outperform CPUs in tasks related to graphics processing because of their parallel processing capabilities. Figure \ref{fig:performance} shows the historical theoretical performance gap between CPUs and GPUs in giga floating-point operations per second (GFLOP/s), a common measure of graphical computation speed. The graph shows that GPUs have a clear performance advantage, with the gap growing exponentially over time.
\begin{figure}[h]
\centering
\includegraphics[height=7cm]{cpu-vs-gpu}
\caption{A performance comparison of CPUs and GPUs in theoretical peak GFLOP/s over 14 years \cite{galloy2013}.}
\label{fig:performance}
\end{figure}
Using GPUs effectively is especially important when rendering user interfaces, which should generally remain responsive at all times. Because of the architectural differences between GPUs and CPUs, GPUs are programmed differently from CPUs so that their peak performance can be reached. Rather than only writing high-level code that is compiled into assembly, programmers also use APIs exposed by the manufacturer of the graphics card. GPU manufacturers may support proprietary APIs, which is common on cards built for systems such as game consoles and arcade machines, or they may implement standardized APIs such as OpenGL, Metal, Vulkan, or DirectX.
While using a proprietary API may allow a programmer to get more performance out of the hardware they are targeting, it severely limits the application's portability. A program written to run on a GPU using OpenGL should be compatible with any other GPU implementing the same version of OpenGL. These APIs let programmers focus on the code they are writing rather than the hardware the code will execute on. They are designed to be low-level enough that maximum performance can be achieved, yet high-level enough that programmers can use them easily \cite[p.~4]{sellers2016}.
\section{Graphics Rendering}
Computer displays, such as monitors, use a 2D matrix of pixels to display images. The dimensions of this matrix are referred to as the display's resolution. Traditionally each pixel of a color display has a red, green, and blue value, which together determine the color of the pixel. Graphics rendering is the process of determining the color of these pixels so that objects such as text, images, and user interfaces can be conveyed \cite{mckesson2018}. The higher the resolution of a display, the more detail can be shown in the elements that are drawn. For reference, displays commonly range from standard HD at 1280x720 to very high resolutions such as 3840x2160, referred to as 4K. Standard HD is becoming less common while 4K is slowly gaining market share, especially on high-end monitors, TVs, and other consumer electronics.
Graphics rendering is done by positioning primitives such as points, lines, and triangles in a 3D space. After these primitives are defined, their locations are transformed, and they are drawn on the screen through a process called rasterization \cite{mckesson2018}. Figure \ref{fig:triangle-rendering} shows a simple example of the rasterization process. The figure begins by defining a triangle primitive composed of three 3D vectors. The rasterizer then determines which fragments the triangle overlaps through a process called scan conversion; one way to do this is to include a pixel if its center is contained within the element. Rasterization then produces a list of fragments. A fragment is an element that contributes to the final color of a pixel \cite{sellers2016}. The fragments are assigned a color in later steps of the rendering process, and that color may change further when effects such as lighting are taken into account.
Although these three primitives may seem simple and insignificant, they are actually the backbone of graphics rendering. Complex 3D models such as those shown in video games are really just meshes of many small, connected triangles. Squares and other quadrilaterals can be created by connecting two triangles together at their hypotenuses.
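The pixel-center test described above can be sketched in a few lines of Java. This is only an illustration of the idea rather than how a real rasterizer is implemented; actual hardware uses carefully specified fill rules and sub-pixel precision, and the counter-clockwise vertex order is an assumption made for the example.
\begin{lstlisting}[language=Java]
// Illustrative sketch: does a pixel center (px, py) fall inside a 2D triangle?
// Assumes the triangle's vertices are given in counter-clockwise order.
public class Coverage {
    // Signed area test: positive when p lies to the left of the edge a -> b.
    static float edge(float ax, float ay, float bx, float by, float px, float py) {
        return (bx - ax) * (py - ay) - (by - ay) * (px - ax);
    }

    static boolean covers(float x0, float y0, float x1, float y1,
                          float x2, float y2, float px, float py) {
        // The point is inside when it lies on the same side of all three edges.
        return edge(x0, y0, x1, y1, px, py) >= 0
            && edge(x1, y1, x2, y2, px, py) >= 0
            && edge(x2, y2, x0, y0, px, py) >= 0;
    }
}
\end{lstlisting}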
\begin{figure}[h]
\centering
\includegraphics[height=3cm]{triangle-rendering}
\caption{A visualization of how a single 2D triangle is rasterized \cite{mckesson2018}.}
\label{fig:triangle-rendering}
\end{figure}
\subsection{Matrices}
A two-dimensional object can be drawn by simply defining its x and y coordinates, but a three-dimensional object drawn this way will not look correct because the z coordinate has no effect on the size of the object. The z coordinate represents how close a point is to the viewer, with larger values being closer, so as the z coordinate increases the object should also increase in size. This can be fixed by using a projection matrix and matrix multiplication. A projection matrix affects what is visible on the screen and how it is projected onto it \cite{sellers2016}. Each (x, y, z, w) coordinate is multiplied by a 4x4 matrix, with the result being the location of the coordinate when projected onto the display.
Before the matrix can be created, the aspect ratio, field of view (FOV), z-near, and z-far variables must be determined. The aspect ratio is a display's width in pixels divided by its height in pixels. z-near and z-far represent the closest and furthest possible z coordinate values, respectively. The FOV is the angle of the scene that will be rendered on the screen. Figure \ref{fig:projection} illustrates these four variables, and Figure \ref{fig:matrix} shows an example of a 4x4 projection matrix used for 3D rendering; a sketch of building this matrix in code follows the figure. Orthographic projection matrices, used for 2D rendering, also exist but are outside the scope of this paper.
\begin{figure}[h]
\centering
\includegraphics[height=4.5cm]{projection-matrix}
\caption{Projection matrix concepts \cite{hernandez2019}. The camera represents the position of the viewer.}
\label{fig:projection}
\end{figure}
\begin{figure}[h]
\[
\begin{bmatrix}
\dfrac{1}{a\tan\!\left(\frac{\mathit{fov}}{2}\right)} & 0 & 0 & 0 \\[2ex]
0 & \dfrac{1}{\tan\!\left(\frac{\mathit{fov}}{2}\right)} & 0 & 0 \\[2ex]
0 & 0 & \dfrac{-(z_f + z_n)}{z_f - z_n} & \dfrac{-2 z_f z_n}{z_f - z_n} \\[2ex]
0 & 0 & -1 & 0
\end{bmatrix}
\]
\caption{A projection matrix where $a$ is the aspect ratio, $\mathit{fov}$ is the field of view, $z_f$ is z-far, and $z_n$ is z-near \cite{hernandez2019}.}
\label{fig:matrix}
\end{figure}
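As a concrete illustration, the matrix in Figure \ref{fig:matrix} can be built in plain Java as shown below. This is only a sketch: the helper's name is arbitrary, and the flat array is laid out in the column-major order that OpenGL expects.
\begin{lstlisting}[language=Java]
// A sketch of building the perspective projection matrix from the figure above.
// OpenGL expects column-major order: element (row, col) lives at index col * 4 + row.
// fov is the field of view in radians.
static float[] perspective(float fov, float aspect, float zNear, float zFar) {
    float f = (float) (1.0 / Math.tan(fov / 2.0));
    float[] m = new float[16];                     // every other entry stays 0
    m[0]  = f / aspect;                            // row 0, col 0
    m[5]  = f;                                     // row 1, col 1
    m[10] = -(zFar + zNear) / (zFar - zNear);      // row 2, col 2
    m[14] = -(2f * zFar * zNear) / (zFar - zNear); // row 2, col 3
    m[11] = -1f;                                   // row 3, col 2
    return m;
}
\end{lstlisting}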
While a projection matrix defines how all rendered geometry appears, a model matrix can be used to transform individual elements. Like a projection matrix, a model matrix is a 4x4 matrix. It is used to translate, rotate, and scale individual objects on the screen. Model matrices are very important in 3D rendering, as they allow objects to be positioned precisely in a scene without manually recalculating vertex locations. Like the projection matrix, the model matrix is applied by multiplying the position of each vertex by the matrix; the result is the new position of the vertex with translation, rotation, and scaling applied.
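For example, a model matrix that only translates an object can be written directly, again assuming OpenGL's column-major layout. In practice a math library such as JOML, which is commonly used alongside lwjgl, would build the full translate-rotate-scale matrix.
\begin{lstlisting}[language=Java]
// A minimal model matrix that translates by (x, y, z), in column-major order.
// Rotation and scaling would additionally modify the upper-left 3x3 block.
static float[] translation(float x, float y, float z) {
    return new float[] {
        1, 0, 0, 0,   // column 0
        0, 1, 0, 0,   // column 1
        0, 0, 1, 0,   // column 2
        x, y, z, 1    // column 3 holds the translation
    };
}
\end{lstlisting}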
\section{OpenGL}
OpenGL is both a specification and a graphics API that is implemented on most modern graphics cards. The specification defines how each API call should affect the graphics hardware. Like other graphics APIs, this allows programmers to use the underlying hardware portably. OpenGL's development began at a computer hardware manufacturer named Silicon Graphics Inc. (SGI). SGI created its own proprietary graphics API named IrisGL, which was used on its workstations and graphics hardware. IrisGL was cleaned up and formalized into OpenGL, whose first public release was version 1.0 in 1992 \cite{openglwiki2018}. This initial version has been revised many times, with the latest version being 4.6, released on July 31st, 2017 \cite{openglwiki2018}.
The core API bindings are for C/C++, and all other language bindings are based on them \cite{openglwiki2018}. Bindings exist for a wide variety of other languages, such as Python, C\#, and Lisp, and for every major operating system \cite{openglwiki2018}. The Lightweight Java Game Library (lwjgl) is a popular Java library that provides bindings for the OpenGL API, and it is used for the examples in this paper.
Before OpenGL can be used, an OpenGL context (i.e. a desktop window) must be created. Many libraries exist to do this in a cross-platform manner. One popular library that is bundled with lwjgl is the Graphics Library Framework (GLFW), which makes it easy to create an OpenGL context. Figure \ref{fig:glfw-creation} shows the code necessary to initialize GLFW, create an operating system window, and bind an OpenGL context to that window. Once the window and context have been created, OpenGL methods can be called and the window is ready for drawing.
\begin{figure}[h]
\lstinputlisting[language=Java]{code/glfw.java}
\caption{Creating a GLFW window and binding it to OpenGL in Java with lwjgl \cite{lwjgl}.}
\label{fig:glfw-creation}
\end{figure}
\section{VBOs, VAOs, and Uniforms}
OpenGL provides the standard primitives used in graphics rendering --- points, lines, and triangles. All of these primitives are represented as float arrays that can be passed to OpenGL for rendering. These float arrays are stored in vertex buffer objects (VBOs). VBOs are used to allocate memory on the graphics hardware and are bound, along with other objects, to vertex array objects (VAOs). Creating a VBO with OpenGL is a simple task, but it helps to understand how the OpenGL API works before going further.
The OpenGL API has globally shared state, which includes which vertex array object and vertex buffer object are currently bound. Communication between the CPU and GPU is relatively slow, so it should be minimized; this is done by buffering data to be drawn ahead of time and reusing the buffered data whenever possible. Due to the limitations of the Java Virtual Machine (JVM), all memory shared with native libraries must be allocated off the JVM heap \cite{lwjglwiki}. lwjgl provides utilities to easily allocate this memory through the MemoryStack class, and data created on this stack can be sent to the graphics hardware. Unsigned integers, referred to as names, are used to uniquely identify OpenGL objects such as VBOs and VAOs. Many OpenGL functions take these names as arguments or rely on the currently bound VBO or VAO.
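A minimal sketch of this pattern is shown below; the surrounding class, imports, and method name are placeholders for the example, and the buffer target and usage hint are the ones typically used for VBOs. The complete VBO setup appears in Figure \ref{fig:create-vbo}.
\begin{lstlisting}[language=Java]
import java.nio.FloatBuffer;
import org.lwjgl.system.MemoryStack;
import static org.lwjgl.opengl.GL15.*;

public class Upload {
    // Sketch: copy vertex data into off-heap memory and hand it to the bound VBO.
    // The stack frame is freed automatically when the try-with-resources block exits.
    static void uploadVertices(float[] vertices) {
        try (MemoryStack stack = MemoryStack.stackPush()) {
            FloatBuffer buffer = stack.mallocFloat(vertices.length);
            buffer.put(vertices).flip();  // fill the buffer, then rewind it for reading
            glBufferData(GL_ARRAY_BUFFER, buffer, GL_STATIC_DRAW);
        }
    }
}
\end{lstlisting}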
Figure \ref{fig:create-vbo} shows the code needed to store the vertices of a triangle on a graphics card. Line two asks OpenGL to create a new buffer name and stores it. Next that buffer is bound so that all subsequent operations requiring an array buffer will use it. An array of floats representing the vertices of a triangle is created, stored in native memory, and finally transferred to the graphics card on line 22. A VBO is now stored on the graphics card, but a VAO must be created and bound before the VBO can be used.
\begin{figure}[h]
\lstinputlisting[language=Java]{code/vbo.java}
\caption{Creating a VBO and buffering vertex data with lwjgl.}
\label{fig:create-vbo}
\end{figure}
A vertex array object stores rendering state so that drawing can be done quickly, and one must be created before anything can be drawn. A VAO is bound and then set up so that when an object needs to be drawn in the future, the VAO only has to be rebound without any further instructions before drawing. A VAO simply keeps pointers to buffers, and a single buffer can, and should, be bound to multiple VAOs to save memory on the graphics hardware. Figure \ref{fig:vbo-vao-ebo} shows the relationship between vertex array objects and vertex buffer objects. Notice how the VAO contains what is essentially an array of pointers to buffers. OpenGL allows the pointer at each index to be set to a buffer containing information about what is being drawn. Vertex buffer objects often contain vertex position coordinates, but they can also contain other data used to render the element, such as color and texture data.
\begin{figure}[h]
\centering
\includegraphics[height=6cm]{vao}
\caption{Two VAOs with one VBO each \cite{devries2019}.}
\label{fig:vbo-vao-ebo}
\end{figure}
As shown in Figure \ref{fig:create-vao}, creating a VAO is similar to creating a VBO. The glGenVertexArrays method allocates space for a VAO on the graphics card and returns its name. The name is then bound with the glBindVertexArray method so that all future methods requiring a bound VAO use the newly created one. The previously created VBO is rebound on line 8 to ensure that the VAO points to the correct VBO. Line 11 sets the 0th index of the bound VAO to the currently bound VBO; its other arguments define the size, type, and layout of the data. The last step is to enable the 0th index on the VAO --- otherwise OpenGL will not pass the buffer when rendering.
\begin{figure}[h]
\lstinputlisting[language=Java]{code/vao.java}
\caption{Creating a VAO and binding a VBO to it.}
\label{fig:create-vao}
\end{figure}
A single vertex attribute in a VAO can hold at most four components, which makes it hard to pass larger pieces of data to shaders while rendering. Attribute data must also be set per VAO, which can make code repetitive when some data is shared between many VAOs. Uniforms are a type of variable that aims to solve this problem. Uniforms can store not only vectors but also matrices, and they are intended for data that is common between several elements. Figure \ref{fig:matrix-uniform} shows a uniform being created to store a 4x4 projection matrix.
\begin{figure}[h]
\lstinputlisting[language=Java]{code/uniform.java}
\caption{Creating a 4x4 matrix uniform.}
\label{fig:matrix-uniform}
\end{figure}
\subsection{Indexed Rendering}
Consider the case of drawing a square. This requires two right triangles of equal dimensions joined at their hypotenuses. Each triangle has only one unique vertex, with the other two being shared along the hypotenuse. As shown in Figure \ref{fig:square-coords}, this would have to be sent as six coordinates instead of only four. While these two extra coordinates may seem insignificant, this inefficient use of graphics memory can cause problems, especially when rendering complex scenes with hundreds or thousands of triangles. This problem can be solved with indexed rendering.
Indexing adds a layer of indirection when rendering. Rather than passing the coordinates to draw directly to OpenGL, a list of indices is passed, and OpenGL uses this list to look up the actual data in its buffers. The code to create an element buffer is very similar to that for creating VAOs and VBOs, so only a brief sketch is given after Figure \ref{fig:square-compare}; first, a comparison of the data being passed to OpenGL is provided.
Figure \ref{fig:square-compare} compares the code to draw a square with and without indexing. Notice how in Figure \ref{fig:square-indexed} each vertex is defined once, and the vertices used for each triangle are then defined in a separate array. Each number in the indices array represents which index in the vertices array should be used when drawing. For example, the value 0 in the indices array refers to the 0th entry in the vertices array, which is the top-left vertex.
\begin{figure}[h]
\centering
\begin{subfigure}[h]{0.49\textwidth}
\lstinputlisting[language=Java]{code/square.java}
\caption{Without indexing.}
\label{fig:square-coords}
\end{subfigure}
\begin{subfigure}[h]{0.49\textwidth}
\lstinputlisting[language=Java]{code/square-indexed.java}
\caption{With indexing.}
\label{fig:square-indexed}
\end{subfigure}
\caption{Creating a square with and without indexing.}
\label{fig:square-compare}
\end{figure}
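The sketch below shows the element-buffer step that was omitted above, assuming the usual lwjgl static imports and that the VAO from earlier is currently bound; binding a buffer to the GL\_ELEMENT\_ARRAY\_BUFFER target records it in that VAO.
\begin{lstlisting}[language=Java]
// Sketch of indexed rendering (GL11/GL15 static imports assumed, VAO already bound).
static void drawIndexed(int[] indices) {
    int ebo = glGenBuffers();                        // name for the element buffer
    glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, ebo);      // recorded in the bound VAO
    glBufferData(GL_ELEMENT_ARRAY_BUFFER, indices, GL_STATIC_DRAW);
    // Six indices describe the two triangles that make up the square.
    glDrawElements(GL_TRIANGLES, indices.length, GL_UNSIGNED_INT, 0);
}
\end{lstlisting}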
\section{The Rendering Pipeline}
Now that a VAO and VBO have been created, OpenGL is almost ready to draw. The only thing left to do is tell OpenGL what to do with the data it receives from VAOs, VBOs, and uniforms. When OpenGL receives a draw command, it runs the VAO through a pipeline to determine what to draw and how to draw it. This pipeline is responsible for positioning, rasterizing, and coloring primitives. It is divided into several stages, many of which are configurable through programs written in the OpenGL Shading Language (GLSL). Figure \ref{fig:pipeline-flow} shows a simplified view of the OpenGL pipeline, and Figure \ref{fig:pipeline-visualized} shows how the input is transformed throughout it. While OpenGL allows several of its stages to be programmed or configured by the user, some stages are fixed-function, meaning that OpenGL does not allow any customization of them. Notable fixed-function stages include the vertex fetch stage, the primitive assembly stage, and per-sample processing. The vertex fetch stage is the first stage in the pipeline and feeds it with data from VAOs and VBOs. The primitive assembly stage performs an optimization known as face culling to ensure that only visible triangles are processed. Per-sample processing is the final stage in the pipeline; through a process called depth testing it ensures that previously drawn elements are not overwritten unless the new fragment is closer to the viewer, and it writes the surviving fragments to the framebuffer to be displayed.
While there are several configurable stages in the pipeline, only two must be defined for rendering to occur. These two stages are the vertex shader and fragment shader. The vertex shader tells OpenGL where a primitive is located, while the fragment shader contributes to the final color of a primitive during rendering.
\begin{figure}[h]
\centering
\begin{subfigure}[h]{0.29\textwidth}
\includegraphics[width=\textwidth]{pipeline}
\caption{A simplified OpenGL pipeline \cite{sellers2016}. Boxes with square corners are programmable by the user.}
\label{fig:pipeline-flow}
\end{subfigure}
\begin{subfigure}[h]{0.49\textwidth}
\includegraphics[width=\textwidth]{pipeline-visualized}
\caption{Visualization of data flowing through the OpenGL pipeline \cite{overvoorde2019}.}
\label{fig:pipeline-visualized}
\end{subfigure}
\caption{The OpenGL rendering pipeline.}
\label{fig:pipeline}
\end{figure}
In order to customize the OpenGL pipeline, a shader program must first be created. Once it is created, shaders can be compiled and attached to it, and the program can then be linked. The resulting executable runs on the graphics hardware whenever OpenGL renders. Figure \ref{fig:shader-program} shows the code needed to create an OpenGL shader program, and a minimal render loop that uses such a program to issue a draw command is sketched below.
\begin{figure}[b]
\lstinputlisting[language=Java]{code/shaderProgram.java}
\caption{Creating a shader program with a single shader.}
\label{fig:shader-program}
\end{figure}
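With the program linked, OpenGL has everything it needs to draw. The loop below is a minimal sketch, assuming the GLFW window, shader program, and VAO created earlier are available as \texttt{window}, \texttt{shaderProgram}, and \texttt{vao}, and that the usual GLFW and OpenGL static imports are in place.
\begin{lstlisting}[language=Java]
// Minimal render-loop sketch (GLFW, GL11, GL20, and GL30 static imports assumed).
while (!glfwWindowShouldClose(window)) {
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT); // start from an empty frame
    glUseProgram(shaderProgram);      // the linked shader program
    glBindVertexArray(vao);           // the VAO configured earlier
    glDrawArrays(GL_TRIANGLES, 0, 3); // send one triangle through the pipeline
    glfwSwapBuffers(window);          // display the finished frame
    glfwPollEvents();                 // handle window events
}
\end{lstlisting}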
\subsection{Writing Shaders}
As mentioned before, shaders are written in the OpenGL Shading Language, which is based on C. Figure \ref{fig:vertex-shader} shows a simple vertex shader. The first line declares the version of GLSL that the shader uses, which corresponds to OpenGL 3.3 for this shader. Lines starting with "layout" tell OpenGL what input the shader expects; in this case the shader expects two inputs --- vectors of size 3 and size 4. The location indicates at which index of the VAO these vectors are found. The 0th index was set in Figure \ref{fig:create-vao} and represents the position of the element. The creation of the 1st index was omitted for brevity; however, as the shader shows, it represents the color of the element. Lines starting with "out" indicate what data this shader passes on to later stages in the pipeline. Uniforms are global variables set by the application rather than supplied per vertex. This shader contains only one uniform, a 4x4 projection matrix.
The main function is where the work is done. gl\_Position is a special variable that tells OpenGL the final position of an element. The input position is multiplied by the projection matrix and assigned to gl\_Position; the color received by the shader is passed along unchanged to the next stage.
\begin{figure}[h]
\lstinputlisting[language=C]{code/vertex.glsl}
\caption{A simple vertex shader.}
\label{fig:vertex-shader}
\end{figure}
The fragment shader partially determines the color of pixels. Figure \ref{fig:fragment-shader} shows a basic fragment shader which receives a color from the vertex shader in Figure \ref{fig:vertex-shader} and outputs the color as-is.
\begin{figure}[h]
\lstinputlisting[language=C]{code/fragment.glsl}
\caption{A simple fragment shader.}
\label{fig:fragment-shader}
\end{figure}
\subsection{Textures}
Textures allow images to be stored and used while rendering. While OpenGL supports many kinds of textures, only standard 2D textures are covered here. The process for creating a texture follows the same pattern as for VBOs: a texture name must first be created and bound, and then data can be loaded into the texture. This code is shown in Figure \ref{fig:texture-init}. Once the texture is initialized, it can be used in a fragment shader.
The fragment shader uses a special kind of uniform type called a sampler. Samplers represent textures in shaders \cite{sellers2016}. The fragment shader uses a special function called \textit{texture}, which takes a sampler and a coordinate. The coordinate represents where in the texture image the fragment should sample its data from. In Figure \ref{fig:texture-fragment} the shader receives this coordinate from the vertex shader.
\begin{figure}[h]
\centering
\begin{subfigure}[h]{0.49\textwidth}
\lstinputlisting[language=Java]{code/texture.java}
\caption{Creating a texture buffer and loading image data into it.}
\label{fig:texture-init}
\end{subfigure}
\begin{subfigure}[h]{0.49\textwidth}
\lstinputlisting[language=C]{code/texture-fragment.glsl}
\caption{Loading texture data in the fragment shader.}
\label{fig:texture-fragment}
\end{subfigure}
\caption{Using textures.}
\label{fig:texture-code}
\end{figure}
\section{Conclusion}
With all of these concepts, a basic OpenGL program can be created. Figure \ref{fig:final-product} shows a program\footnote{Full program source available at \url{https://github.com/ShepherdJerred-homework/seminar-application}} that renders a simple rotating textured cube. It uses all of the previously covered material and most of the code samples shown above. While this program is simple, it composes many concepts and lays the foundation for more complex applications. Simple games can be created with these concepts alone; further graphics work mostly amounts to rendering more elements and more complex objects.
\begin{figure}[h]
\centering
\includegraphics[height=8cm]{textured-cube.png}
\caption{A spinning textured cube.}
\label{fig:final-product}
\end{figure}
Computer graphics are crucial to providing easy-to-use experiences for users and to creating modern entertainment and other visual media. OpenGL allows programmers to render graphics by writing simple shader programs and feeding input data into the OpenGL pipeline. That data can then be efficiently rendered and displayed on screen. The pipeline transforms the input and lets programmers control how it is rendered. While there is a lot of setup overhead in graphics rendering, that work allows applications to get the most out of the graphics hardware.
\clearpage
% References
\bibliography{bibliography}{}
\bibliographystyle{IEEEtran}
\end{document}