
Introduction to OpenGLES and GLSL programming


Formal Metadata

Title
Introduction to OpenGLES and GLSL programming
Title of Series
Number of Parts
102
Author
License
CC Attribution 4.0 International:
You are free to use, adapt and copy, distribute and transmit the work or content in adapted or unchanged form for any legal purpose as long as the work is attributed to the author in the manner specified by the author or licensor.
Identifiers
Publisher
Release Date
Language

Content Metadata

Subject Area
Genre
Abstract
This foundation talk describes the basic concepts of the OpenGL ES 2.0 real-time rasterizer. We will explain the different stages of the rendering pipeline, briefly introduce the mathematics involved, show the boilerplate code required to set up an OpenGL ES program, and finally look at the real fun stuff, which is the GLSL language used in vertex and fragment shaders. From notebooks and smartphones to embedded systems and game consoles, every modern computing platform contains chips for hardware-accelerated 3D rendering. The OpenGL standard and API describes the drawing directives provided by these chips and is used to compose and animate user interfaces and to render interactive virtual scenes. Basically, every pixel that you see has been processed by an OpenGL pipeline. Engines like Unity3D provide a convenient way to describe and render three-dimensional scenes without having to deal with the low-level drawing directives. But this convenience makes it difficult to understand the path by which your logic becomes pixels, and coding close to the hardware can be a lot of fun. After watching this talk, you will have a better understanding of the pipelines that are used to create the pixels on your screen. If you already know a high-level programming language such as C/C++, Java or Go, the examples provided will help you get started with coding your own 3D app, game or demo.
Transcript: English (auto-generated)
translation on the internet. It's about 3D programming on
Raspberry Pis mainly, and Falkert is talking about that because he likes to do this stuff, especially in his spare time, since he thinks his job is a bit boring. But, welcome, Falkert! Thank you! Thank you. Thanks for the introduction. I'm a system architect, so that's not always boring, but it's very different from programming 3D graphics, so when I get bored with all the really abstract cloud stuff, then doing some coding very close to the hardware, to the
metal, is actually a nice distraction. So that's what I've been doing in the past few months: basically playing around with the Raspberry Pi and seeing what its GPU can do with OpenGL. So I wanted to give a short summary of how that works, so that ideally, after watching this talk, you know where to start if you want to write your own OpenGL 3D software. So, this talk is about OpenGL ES programming. We have three parts. We first talk a bit about the OpenGL ES concepts, how the rendering pipeline works, and the shaders in general. Then there will be a tiny bit of mathematics and a little bit of code, and then finally, in the third part, we're going to have a look at example code. The plan there is basically to have a little logo of the Chaos Communication Camp spinning around. A slight disclaimer: unfortunately, for various reasons, that didn't actually quite work yet, so you're not going to see any kind of smooth animations in this talk. But if you check back later, like in the next few days, I'm confident that we can make it work, and that should give you a good introduction to the whole thing. I'm going to put the slides on my website; they're probably going to be on media.ccc.de as well. Now, what's not in this talk: OpenGL is a very complex topic, there are lots of different things to it, so there's quite a lot that we just don't have time for in 45 minutes. So, if you're missing anything, feel free to ask me about it later. Right. Let's get started.
OpenGL ES. A brief introduction. We start with OpenGL, the Open Graphics Library. It's an API for rendering 2D and 3D computer graphics. Like all good things, it was invented in the 1990s, developed by Silicon Graphics, Inc. (SGI). Nowadays, it's an open standard maintained by the Khronos Group consortium, and it's used all over the place: computer-aided design, visualisations, desktop compositing (so all the nifty effects on your macOS or iOS), machine art, games obviously, virtual reality, augmented reality,
that's all basically using OpenGL under the hood. Of course, there's lots of abstraction layers available, stuff like Unity 3D and other engines, but it all boils down to OpenGL eventually. OpenGL ES is OpenGL for
embedded systems. It's a more current standard. There's implementations by all major manufacturers, Nvidia and AMD, the most important ones building the GPUs. OpenGL ES is available on all major operating systems and platforms, so you get it on Linux, Mac, Windows, and all, and there's bindings for pretty much all the
major programming languages: C, C++, Java, even Python and Ruby. In general, you do want to use a compiled language, though, for speed. I personally like to use Go for that, because it's generally a bit tighter and has roughly the same speed as raw C, but the syntax is much more readable and maintainable. The latest version is OpenGL ES 3.2; the Raspberry Pi with Raspbian Stretch supports OpenGL ES 2.0, which is good enough. All the important concepts are already in place, so that's already very cool.
Then there's one more thing, which is called EGL, a native platform interface, which you need in order to tell your environment, your operating system, to give you an OpenGL context. Before you can start writing or running your OpenGL code, you need to first get an OpenGL context. This works very differently depending on whether you're on Linux or Windows, whether you want a full-screen or a windowed application, whether it's on a phone or somewhere else. In general, that's called EGL, but for different platforms there are different other libraries: WGL, CGL, GLX. Mesa is an open-source implementation. So basically, if you want to program on your platform, then you need to find some kind of interface that will set up OpenGL for you, and that code will be platform-specific. Once you have the context, the code is pretty much portable between different platforms, which is a nice
thing about OpenGL, abstracting away from that. Right, so, just very briefly: OpenGL is a rasterizer. That means that the goal is to describe some kind of virtual scene. A virtual scene will have geometry, so kind of like pyramids and cubes and stuff, or if you want to have a bunny, then you need to describe that bunny as triangles, and each piece of geometry will also have some kind of colour or material associated with it. The geometry we represent as triangles, and the triangles we represent as vertices. So basically, in this example, that cube we can describe with its eight corners; each corner is a vertex, the corners describe the edges between them, and the edges describe the surfaces of the cube. And then, on the right-hand side, if you want to have a nice logo on there, this would be a texture. Textures come from image data, so bitmap data, like a PNG, for example. The way that the rasterizer works is we describe our geometry in world coordinates, so this is all still
like your code in the GPU, in three-dimensional space. The rasterizer then takes all that geometry and projects it onto a projection plane, a two-dimensional surface, which is basically your screen. After that, we have two-dimensional fragments; the vertices are called fragments after they have been projected, and these fragments then get mapped with textures or colours, depending on what you tell it to do. But basically, those are the three stages: you describe your geometry in three-dimensional space, that gets flattened down, projected onto a surface, and then for each of the triangles, which are now flat on that surface, we figure out how we want to colour them. And if we have a triangle on that surface with three colours, then the rasterizer will interpolate the values between those colours so that, in the end, every pixel on your screen has a desired colour. That's a very high-level
description of the rasterizer. Let's look at the architecture of OpenGL ES specifically. On the slides, you can see, on the left-hand side is your OpenGL program. That's the code that you write in C or C++ or Go, and that's running on your central processing unit on the
CPU. On the right-hand side, in contrast, is the GPU, the graphics processing unit, so your graphics card, which is much faster. On there, we have two things that run, code that you write yourself: a vertex shader and a fragment shader. Those are written in the GLSL shading language, and those basically do the projection and the texture mapping that we saw before. So your OpenGL program will upload your geometry data to the GPU into vertex memory. You will then also upload your
texture data, so your bitmaps and whatnot, into texture memory. You want to do that as little as possible because the bridge between CPU and GPU is slow and expensive, but once it's on the GPU, your vertex shader and fragment shader can access the data very quickly. So your OpenGL program uploads the vertex
data, uploads the texture data, specifies the vertex shader and fragment shader, and then the way it works is, when you render your scene, the vertex shader is responsible for taking all your geometry, and then transforming it, moving it around, scaling it, and
ultimately projecting it. It then gets passed into the rasterizer stage, which does the actual projection, figuring out which of those vertices end up where on the 2D surface, throwing away a few vertices as well, and there's lots of rules. But ultimately, you end up with a bunch of fragments that then are
passed into your fragment shader, and the fragment shader takes those fragments, also accesses the textures and texture memory, and then writes basically colour values into the fragment pipeline, and afterwards, basically, the stuff that you draw pops out in your frame buffer, shows up on your screen, and you
see what you've done. So much for the architecture. Let's talk a bit about the shader language in which you write the vertex and fragment shaders. It's a domain-specific language with a C-like syntax, but it's manageable, with only the very basic features of C. It comes with built-in vector and matrix types; we're going to talk about those in part two in a bit more detail. Basically, you have types for two-dimensional, three-dimensional, and four-dimensional vectors and for matrices built in, and you also have a bunch of mathematical functions, like the trigonometric functions sine and cosine, square root, exponent, log, and vector multiplication, which we're also going to mention in a bit more detail. You don't have any recursion: you can write functions, but a function is not allowed to call itself. And there's only limited support for loops, so you can have loops, but you need to know at compile time how often they're going to run. This is because the rendering pipeline is very tight for speed, and the GPU needs to know how long things will take in order to set everything up correctly.
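To make that concrete, here is a small fragment of GLSL (ES 2.0), not code from the talk, just an illustration of the built-in vector and matrix types, the built-in maths functions, and a loop with compile-time bounds:

    precision mediump float;

    uniform mat4 u_transform;   // built-in 4x4 matrix type
    uniform vec3 u_lightDir;    // built-in 3-component vector type

    float brightness(vec3 normal) {
        // dot() and normalize() are built-in vector functions
        return max(dot(normalize(normal), normalize(u_lightDir)), 0.0);
    }

    vec4 wobble(vec4 position, float t) {
        // loops are allowed, but the bounds must be known at compile time
        for (int i = 0; i < 4; i++) {
            position.x += 0.01 * sin(t + float(i));
        }
        return u_transform * position;  // matrix * vector multiplication
    }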
So, we'll have a quick look in detail at how the vertex shader and the fragment shader work, because that's really the core of the whole thing: all the exciting stuff happens in the shaders. We're going to look at what inputs and outputs each of those shaders has, starting with the vertex shader. As mentioned, the vertex shader runs once for each vertex that you use to describe your geometry. So if you have a triangle, the vertex shader will run for each of the three points. If that triangle is part of a cube, it will be six faces times two triangles times three vertices, so 36 vertices all together, and the more complex the scene, the more vertices you have. Each of those vertices is processed by the vertex shader. The main inputs for the vertex shader are called attributes, vertex attributes. Those are different for each vertex.
You can define them yourself, so you can specify exactly which ones you need. Typically this is a position, but commonly also a texture coordinate associated with the vertex. We can also just directly give a colour that we want to colour that vertex in later, or whatever you really need. So those are under your control; you input them as attributes, vertex attributes. There's also the option to give global parameters that are the same for each vertex that the vertex shader processes. Those are called uniforms, and that's typically, for example, your projection matrix, so the way that you define exactly from which point you're looking at your scene and how it gets projected onto the surface. The vertex shader is also responsible for passing on any data that might be needed by the fragment shader. This you do with varying variables, which typically you would take from the attributes as input: you basically pass them through, which you have to do explicitly, and then they will be available to the fragment shader. The most important role of the vertex shader is, of course, to specify the final position of each vertex that it processes, and you do that by setting the gl_Position built-in variable, a four-dimensional vector. Typically, you multiply the position by some kind of matrix to get the final position in the scene.
Going onwards to the fragment shader: this runs for every fragment, which is a rasterised, projected-down vertex on the surface. Each vertex gets projected down by the rasterizer and then becomes a fragment, the fragment shader runs for each of those fragments, and the fragment shader's task is to set the final colour of each of those fragments. We have, again, per-fragment input, as described before: this is what the vertex shader is supposed to supply to the fragment shader in the varying variables. Here we again see the texture coordinate and colour as varying variables. And then, most importantly, we want to get to the texture data: usually, if you want to colour one of your fragments, you want to use some kind of bitmap in order to draw a logo or something, and this you do by using a sampler uniform variable. And then, finally, as I said, the task of the fragment shader is to specify the final colour of each fragment. This you do by setting the gl_FragColor special variable. So that's the basics. Quite confusing at the moment, I'm sure, but it will all become a bit clearer once we look at the
actual code. But before we can do that, it's getting slightly more confusing, I'm afraid. We have to look a little bit at what kind of mathematics are required to model what we're doing. So, linear algebra is the mathematical branch that we're dealing with here. In my opinion, it's easier to learn
algebra by doing programming than to learn programming by doing algebra, so, in general, it's good if you just play around with it a bit. We want to describe our scene as three-dimensional objects. Those objects we describe with three coordinates, X, Y, Z,
which means we have three axes. We use a right-hand coordinate system. Basically, you do this: your thumb is the X axis and needs to point to your right, your index finger points up, that's the Y axis, and your middle finger points towards yourself, the Z axis. That's how you remember how those relate; there are also left-handed coordinates, which work differently, so this is a good way to remember which one we use. We need two basic mathematical constructs, vectors and matrices. Vectors describe coordinates in space or
on the surface, and matrices describe transformations. I'm not going to go too much into detail, basically just giving you an overview of what you need to know and study, maybe in uni or maybe look online. Vectors can be used to describe positions, three-dimensional X, Y, Z,
two-dimensional X and Y if it's, say, a texture coordinate on top of a surface. You can also use vectors to describe colours; in this case you have red, green, blue, and alpha components. There are two operations that are important for vectors. There's the scalar multiplication, also called the dot product: you multiply a vector with a vector and you get a real number. If you multiply a vector with itself, you get the square of its length, which can come in very handy for doing calculations. If you multiply two unit vectors with each other, you get the cosine of the angle between them, so if you multiply the X and the Y axis vectors, you get the cosine of 90 degrees, which is zero. Also very useful. And then, finally, there's the cross product, the vector multiplication: multiplying two vectors with each other yields another vector, and that vector will be orthogonal to both, so if you multiply the X-axis vector with the Y-axis vector, you get an orthogonal Z-axis vector. That is very useful if you do lighting and shading. So much for the vectors.
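As a tiny illustration in Go, using the mgl32 package (github.com/go-gl/mathgl/mgl32) that the example code uses later, not code from the talk itself:

    x := mgl32.Vec3{1, 0, 0}
    y := mgl32.Vec3{0, 1, 0}

    fmt.Println(x.Dot(x))   // 1: a vector dotted with itself gives its squared length
    fmt.Println(x.Dot(y))   // 0: the cosine of the 90-degree angle between the unit vectors
    fmt.Println(x.Cross(y)) // [0 0 1]: the cross product is orthogonal to both inputs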
So, matrices describe transformations in three-dimensional space. They have 4x4 dimensions, so 16 cells. Fortunately, you don't really need to understand the matrices in detail, because OpenGL has a lot of helper functions to produce them. What are the transformations that we can do? We can have a translation matrix, which basically moves a vector or a set of vectors in space, to the left, up, to the back, or wherever. We can rotate a vector around another vector by a given angle with a rotation matrix, and we can scale vectors or sets of vectors, so make them taller or wider, or all of that together. And there's also the projection matrix, which takes a vector from R3 and puts it into two-dimensional space. There are multiple kinds of projection matrices; the most popular is the perspective projection matrix, which is a vanishing-point perspective and basically means that things that are closer appear bigger than things that are further away. But ultimately, those matrices will make a vector become two-dimensional. There are two matrix operations that are important to us. If you multiply a vector by a matrix, you get a new vector that has the operation represented by the matrix applied to it, and you can multiply matrices with each other, which basically chains the operations each matrix represents, so that you can keep a whole set of transformations in a single matrix. You only have to do the multiplication once.
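In Go with mgl32 that could look like this sketch (the helper names are real mgl32 functions, but the concrete values are just examples):

    translate := mgl32.Translate3D(1, 0, 0)            // move 1 unit along x
    rotate := mgl32.HomogRotate3DY(mgl32.DegToRad(90)) // rotate 90 degrees around y
    scale := mgl32.Scale3D(2, 2, 2)                    // double the size

    // Chain all three into a single matrix; the multiplication happens once.
    transform := translate.Mul4(rotate).Mul4(scale)

    p := mgl32.Vec4{0, 0, 1, 1} // a point (w = 1)
    q := transform.Mul4x1(p)    // apply all three operations in one go
    fmt.Println(q)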
So that's it, a very brief overview of the required algebra. Let's look at the code. We're going to look at a single-file script, hellocamp.go. The goal is to render the logo of the Chaos Communication Camp on a square in the middle of the screen, and then make it bounce around, rotate around the y-axis a little bit, and we want to specify the colours ourselves, but take the actual shape of the logo from a texture. The example code, which, again, I sadly cannot demo for you today, but I'm sure it's going to be sorted soon, you can download on my website. It should work on a Raspberry Pi 3 Model B with Raspbian Stretch pretty much out of the box.
So, your OpenGL programme has in general the following programme flow. First, you need to configure the OpenGL context, which, using EGL, just tells the machine where to do your OpenGL, the environment in which to run your OpenGL code. That's all pretty boring. After you have your OpenGL context configured, you need to initialise your scene, and this has multiple steps: you need to initialise your geometry data as vertex data, you need to initialise your colour and material stuff as texture data, and you need to initialise your vertex shader and fragment shader, which then get linked together into a shader programme. Then you usually also want to initialise some camera, which you can use to control from which point you're looking at your scene and to manipulate that later. We're going to look at all of that in more detail. After you have initialised your scene, you enter your render loop, and the render loop really just updates the scene, either from user input or just because some time has elapsed for animation, and then draws the scene and starts all over again.
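Sketched in Go, the overall flow might look like this; configureContext, initScene, updateScene and drawScene are just placeholder names for the steps described above, not functions from the actual hellocamp.go:

    func main() {
        configureContext()               // EGL: get an OpenGL ES context from the OS
        initScene()                      // vertex data, texture, shaders, camera
        for {                            // the render loop
            updateScene()                // animate based on elapsed time or user input
            drawScene()                  // issue the draw calls
            time.Sleep(time.Second / 60) // aim for roughly 60 frames per second
        }
    }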
scene and then start all over again around 60 times per second. So, configuring the context looks very different depending on your platform. On the Raspberry Pi, it's basically that. There's a call called create context. You can check for error. You can ask then your
library about the width and the height of your screen which will become handy later, and then you say that should be the current context and you initialise the GL subsystem. Scene initialisation. Basically, you need to initialise all the stuff as described, so that looks pretty straightforward. You also should keep track of the
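In Go, that might look roughly like the following; the egl package here stands in for whichever EGL wrapper your platform uses, so those names are placeholders, and gl refers to OpenGL ES bindings such as github.com/go-gl/gl/v3.1/gles2:

    ctx, err := egl.CreateContext() // hypothetical wrapper around the platform's EGL calls
    if err != nil {
        log.Fatal(err)
    }
    width, height := ctx.Size()       // screen size, handy for the projection matrix later
    ctx.MakeCurrent()                 // make this the current context
    if err := gl.Init(); err != nil { // initialise the GL subsystem
        log.Fatal(err)
    }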
Scene initialisation: basically, you need to initialise all the stuff described before, and that looks pretty straightforward. You should also keep track of the starting time, so you get the current timestamp and store it in a variable. Let's look in more detail at what each of those initialisation steps looks like. Initialising your geometry, your vertex data: the easiest way we can do that is by just creating
a static array of floats which then can be passed as vectors. In this example, we have two triangles, one triangle with the vertices ABC and the next triangle with the vertices ACD. Each of the vertices is given in a single line in the variable, and
each of those vertices has multiple components, so the position XYZ, that's the position in space, then the texture coordinate that belongs to it, so where the texture should be mapped onto that specific vertex, and then finally the colour red, green, blue, and alpha that we want
to use at that corner of the triangle. So that's just a constant, a hard-coded variable, and then you tell OpenGL to load that geometry, the vertex data, into the GPU's vertex memory: you basically generate a buffer with glGenBuffers, bind the buffer, then call glBufferData, pointing it at the vertex data and telling it how much data that is, and then you check for errors to see whether something went wrong.
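A sketch of what that can look like in Go with the go-gl bindings; the concrete coordinate, texture and colour values here are made up, not the ones from hellocamp.go:

    // Two triangles, ABC and ACD, forming a square. Per vertex:
    // position x,y,z | texture coordinate u,v | colour r,g,b,a
    vertices := []float32{
        -0.5, -0.5, 0, 0, 1, 1, 0, 0, 1, // A
        0.5, -0.5, 0, 1, 1, 0, 1, 0, 1, // B
        0.5, 0.5, 0, 1, 0, 0, 0, 1, 1, // C
        -0.5, -0.5, 0, 0, 1, 1, 0, 0, 1, // A
        0.5, 0.5, 0, 1, 0, 0, 0, 1, 1, // C
        -0.5, 0.5, 0, 0, 0, 1, 1, 0, 1, // D
    }

    var vbo uint32
    gl.GenBuffers(1, &vbo)                          // generate a buffer
    gl.BindBuffer(gl.ARRAY_BUFFER, vbo)             // bind it
    gl.BufferData(gl.ARRAY_BUFFER, len(vertices)*4, // 4 bytes per float32
        gl.Ptr(vertices), gl.STATIC_DRAW)
    if e := gl.GetError(); e != gl.NO_ERROR {
        log.Fatalf("uploading vertex data failed: 0x%x", e)
    }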
Next, you want to initialise your texture data, in this case a PNG showing the logo of the Chaos Communication Camp. First, we load the logo from file, which is the PNG. We then use that to draw into a Go image.RGBA image object: we draw the data we just loaded into there, and then we have it in the required RGBA format. So, after those first four steps, we have the texture data in a variable, and then we need to tell OpenGL to load that data into texture memory. Again, typically you just use glGenTextures, you set the active texture, then you bind the texture, and then the glTexImage2D call tells OpenGL to take the data and put it into the GPU's texture memory.
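In Go, with the standard image, image/draw and image/png packages plus the go-gl bindings, this step might look like the following sketch:

    f, err := os.Open("logo.png") // 1. load the logo from file
    if err != nil {
        log.Fatal(err)
    }
    defer f.Close()
    img, err := png.Decode(f) // 2. decode the PNG
    if err != nil {
        log.Fatal(err)
    }
    rgba := image.NewRGBA(img.Bounds())                          // 3. allocate an RGBA image
    draw.Draw(rgba, rgba.Bounds(), img, image.Point{}, draw.Src) // 4. convert into RGBA format

    var tex uint32
    gl.GenTextures(1, &tex)
    gl.ActiveTexture(gl.TEXTURE0)
    gl.BindTexture(gl.TEXTURE_2D, tex)
    gl.TexParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.LINEAR)
    gl.TexParameteri(gl.TEXTURE_2D, gl.TEXTURE_MAG_FILTER, gl.LINEAR)
    w, h := int32(rgba.Rect.Size().X), int32(rgba.Rect.Size().Y)
    gl.TexImage2D(gl.TEXTURE_2D, 0, gl.RGBA, w, h, 0, // upload into the GPU's texture memory
        gl.RGBA, gl.UNSIGNED_BYTE, gl.Ptr(rgba.Pix))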
Next, we initialise the vertex shader. The vertex shader we can just keep in a static string, a hard-coded string in the Go source. This vertex shader is very simple. It takes a uniform camera matrix, which we will use to specify the projection and to do the rotation, and it takes the position, texture coordinate, and colour attributes that we specified in the vertex data before. We then pass the texture coordinate and the colour on to the fragment shader in varying variables, and finally we just say that the GL position should be the camera matrix multiplied by the position vector. So, a simple shader of roughly three lines.
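Reconstructed from that description, the vertex shader might look roughly like this; the attribute and uniform names (u_camera, a_position, and so on) are illustrative, not necessarily the ones used in hellocamp.go:

    const vertexShaderSource = `
    uniform mat4 u_camera;     // projection * view * rotation, set from the Go side

    attribute vec4 a_position; // per-vertex position
    attribute vec2 a_texCoord; // per-vertex texture coordinate
    attribute vec4 a_color;    // per-vertex colour

    varying vec2 v_texCoord;   // passed on to the fragment shader
    varying vec4 v_color;

    void main() {
        v_texCoord = a_texCoord;
        v_color = a_color;
        gl_Position = u_camera * a_position;
    }`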
And then, since we now have that string, we need to tell OpenGL to load and compile that shader, which again you do with glCreateShader, then glShaderSource and glCompileShader, and then you check for errors. After that, the vertex shader is in compiled form on the GPU. Next, we do the same with the fragment shader. Also very simple: it takes a texture uniform variable as input, then the two varyings that are passed from the vertex shader, the texture coordinate and the colour. Inside the shader, we first find out the colour of the texture at the specific texture coordinates, so we use the texture coordinate varying to look up the colour value inside the texture uniform, and then we specify gl_FragColor, the final colour of that fragment: we want to use the red, green and blue as specified in the vertex attributes, but we take the alpha value from the texture that we loaded into texture memory.
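Again reconstructed from the description, with the same illustrative naming as above, the fragment shader might look like this:

    const fragmentShaderSource = `
    precision mediump float;

    uniform sampler2D u_texture; // the logo texture uploaded earlier

    varying vec2 v_texCoord;     // interpolated values supplied by the vertex shader
    varying vec4 v_color;

    void main() {
        vec4 texel = texture2D(u_texture, v_texCoord); // look up the texture colour
        // red, green, blue from the vertex colour; alpha from the texture
        gl_FragColor = vec4(v_color.rgb, texel.a);
    }`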
Then we again need to do the same thing as before, exactly the same code: glCreateShader, glShaderSource, glCompileShader, and check for errors. After that, we have both our shaders in compiled form. Now we need to link them together into the shader program, which looks very similar: we call glCreateProgram first, then attach both shaders to that program, tell it to link the program, and check for errors again. After that, you have the shaders linked, so the varying variables will know of each other, and the last thing you need to do is specify where to find your vertex data, which you do with calls to glVertexAttribPointer. You give it three blocks of code that do basically the same thing, first for the position, then for the texture coordinate, then for the colour: we just say that at this point in memory there are that many floats following each other that represent the given vertex attribute. After that, the program is in memory and ready to go.
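With the go-gl bindings, compiling, linking and describing the vertex layout can be sketched like this (error handling abbreviated; the 9-float stride matches the position/texcoord/colour layout shown earlier):

    compile := func(source string, shaderType uint32) uint32 {
        shader := gl.CreateShader(shaderType)
        src, free := gl.Strs(source + "\x00")
        gl.ShaderSource(shader, 1, src, nil)
        free()
        gl.CompileShader(shader)
        var ok int32
        gl.GetShaderiv(shader, gl.COMPILE_STATUS, &ok)
        if ok == gl.FALSE {
            log.Fatal("shader compilation failed")
        }
        return shader
    }

    program := gl.CreateProgram()
    gl.AttachShader(program, compile(vertexShaderSource, gl.VERTEX_SHADER))
    gl.AttachShader(program, compile(fragmentShaderSource, gl.FRAGMENT_SHADER))
    gl.LinkProgram(program)

    stride := int32(9 * 4) // 9 float32s per vertex: 3 position, 2 texcoord, 4 colour
    posLoc := uint32(gl.GetAttribLocation(program, gl.Str("a_position\x00")))
    gl.EnableVertexAttribArray(posLoc)
    gl.VertexAttribPointer(posLoc, 3, gl.FLOAT, false, stride, gl.PtrOffset(0))
    texLoc := uint32(gl.GetAttribLocation(program, gl.Str("a_texCoord\x00")))
    gl.EnableVertexAttribArray(texLoc)
    gl.VertexAttribPointer(texLoc, 2, gl.FLOAT, false, stride, gl.PtrOffset(3*4))
    colLoc := uint32(gl.GetAttribLocation(program, gl.Str("a_color\x00")))
    gl.EnableVertexAttribArray(colLoc)
    gl.VertexAttribPointer(colLoc, 4, gl.FLOAT, false, stride, gl.PtrOffset(5*4))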
Finally, we want to set up our camera. For this, we first set up our projection matrix. We compute the aspect ratio of the screen by dividing its width by its height, we decide on a field of view, in this case 45 degrees, and then the mgl32.Perspective call returns a projection matrix with that specific ratio and field of view, plus the near and far projection planes. Next, we want to specify where in space our camera is. We want to have the camera a bit down the z-axis so that we can look onto the origin of the coordinate system; as you might have noticed, we described our triangles sitting directly on the origin, so we want to look at them from a bit further away. That's what the mgl32.LookAtV call does: it takes the position of the camera, the position it should look at, and a vector that says which way is up. Now that we have the projection and view matrix variables, we can multiply them together into a single camera matrix variable, which will then be used in the shader. We just start with the identity matrix, multiply in the projection matrix, multiply in the view matrix, and now we have initialised our camera.
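With mgl32 the camera setup can be sketched like this (the near/far planes of 0.1 and 10 and the camera position are example values; width and height come from the context setup):

    aspect := float32(width) / float32(height) // screen ratio
    projection := mgl32.Perspective(mgl32.DegToRad(45), aspect, 0.1, 10.0)
    view := mgl32.LookAtV(
        mgl32.Vec3{0, 0, 3}, // camera position: a bit down the z-axis
        mgl32.Vec3{0, 0, 0}, // looking at the origin, where the triangles sit
        mgl32.Vec3{0, 1, 0}, // "up" is the y-axis
    )
    camera := mgl32.Ident4().Mul4(projection).Mul4(view) // combined camera matrix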
So that's the complex stuff. The initialisation of the scene is typically the most verbose code that you need to write, which is fine because we only need to run it once at the beginning. Now that the scene is initialised, we can start with the render loop. As mentioned before, we need to update our scene and then redraw it. Updating, in this simple example, is pretty straightforward. We just want to know how much time has elapsed since the start of the program, which we get by taking the current timestamp and subtracting the start time from it. That gives us a floating-point number of seconds. From this number we can generate the angle by which we want the triangles to rotate: we take the sine of the elapsed time and multiply it by 45 degrees. The sine returns values between one and minus one, so the angle will oscillate between 45 and minus 45 degrees. With that angle, we can now generate a rotation matrix. This is done by the mgl32.HomogRotate3DY function, which takes an angle and returns a rotation matrix that rotates by that angle around the y-axis. Now that we have the rotation matrix, we can multiply it by the camera matrix that we set up before, and then we tell GL to use a pointer to the first element of that matrix for the uniform that we defined in our vertex shader. So after this update step, the uniform variable in the vertex shader will have the value of that rotation, computed dynamically from the elapsed time, as well as the projection and view matrix that we set up before.
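As a sketch, using the uniform name from the shader sketch above and the start timestamp stored during initialisation:

    elapsed := float32(time.Since(start).Seconds())
    angle := mgl32.DegToRad(45) * float32(math.Sin(float64(elapsed))) // between -45 and +45 degrees
    rotation := mgl32.HomogRotate3DY(angle)                           // rotation around the y-axis
    matrix := camera.Mul4(rotation)                                   // camera * rotation

    loc := gl.GetUniformLocation(program, gl.Str("u_camera\x00"))
    gl.UniformMatrix4fv(loc, 1, false, &matrix[0]) // point GL at the first element of the matrix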
Now that we updated the scene, we need to draw the scene so that we can see the results of that. And again, this is pretty straightforward because we did all the hard work already. Basically, we need to clear the screen, so delete or remove all the color buffer and depth buffer information. We need to tell it to use the program that we have
set up before. We need to tell it to use the buffer that we have set up before and to use the texture that we have set up before. With that in place, it's a simple call to glDrawArrays. In this case, we tell it to paint six vertices as triangles, which means two triangles, three vertices each.
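In Go that drawing step can be sketched as follows (variable names as in the earlier sketches):

    gl.Clear(gl.COLOR_BUFFER_BIT | gl.DEPTH_BUFFER_BIT) // wipe the colour and depth buffers
    gl.UseProgram(program)                              // the linked shader program
    gl.BindBuffer(gl.ARRAY_BUFFER, vbo)                 // the vertex data
    gl.BindTexture(gl.TEXTURE_2D, tex)                  // the logo texture
    gl.DrawArrays(gl.TRIANGLES, 0, 6)                   // 6 vertices = 2 triangles
    if e := gl.GetError(); e != gl.NO_ERROR {
        log.Fatalf("drawing failed: 0x%x", e)
    }
    // Depending on your EGL wrapper, a buffer swap then makes the finished frame visible.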
And then, again, check for errors. The scene should now be drawn, which means we start again: go back into the render loop and render at 60 times per second. Usually, you would have a sleep of about 1/60th of a second there. There are more sophisticated ways to get smoother frame rates, which I cannot go into at this point, but basically, this is the tight loop that you want to run as often as possible, at a minimum of 60 times per second.
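The simplest version of that timing, reusing the placeholder names from the program-flow sketch above:

    // Fixed sleep aiming at roughly 60 frames per second.
    time.Sleep(time.Second / 60)

    // Slightly better: subtract the time the frame itself took.
    frameStart := time.Now()
    updateScene()
    drawScene()
    if d := time.Second/60 - time.Since(frameStart); d > 0 {
        time.Sleep(d)
    }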
So that's about it. With that, I'm very sorry that I'm not able to give you any kind of actual animations or visuals that show you what I've been talking about for the last half an hour. Sorry for that; I will make it work, I'm confident, in the next few days. So please, if you're interested, go to the URL that I showed you before, download the program, and then you can study it in detail,
as it contains all the code that we looked at in the talk just now. And finally, there are some resources if you're interested in more details. khronos.org publishes reference pages for all of their standards, which is very useful; it's an open standard, so you can get them without registering or anything. At the top URL, there are also handy reference cards that you can use while programming. There's a book called OpenGL ES 2.0 Programming Guide, which I find very useful to give you an overall view of the whole thing. And there used to be the bible, the OpenGL Programming Guide, the red book, which is now a bit outdated, unfortunately, but might still be worth a look. That's it. Thank you very much. Thank you, Falkert.
Do we have any questions? There are microphone angels, one here, one over there. And there's a signal angel, which I am unable to see. No questions from the internet, no questions in the room. Oh, there's one question. Go ahead, please. Hi, thanks for the talk.
As far as I understood, if you want to update the scene, you calculate the transformation matrix you need, like when you change the angle of view; you calculate the transformation matrix in your language of choice, like Go or Python or whatever, and then you upload
the transformation matrix to the GPU, and then you redraw the scene by telling the GPU to redraw every vertex with this new transformation matrix. Is that correct? That is what I said, and that is correct. However, that's not the only way to do it. You can basically also
just pass the time into the vertex shader and then do the same calculations on the GPU, or you could generate the angle on the CPU, then pass the angle as a uniform into the vertex shader and then do the matrix generation there. It depends on, you know, what's the most expensive part in this example.
We chose the more straightforward way, the one that demonstrates best how to do it, but yeah, in the end, you can do it on either side, and that's a bit of the art of it: deciding what's easier to do on the CPU in Go, and what's easier to do in the vertex shader. It really depends on the nature of the thing. So I could just pass the new angle to the GPU
and then do all the calculations? Yeah, you can pass the start time and current time, then you don't need to even pass the angle, you just do all that, or you calculate the angle from it, pass only the single float, or you do the matrix calculation and pass the whole matrix, it's really up to you.
But keep in mind that you need to pass a matrix anyway for the projection and view matrix, so it makes some sense to multiply in the final rotation on the CPU side as well. But again, it depends on how many of those you have and how often you need to do it and all that. Thanks.
Any more questions? Wow. That's rare. Such a well-held presentation and no more questions left. So a warm hand to Falkert
and thank you for your talk.