
Creating the next hit game with FOSS tools


Formal Metadata

Title
Creating the next hit game with FOSS tools
Alternative Title
Making the next blockbuster game with FOSS tools: Using Free Software tools to achieve high quality game visuals.
Title of Series
Number of Parts
561
Author
License
CC Attribution 2.0 Belgium:
You are free to use, adapt and copy, distribute and transmit the work or content in adapted or unchanged form for any legal purpose as long as the work is attributed to the author in the manner specified by the author or licensor.
Identifiers
Publisher
Release Date
Language

Content Metadata

Subject Area
Genre
Abstract
More than ever before, it is now possible to create high-quality game visuals with little effort by using free and open source software tools. The combination of Blender, GIMP, Krita and Godot Engine running on desktop Unixes gives developers unprecedented creative power to achieve professional-quality visuals, with ease of use and productivity surpassing that of proprietary software. This presentation is a step-by-step description of the tools and techniques used by an artist and a programmer to create the Godot Third Person Shooter demo in three weeks, broken down as follows:
- Level blocking using Blender
- Level testing using Godot
- Character modelling using Blender
- Character rigging and animation using Blender Actions
- Character import using Godot
- Animation Tree and State Machine in Godot
- Character and enemy VFX in Godot
- Level hard-surface modelling using Blender
- Texture painting using GIMP or Krita
- Level importing to Godot
- Level lighting and post-processing setup using Godot real-time global illumination
- Level audio setup using Godot
Transcript (English, auto-generated)
Hello, everyone. Thanks for coming to this talk. I'm pretty amazed at how many of you have come, so thanks for coming. I've been making video games for more than 20 years, almost as long professionally.
But even before then, since I was a teen, I've always had a deep passion for open source. I've been using Linux desktops since around '94 or '95, so it's been a long time. When I started working professionally in game development, most of the game development industry used Windows,
which was always a problem for me, because I didn't want to have to use it. In fact, if you consider the total revenue of the software industry, about 25% of it is the video game industry. And for something so big, open source hasn't taken hold in the game industry as much as it has in enterprise and other kinds of industries.
I always wanted to develop games with open source, and I actually did in the past. So, after many years and many different turns, I ended up being one of the original authors of Godot Engine, which you have probably heard of at this point, together with Ariel Manzur. I'm not sure if he's around, but he also came to FOSDEM. Godot was open sourced about four years ago, maybe a bit more, probably five by now, and it's been growing a lot.
So, about this talk: it's called creating the next hit game with free and open source software tools. The idea is to show that it can be done, and I will talk a bit about our process for creating our new third-person shooter demo, which you probably already saw at our booth. So, let's get started.
What's the motivation? Godot 3 was released exactly a year ago, and we wanted a demo to showcase all the new rendering features. Before that, Godot was mostly an engine for making mobile games, but for version 3 we rewrote it, mostly the rendering, so we could make something really nice. The screenshots you see are from our new renderer that came out a year ago.
But we didn't have anything to showcase it with, so even though it was pretty good, you couldn't show what it could actually do. So, we made a new demo. Before that, we had these simple 2D and 3D platformer demos. Godot is a game engine where you can work in 2D and 3D separately; they are like two engines in one.
They work mostly the same, but with separate 2D and 3D APIs. So, we have this platformer demo, which is almost the same code in 2D and 3D, so you can just migrate from 2D to 3D, which is something many developers do at some point in their career. But the thing is, our 3D platformer demo didn't look very nice.
I made the art for the 3D one. The original 2D one was made by an artist called Fernando Calabro, and I tried to replicate his style in 3D, but I'm really a terrible 3D artist, so it looked pretty bad. But I knew that Godot 3 was capable of much better.
So, this is where the journey begins. We used donation money; we have a Patreon, and if you're not one of our sponsors, please become one. We used some of the donations to hire an artist, actually the same guy who made the pixel art, because he's a really good 3D artist.
Actually, he's a professional 3D artist; the pixel art was done as a hobby. So, we hired the same guy. To make this demo, we started with concept and prototyping. I've worked in the game industry long enough to be very used to this: if you do a really good concept and prototype, and you do it properly, you can get really high quality later on.
You ensure that all the problems are solved in this phase and not later. So, we started planning. The idea was that it should look really nice, with a futuristic design, because futuristic styles are very trendy; most artists want to do this nowadays. Ten years ago it was medieval, and now it's futuristic.
We wanted to make it curved, because something that looks very amateur in 3D is making everything square and blocky. So, we wanted to make something really pretty, which means curved, which looks far more professional. We wanted multiple rooms, and it had to be short but pretty. So, this is roughly the idea.
It's kind of a round level: you go around the deck and then you get to a richer room, as you will see soon. So, first we started with level blocking. The artist made the level blocking in Blender: he took all the ideas and made really basic geometry. I'm not sure if you can see it, because from here the contrast is not very good, but
he made all the blocking. For everything that was going to be in the level, he made a mesh. It's really simple: no textures, very flat, no detail, nothing. The idea was to test the game flow, test the dimensions of the level, and also test the lighting, to get a sense of how the general idea was going to work.
Afterwards comes character prototyping. If there is one thing you learn in the game industry, it's that when you prototype, the art doesn't have to be final. You can even put cubes in place of your characters, or maybe slightly more detailed geometry. But the animation has to be near-final or final, because
the animation is part of your gameplay in almost every type of game. If you're going to do something, the animations have to be near-final. Using placeholder animation is asking for trouble, because animation is almost always part of the core mechanic, of how the game feels. So, animation was one of the first things we did. This is a test model.
It looks pretty complicated, but it's not the final one. This was done in Blender: we modelled and animated the character there, and this was done by Fernando. So, never use animation placeholders. When you make a prototype, try to make sure the animation is final from the very beginning, especially in 3D games.
Later, we did gameplay prototyping. We didn't really have a level yet; sorry, for some reason the video doesn't look very good. The level wasn't modelled, so we started prototyping in this very black-and-white scene. Usually when you prototype video games, you use these squares,
which I think are two meters by two meters, so you have a better idea of distances. These are just practices that are typical in the game industry. Interaction and control must feel final in a prototype; you should rarely plan to make something you will improve later. If it doesn't feel final in the prototype, you're doing it wrong.
So, this is the animation tree. Godot 3.1, which is coming soon, has an animation tree, which means you can create all the animation interactions: state machines, state changes, blend spaces. This is very common in modern game engines. It allows the animator to test all the interactions, transitions and blendings
manually from a UI, and then the programmer can take this tree and use it from code, changing state from walking to jumping to anything. So, animation trees are a very friendly way to communicate between the animator and the programmer and make sure everything works as intended.
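The division of labor described above can be sketched in code: the animator defines the states and transitions (with their blend times) as data, and the programmer only asks the machine to travel between states. This is a minimal illustrative sketch in Python, not Godot's actual AnimationTree API; all names are made up.

```python
# Minimal sketch of an animation state machine like the one an animator
# builds in an animation tree. The transitions and blend times are the
# animator's data; the programmer only calls travel(). Illustrative only.
class AnimationStateMachine:
    def __init__(self, start):
        self.state = start
        self.transitions = {}  # (from_state, to_state) -> blend time

    def add_transition(self, src, dst, blend_time=0.2):
        self.transitions[(src, dst)] = blend_time

    def travel(self, dst):
        """Change state if the animator defined a transition for it."""
        key = (self.state, dst)
        if key not in self.transitions:
            raise ValueError(f"no transition {self.state} -> {dst}")
        self.state = dst
        return self.transitions[key]  # blend time the engine would use

# The animator set these up in the UI; the programmer just changes states.
sm = AnimationStateMachine("idle")
sm.add_transition("idle", "walk", 0.3)
sm.add_transition("walk", "jump", 0.1)
blend = sm.travel("walk")
```

The point of the design is that blending and transition rules live in the animator's data, so gameplay code never hard-codes animation timing.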
We use root motion. Root motion is a technique where the animator animates with actual motion. Originally in games, you would animate a walk cycle in place, so when you put it in the game, the feet would slide; that was very common for a long time, until games started doing root motion properly.
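The idea can be sketched very simply: each animation frame carries a root translation delta, the engine strips it from the skeleton, and the gameplay code applies it to the character, so the feet never slide. A hypothetical sketch, not engine code:

```python
# Sketch of consuming root motion: the animation carries per-frame root
# deltas; the engine removes them from the skeleton and gameplay code
# accumulates them onto the character's position. Purely illustrative.
def apply_root_motion(position, root_deltas):
    """Accumulate per-frame root translation onto the character position."""
    x, y = position
    for dx, dy in root_deltas:
        x += dx
        y += dy
    return (x, y)

# Three frames of a walk cycle moving the root forward along x:
pos = apply_root_motion((0.0, 0.0), [(0.1, 0.0), (0.12, 0.0), (0.1, 0.0)])
```

Because the character moves exactly as far as the animation says, the motion stays synchronized with the feet.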
So, we use root motion, which means, as you can see in the video (sorry, it's a bit broken), that when you change states, the animation and the blending change, and the motion stays totally synchronized with the feet of the character. If you look at the previous slide, this one, you can see the animation actually moves the object.
It's not in place. In the engine, we remove this root translation, this transform; it is retrieved from code and used to actually move the character with the game logic. So, VFX. VFX is something that also needs to be pretty final in a proper prototype. It may not be exactly the final version,
but it has to be very close to final, because both VFX and animation are part of the interaction, which is part of the core mechanic. So, VFX has to be final from the prototype. Godot is very cool here: it has something no other engine has, an animation system where you can animate pretty much anything in the engine.
You can even change a mesh, change a texture, or call functions. You can do so much from the animation system in Godot that you can make VFX really nicely. So, this is a combination of particles, physics and shaders that change parameters, all controlled by an animation; this shooting effect is done from there.
You can see a barrel here and how it looks when it explodes. Sorry again for the video. So, this is an interaction: you can see that it explodes, the pieces are changed to rigid bodies so they just blow up, and the fire starts emitting. This is all done from animation; there's no code involved.
So, texturing. Before modelling, you usually want to do texturing, so you can get an idea of how to approach it. I will explain the texture workflow a bit. There are two texture workflows you can use. If you're going to make a game, it's important to understand how to create textures: where they can come from, how you can use them, and so on.
By the way, the new GIMP, 2.10, is fantastic. It's really, really good; I will show you a bit later why. You can find base textures in many places: there are many royalty-free textures on the internet. People take pictures of things just to make textures.
You can also create them from photos, which is what I will explain later. I wish there were an open source alternative to Substance Painter, but there isn't yet, so we are waiting on Blender to do it. Hello, if anyone from Blender is here, please hurry up after 2.8 is out. Textures can also be painted with Krita; many types of games use hand-painted textures.
Remember games like Wind Waker? They have very pretty hand-painted textures. I wanted to go more in depth on the workflow for hand-painting textures in Krita, but the presentation was already really long, and you really need artistic skill to do it, so I will skip it and maybe extend on that some day.
I will explain how to make a texture from a photo. You can do this in both GIMP and Krita; GIMP is probably better suited for photo manipulation. Here's an example. I took this picture myself at some Roman ruins while visiting Europe.
This floor is pretty old. When I took the picture, I thought it would make a nice texture some day. The photo is like 10 years old, but I will use it to show you how to turn it into a texture. So, you first have to look for a region that can tile.
Most photos have a region that would tile nicely; in this example, it's kind of obvious where it can tile. If it's not totally aligned, you can use the cage transform or perspective transform tools to align it to a box, and then crop it.
Something pretty useful you can do in GIMP is offset the layer by half the texture size: the texture is moved half to the right and half down, wrapping around as if tiled. That lets you explore how well it tiles. Here, you can see it doesn't really tile well, for two reasons.
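The layer-offset trick can be reproduced in code: shifting the image by half its size with wraparound moves the former edges into the center, where the tiling seams become visible. A small sketch over a grid of pixel values (illustrative, not a GIMP script):

```python
# The "offset layer by half the size" trick as code: shift the image
# half its width right and half its height down, wrapping around.
# The former edges meet in the middle, exposing the tiling seams.
def offset_half(image):
    h = len(image)
    w = len(image[0])
    return [[image[(r + h // 2) % h][(c + w // 2) % w] for c in range(w)]
            for r in range(h)]

# Tiny 2x2 "image": after the half offset every pixel swaps diagonally.
img = [[1, 2],
       [3, 4]]
shifted = offset_half(img)
```

Applying the offset twice brings the image back, which is why the trick is non-destructive for inspection.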
First, it just doesn't match from one side to the other. Second, it has a gradient: when we tile it here in 3D, you can see the gradient, which makes the tiling look pretty bad. To get rid of the gradient, you can use the high-pass filter, which was added in GIMP 2.10. A
high-pass filter removes the low frequencies, so all the gradients go away and the image stays pretty flat. You can see here, if you look in detail at the center cross, it's still not matching very well, but it tiles much better because the gradients are gone.
So, to fix the tiling problem, you just use the clone tool a bit. That's the easiest and most common way to do it. There is another technique, which I will show in a bit, that is pretty cool for fixing tiling. Then you can adjust brightness, saturation, contrast and hue, and make it as pretty as you want.
You can see here that it's tiling really well; this view has perspective at the bottom, and it's tiling very well. Low contrast usually looks better: try to make your textures more color-focused and less contrast-heavy, and they will usually look better in the game. Next, I explain how to create normal maps; I will do it quickly.
Usually, if you want to make normal maps, you start from what's called a height map. You can see one there in the middle: white is raised and black is recessed. Once you have a height map, you can use the normal map effect in GIMP; I'm not sure if it comes built in or as an extension. There are also other applications you can download that take a height map and return a normal map.
This is what the normal map gives you: the surface looks like it has depth when lit. Some effects may break the tiling; you can fix it again with what you have just learned. Embossing can work for creating a height map: on top, you can see the embossing of the original texture, which you can use as a rough height map.
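What a height-to-normal filter does can be sketched directly: the slope of the height field in x and y becomes the normal's x and y components, with z pointing out of the surface. A pure-Python sketch under simple assumptions (clamped borders, a made-up `strength` knob similar to what real tools expose):

```python
import math

# Sketch of a height-map-to-normal-map conversion: finite differences of
# the height field give the slope; the normal tilts against the slope and
# is normalized. `strength` exaggerates the bumps. Illustrative only.
def height_to_normal(height, strength=1.0):
    h = len(height)
    w = len(height[0])
    normals = []
    for r in range(h):
        row = []
        for c in range(w):
            # central differences, clamped at the borders
            dx = (height[r][min(c + 1, w - 1)] - height[r][max(c - 1, 0)]) * strength
            dy = (height[min(r + 1, h - 1)][c] - height[max(r - 1, 0)][c]) * strength
            n = (-dx, -dy, 1.0)
            length = math.sqrt(n[0] ** 2 + n[1] ** 2 + n[2] ** 2)
            row.append(tuple(v / length for v in n))
        normals.append(row)
    return normals

# A completely flat height map yields straight-up normals (0, 0, 1).
flat = height_to_normal([[0.5, 0.5], [0.5, 0.5]])
```

Real tools then remap the unit vectors into the familiar bluish RGB encoding.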
Here's a really simple magic trick I love for tiling textures. You can see the original texture doesn't really tile well, neither vertically nor horizontally. It's a nice texture, but it doesn't tile. So, I take the base layer and duplicate it into another layer above.
GIMP has a filter that makes a texture seamless. All it does is create a crossfade of the texture with itself. It does make it seamless, but the result looks pretty bad. So, I apply that filter only to the duplicated layer above, and then I use the eraser to erase everything but the border.
I leave just the border and merge it down into the layer below. You can see on the right, it looks fantastic. This is, in my opinion, the best way to tile textures: it's effortless, it could even be automated with a script, and it looks really good. This is a small technique you can check back on in the presentation later online.
And this is really good for tiling textures. So, on to modelling. This first technique is not one I used for the demo, but I would like to explain it, because it's very common in the industry. You make an atlas with shapes related to the way you are going to model a level.
Here, you can see things like two planks of wood with some detail, something round, and something that looks like a wall panel. You paint the height map; I painted this one myself in GIMP.
From the height map you get the normal map, then convert it into an occlusion map, and use the rest for metalness and roughness, which is common in PBR workflows. I'm not going to explain PBR; you can find a lot about it online. Once you have those textures in your atlas of random shapes, you just make objects that use those shapes.
Just be creative: you make the shapes first, and then you think about how to make objects that use them. So, I made some furniture: a window, a table, a chair, another table. With this you can model really quickly; you just make the models and UV-map them onto the atlas texture.
It's very quick. You should use roughly the same color for everything, because in 3D with mipmaps the atlas will bleed a bit; if neighboring shapes share a color, it's not really noticeable. The big thing about this technique is that you can create a lot of content fast. It works really well for futuristic settings; I read that Overwatch uses this technique for many of its levels, so that may be of interest.
So, this is an example. I made all this furniture and the walls and rendered it, and it took me about three hours. It's really quick: using this technique from scratch, you can make a level very fast.
So, this is geometry-based modelling, which is what we actually used for the demo, the new third-person shooter demo for Godot. It's a really nice, very modern technique, because you can only do it nowadays: GPUs are super fast, and the number of vertices in a model doesn't really matter that much. What you do is model all your detail as geometry, and forget about painting detail into the texture.
The texture painting may add something very subtle, but 90% of your detail is just geometry. If there's a hole in the middle of a thing, model the hole. If there's a small fin or something, model it. Every detail goes into the geometry; model everything.
Forget about using the texture to paint the detail. Just go crazy; it's okay. New GPUs don't really care about the polygon count; it's little work for them. Then what you can do is use triplanar mapping, also called auto-texturing: a technique that blends between texture projections along the three axes.
The texture is projected along each axis and blended depending on the direction the face is pointing. Godot has this technique built in. The nice thing is that you can set up a lot of materials and then just assign them: the metal material, the tube material, the screw material, and so on. I know that, for example, many companies in the animation industry,
like DreamWorks, use this technique: somebody models everything, and then someone else makes the materials and assigns them to the objects. It's a really nice technique, and you can get really nice results. This picture is mostly done with it.
It has a bit of texture painting, but you can see it's mostly geometry, and you can make it look really nice. Here you can see a few more examples. You can just go crazy with geometry; it doesn't matter, it will look cool.
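The blending at the heart of triplanar mapping can be sketched numerically: a shader samples the texture projected along x, y and z, then weights each sample by how much the surface normal faces that axis. A small sketch of the weight computation (the `sharpness` exponent, which tightens the transition, is a knob engine shaders typically expose; names are illustrative):

```python
# Sketch of triplanar blend weights: each of the three axis projections
# is weighted by how much the face normal points along that axis, and
# the weights are normalized so they sum to 1. Illustrative only.
def triplanar_weights(normal, sharpness=1.0):
    wx, wy, wz = (abs(n) ** sharpness for n in normal)
    total = wx + wy + wz
    return (wx / total, wy / total, wz / total)

# A face pointing straight up uses only the top (y) projection:
up = triplanar_weights((0.0, 1.0, 0.0))
# A diagonal face blends all three projections equally:
diag = triplanar_weights((1.0, 1.0, 1.0))
```

Because the weights depend only on the normal, no UV unwrapping is needed, which is exactly why it pairs so well with detail-heavy geometry.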
And then you just assign the materials to everything. So, level design. I will show it in Blender if I have time; I still have a bit, so when I'm done with the presentation, I will show some of the project live.
So, then the artist converts all the blocking that was done initially into final assets: they are given more detail and textured. This is Blender. The idea is that the artist has full freedom; the level is actually made in Blender. Many people think it's better to model separate pieces and assemble them in the game engine.
In professional work, that usually turns into a mess, because the game designer doesn't really know how to put a level together from pieces. It's better to agree on what the level is, and then the artist can go wild with all the art and detail, as long as it conforms to what was originally agreed with the designer.
So, the pipeline is much easier this way, in my opinion. Collisions are also added manually here. In Godot, you just add a "-col" suffix to the name of the object, and when you import it into the engine, a collision shape is generated for it, with the same shape as the object you modelled.
But sometimes a model is really complicated, and physics engines are really slow when the collision has that many triangles. So, it's very common to make a simplified version of the model, just the essential vertices and faces, purely for collision. You make it in Blender and add the "-colonly" suffix, which means it's only for collision, not art.
When imported, the visual part is deleted. Here's an example of how it works: this is the base model, and the collision is another object. I set it to wireframe, so it's easier to see which one is the collision and which is the object.
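The naming convention can be sketched as a small classifier: the importer inspects node-name suffixes to decide whether to generate collision and whether to keep the visual mesh. This is just the idea in Python, not Godot's actual importer code:

```python
# Sketch of the import-hint naming convention described above: a "-col"
# suffix means "also generate collision", "-colonly" means "collision
# only, delete the visual mesh". Not Godot's real importer, just the idea.
def classify_node(name):
    if name.endswith("-colonly"):
        return {"mesh": False, "collision": True}
    if name.endswith("-col"):
        return {"mesh": True, "collision": True}
    return {"mesh": True, "collision": False}

wall = classify_node("wall-col")        # visible mesh plus collision
proxy = classify_node("pipes-colonly")  # invisible collision proxy
decor = classify_node("banner")         # art only, no collision
```

Encoding the hints in the names means the artist controls collision entirely from Blender, with no extra export settings.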
Using all this very complex geometry in the physics engine would kill it, so you need to make something simpler. Then, in Blender, you make the base materials. Blender doesn't really support the same kind of materials as the engine (well, the new Blender will), so it's easier to make something very simple in Blender as a reference for the materials.
When imported into the game engine, you set up the actual materials and finish with the textures and everything. Always use very verbose names, so that in the game engine you can find the materials; if you have something like "Material.001" from Blender, you're never going to find it. The same goes for lights: you can just put lights everywhere.
When imported into the engine, the lights will be there too. I will skip ahead. One important thing to note is that in the industry, most 3D applications use the FBX format for exporting 3D assets. We can't use it in Godot, because it has a very restrictive license.
It's not compatible with open source. The only open formats we can use are DAE, which is Collada, which is old, and glTF 2.0, which is newer. Both work really well for exporting to game engines. There's now an effort to reverse engineer the FBX format: it's a very closed format with no specification, just a closed library to open it.
Since it's being reverse engineered, I hope we can support it soon, because people who use Maya or similar software really request it. So, this is the last part: engine integration, making everything pretty. In Godot, you just export into the project, and it will open it. I will show you this later, running in the engine.
So the idea is, I will explain quickly, you just import this, and the materials, you just put all the textures. Just when you import it, it doesn't have any material. It's flat. Here, the materials were added, but there's no lighting. Finally, we start modifying the lights and everything.
In Godot, you have something called instancing. The scene that was imported, you can't touch, because the engine can't edit the COLLADA or glTF file. So what we do is instantiate that scene into another one, which is very common in Godot. You set it as editable, and you can make local changes in the new one. This way, you can use the original scene that was exported from the 3D software without changing it.
Why is this important? Because the artist might go and keep changing the scene in Blender, and then will re-export it. If you make local changes directly on that scene in the engine, they will be lost. So in Godot, you instantiate the scene from the artist and make local changes in the instance.
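The re-import behavior described above can be illustrated with a small sketch. This is plain Python, not Godot's actual API; the function and scene names are invented purely to show why instance-local overrides survive when the artist re-exports the source scene.

```python
# Conceptual sketch: the imported scene data is never mutated; local
# changes made in the engine live in a separate override set and are
# re-applied on top of whatever the artist exports next.

def apply_overrides(imported_scene, overrides):
    """Return the scene as the engine shows it: import plus local changes."""
    merged = dict(imported_scene)   # never touch the imported data itself
    merged.update(overrides)
    return merged

# First export from Blender.
imported = {"mesh": "robot_v1", "material": "metal"}
# Local changes made in the engine on the *instance*, not the source file.
local = {"material": "rusty_metal"}

print(apply_overrides(imported, local))
# {'mesh': 'robot_v1', 'material': 'rusty_metal'}

# The artist re-exports with a new mesh; the local override is kept.
imported = {"mesh": "robot_v2", "material": "metal"}
print(apply_overrides(imported, local))
# {'mesh': 'robot_v2', 'material': 'rusty_metal'}
```

Had the change been made directly on the imported data, the re-export would have wiped it out, which is exactly the problem instancing avoids.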
So the artist will just save the scene again. We re-import it automatically, and now all the changes are kept. Finally, the same with the lighting. You can just set lights and shadows and everything. Then you have the final step, which is like global illumination, which is the light bounces. Usually, you set up the direct lights.
When you set up lights, that's direct lighting, which means it's just where the light hits directly. Global illumination is what adds the bounce. If I throw a very, very strong light at the floor, it will bounce and spread across the whole room. So global illumination is pretty much that. Godot has something called GIProbe.
You set an area, and all the lighting inside will just bounce around. It looks pretty nice. You can see the comparison above and below. And then you can add pretty effects like ambient occlusion, screen-space reflections, depth of field, blur, fog, anti-aliasing, bloom, and everything.
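As a toy illustration of the bounce idea (this is just the concept in plain Python, not Godot's GIProbe algorithm): a strongly lit colored surface re-emits part of the light, tinted by its own color, onto the rest of the room.

```python
# One diffuse bounce: the indirect light a surface contributes is the
# direct light it receives, scaled by its albedo and a bounce factor.

def bounce_light(direct_intensity, surface_albedo, bounce_factor=0.5):
    """Indirect light from one diffuse bounce, per RGB channel."""
    return [direct_intensity * a * bounce_factor for a in surface_albedo]

# A strong white light hitting a red floor bounces red light into the room.
red_floor = [0.9, 0.1, 0.1]
print(bounce_light(2.0, red_floor))  # mostly-red indirect contribution
```

Real global illumination repeats this for many surfaces and several bounces, but the effect the demo shows, colored light spreading from lit surfaces, is this idea.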
So in the end, it ended up looking like this. You can see it looks pretty nice; I mean, it's kind of what games look like. It has all the detail, and it's very nice. Since we still have a bit more time, I will show you this working and running in the engine.
I will show you first the demo, if you've never seen it before, how it finished. Give me a second. This follows the planning. You can just walk around the stage.
You can see that the character moves around, and all the lighting from the environment affects it.
This is the real time. If I get close to this, you can see this emission material is affecting the character. There's the enemy.
So yeah, that's how it ended up looking. So, thanks.
I will show you a bit how it is done in the engine, so you can see a bit more detail about the process. I will open the player scene. This is the player.
You can see here, this is something that Godot does, where it says scene root. This is actually a COLLADA file, which was exported directly from Blender and instantiated in the current scene, which is the player scene. The player scene has a base node, which is a kinematic body,
which allows for moving with physics and everything. It has a lot of animations and everything you may expect. This is everything imported from Blender, all the animations in Blender. This is the animation tree. For example, you can see that there is a blend filter,
and the eyes are an animation that's running in parallel that makes it blink no matter what is going on. This is called animation tree. You can, for example... Where is this? This is a blend added for aiming up and down. This is a blend node. For example, this one-shot node you can turn on,
and it's going to jump. It doesn't look very interesting now. No, actually, that's for landing, not jumping. This is mostly a state machine. You can change the animations. You'll probably notice that when you set it to walk, the character stays in place; this is because there is a blend space here.
So depending on where the blend space position is, you actually need to drive the animation. This is this one. See, this is walking with the gun; change the blending. On this axis, it's switching between gun and no gun. This is just the animation speed.
If I change this, you can see it just changes the animation speed. According to where I am, I can just start walking, then when I shoot, it just takes out the weapon and stores it back. There's a lot of interesting things you can do with the animation tree. Let me show you the scene.
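The blend space behavior shown a moment ago, weighting two animations by a position along an axis, can be sketched in plain Python (illustrative only, not Godot's implementation; the point positions are made up):

```python
# 1D blend space: two animations sit at positions on an axis; the blend
# position between them decides how much weight each animation gets.

def blend_weights(pos, point_a, point_b):
    """Linear weights for two animations placed at point_a and point_b."""
    t = (pos - point_a) / (point_b - point_a)
    t = min(max(t, 0.0), 1.0)      # clamp outside the segment
    return (1.0 - t, t)            # (weight_a, weight_b)

# Say "no gun" sits at 0.0 and "gun" at 1.0; halfway blends both equally.
print(blend_weights(0.5, 0.0, 1.0))   # (0.5, 0.5)
print(blend_weights(0.25, 0.0, 1.0))  # (0.75, 0.25)
```

The engine then mixes each animation's poses by these weights, which is why the character only moves once the blend position is actually driven.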
This one, right? It's pretty big, we'll take it just a bit lower. Okay. This is the actual level scene
imported from Blender. You can see this huge green thing is the GI probe. This is what gives all the light bounces. You can set it up around anything and it will give you light bounces. It's really quick to set up. Here you can see everything, like the lights on the enemies. You can see that this is what I mentioned before.
This is actually the scene as imported from Blender, instantiated here. It's great because, with editable children, you can still modify any of the properties of its child nodes, and when you reimport the original file, the changes are kept. This is for the enemy scenes.
Here are the enemies; you have different robots. You have areas for the sound. For example, here you can see we have a mixer, the audio mixer in Godot. This means that I can, for example, set up an area around where this is. I can give it any shape, and I can say that any sound that plays inside that area goes to a given bus. Outside, I set another area and say that any sound there goes to a different bus. Then here, as you can see, we have a lot of audio effects, like amplify, delay, everything. We can, for example, use a smaller reverb for the small area and a bigger reverb for the big area, and the sounds are sent there. The music is just a node: you turn it on and it plays the music, pretty much. You have reflection probes. There are a lot of things you can use here: particle systems and different types of nodes.
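The area-to-bus routing just described can be sketched like this. This is plain Python with made-up names, just to show the idea of sending a sound to the bus of whichever area contains it:

```python
# Each area is a box with a target audio bus; a sound playing at some
# position is routed to the bus of the first area that contains it.

def pick_bus(position, areas, default_bus="Master"):
    """areas: list of (min_corner, max_corner, bus_name) boxes."""
    for lo, hi, bus in areas:
        if all(l <= p <= h for p, l, h in zip(position, lo, hi)):
            return bus
    return default_bus

areas = [
    ((0, 0, 0), (10, 5, 10), "SmallRoomReverb"),       # small reverb inside
    ((-100, -10, -100), (100, 50, 100), "OutsideReverb"),
]
print(pick_bus((5, 1, 5), areas))      # SmallRoomReverb
print(pick_bus((50, 1, 50), areas))    # OutsideReverb
```

Each bus then carries its own effect chain (a small reverb for the room, a big one for outdoors), so a sound picks up the acoustics of wherever it plays.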
You can see something very interesting if you check it out: these are the original designs, pixel art the artist made 10 years ago for this demo, and this is the 3D. The same artist who made the pixel art 10 years ago did the 3D 10 years later. He tried to keep the designs as much as possible, but it shows how much this technology has improved over the years, I guess. This concludes the presentation.
If there are any questions, we still have 10 minutes, I think. Are there any questions? Hi.
During your talk, you mentioned that the poly count doesn't matter. What I'm wondering is if you profiled this. Can you speak louder? During the talk, you mentioned that the poly count doesn't matter while you're making the models. I'm wondering if you've profiled this on integrated GPUs, and if you compared it to a high-to-low-poly detail pass with normal map baking. In general, how can I explain? GPUs keep improving. Maybe artists 15 years ago were very careful about not adding two extra polygons. When I say it doesn't matter, I mean that, in general, artists are still using
as many vertices as they need. As technology keeps improving, you can use more and more. Of course, you can always have too many vertices or too many faces, but for what an artist needs to make it look nice, it really doesn't matter so much. To give you an idea: this main character is about 40,000 polygons, and it's fine.
It's going to run everywhere; it's not going to be any problem. In Godot, it's compressed, so when you open it, it will load fast and run fast. You can also see that in the level, the artist didn't model everything you see from scratch. He reused a lot of objects.
There are probably 30 base objects that were all reused. A section was put inside a cylinder. There are actually not that many pieces that make up the level you have just seen; there's a lot of reuse. As most pieces are reusable, there are not that many unique pieces being drawn in general, so you can go wild with geometry there.
It's fine; it's not going to matter that much. It's not like you're making something completely unique. Professional 3D artists in general know that the more they can reuse what they do, the faster they will work; they will be more efficient than if they just keep modeling more and more from scratch. In general, for a level like this,
normally you don't have that many pieces or anything like that, so it's normal that you can just use higher geometry and be fine with it. If you're making something like, I don't know, Assassin's Creed, where you can have a 10-kilometer view distance, of course you probably need to use LODs, to spend less geometry on the things that are far away, and occlusion culling on different things. In general, for what you see up close in a game nowadays, as long as you have mesh streaming for LODs and everything, it really doesn't matter that much. We still have plenty more time for questions out there.
Yeah, it was about the use case that you mentioned, Assassin's Creed. If you have to see an object 10 kilometers away, how do you deal with it in Godot? I know that there are some techniques like decimation or level of detail. How does Godot deal with it?
Version 3.1, which is coming now, still doesn't handle LOD very well because, as I told you, the new 3D engine is just one year old. We are going to port the engine to Vulkan next year and add all the missing options, but the general idea with LOD
is that you make models with high geometry. If you're going to make a game that has a very long draw distance, usually what you do is make versions that have less geometry and are much simpler. There are two ways of doing it: either you make the two versions yourself, or when you import, you generate them by decimating the model, making lower-detail geometry automatically, like automatic LODs. This is also very common. One of the nice things about automatic LODs is that when the game loads, at first it just loads all the low-detail versions of the models, and only whatever is close gets loaded, gets streamed, in high detail. You give the game a fixed amount of memory to load the models, so when it runs out, because you're moving and it needs to load a new high-detail one, you use an LRU approach, and it's going to free the high-detail versions of the older ones. You just move around the world, and what is close will load the high-detail versions, and what is far away will just free them. That's how you actually move around. Games like Assassin's Creed actually also use complex culling techniques for just showing very...
So for games that complex, I mean, you probably need a few dozen million dollars to make a game like that, you may want to write a custom rendering pipeline with better hardware occlusion culling or something like that. But for a regular team, one that is not one of the dozen big studios that can make a game like that, just using streaming
is going to be more than enough. Any more questions there?
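The LRU streaming scheme from that answer can be sketched in plain Python (a conceptual illustration with made-up names, not engine code): a fixed memory budget for high-detail meshes, with the least recently used ones freed first when the budget runs out.

```python
from collections import OrderedDict

class MeshStreamer:
    """Keep high-detail meshes within a fixed memory budget, LRU-evicted."""

    def __init__(self, budget):
        self.budget = budget
        self.loaded = OrderedDict()   # mesh_name -> size, in LRU order

    def request_high_detail(self, name, size):
        if name in self.loaded:
            self.loaded.move_to_end(name)   # recently used again
            return
        self.loaded[name] = size
        # Free the oldest high-detail versions until we fit the budget;
        # those objects fall back to their low-detail versions.
        while sum(self.loaded.values()) > self.budget:
            self.loaded.popitem(last=False)

streamer = MeshStreamer(budget=100)
streamer.request_high_detail("rock", 40)
streamer.request_high_detail("tower", 40)
streamer.request_high_detail("bridge", 40)   # budget exceeded, "rock" freed
print(list(streamer.loaded))                 # ['tower', 'bridge']
```

As the player moves, nearby meshes are requested in high detail and distant ones quietly drop back to their low-detail versions, which is the behavior described above.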
You mentioned in the title, you alluded to making the next blockbuster game using open source tools. Do you have any major projects or games under development using your engine? If you check, we have a showreel of games that are being made, and there are some really interesting projects. It's mostly indie projects. Godot is pretty small in community compared to the very well-known engines, but it's the fastest growing engine. If you check the global game numbers, it's doubling every year. We are not at the level of GameMaker or Construct. It's still small compared to Unity, not that small compared to Unreal, but it's growing much faster than any of the others. There are many nice things being made, if you check what the community makes. A big game published? I don't think so, because the new 3D engine
is really new, and making a very complex 3D game takes a long time. Until we see that kind of game made with the engine, a few years will probably have to pass. People are making nice things and it's growing very fast, so I hope it's going to change in the coming years. Hi, I'm not a 3D modeller, but I know someone working in the industry, and he doesn't like to recommend Blender to new people he's training, because the workflow is completely different from the other modellers that are typically used in the industry. Do you feel that's improving at all?
You mean that the problem is that they don't like to recommend Blender? Sorry, I didn't catch that. I didn't quite understand the question. Just the way in which you use it, even down to the keyboard shortcuts and just the way that you use it is completely different. I don't use it myself, so I'm not familiar,
but I thought it was an interesting point. Can you speak louder? The sound doesn't reach so well up here, sorry. Sorry. When he's training new people, he doesn't like to recommend Blender because the workflow is so different, down to the keyboard shortcuts
and just the way in which you use it compared to the commercial modellers. In general, the thing is, I think these kind of things are, in large part, generational. You can check it very well in the Godot community. Most Godot users are young people or people that have more will
to try something different. Older people who have been working in the industry for a longer time usually will not want to switch technologies. It's the same with Linux: it started with kids making an operating system, and 30 years later it's widespread and used by everyone. I think these kinds of things are mostly generational. At some point, when it grows enough, the others will have to give it a try
and they will have to adapt or die, I guess. At the beginning, it's always generational, in my opinion. I think Blender is perfectly capable for this. In Europe it's used quite a lot. Probably in the States not so much, because there they have the idea that they need to hire a company that sets up the computers with Maya. So they hire Autodesk, and Autodesk sets up all their computers for them, with a render farm and everything, just for what they need. This is more of a big American company mentality. As time passes, I think it's going to change, because this is more about generations than about people realizing that it's good, in my opinion.
Thank you. I just wanted to bring another point of view to this remark. I really think the contrary.
The industry should not dictate its standards or way of working. One of the reasons we are happy, and that's the way I use Godot, is trying to think whether we can find other workflows. Many artists use free software graphics and imagery precisely because you can intervene in the workflow. You can use it to rethink the way you organize things, and eventually transform the norm. So I agree that "generational" is a really good answer; that's one way of saying it. But we can change our workflow.
It doesn't have to be us adapting to the industry workflow; that is what is killing our graphics industry. Yes, I completely agree. One thing I have seen that impresses me so much, I will explain it quickly
because I know we have very little time, but it is really amazing. Usually, when you have an open source project, the contributors are mostly separated into those who create something, who are few, and those who add something to it and make it good. In Godot, what happens most is that we have a really big
community of contributors, and every feature that is added gets reworked, because someone comes along and says: we can do this much, much better if we scrap it and make it again. And it happens a lot. What ends up happening is that if you check the engine, there are a lot of workflows that are really amazing,
really well thought out. If you compare it to the commercial engines, many of their workflows are much worse than what Godot is doing, mainly because complaints are handled differently. If you are using, I don't know, Unity or Unreal, the community will complain that they don't like something, and that's it. In Godot, they will complain and they will make a pull request, and then others will criticize the pull request: hey, why don't you also make it this way? And it keeps changing and improving, and we have some discussions that may take months just to decide how to make something,
and in the end, after all the arguing, the best way of doing it that we have found is implemented, and it's really amazing. So when people try Godot, usually they are very surprised by many of the workflows, because they are very polished and very well designed.
Maybe they need to click, but once you learn them, it's like: wow, this is very well thought out. It's not like someone just did it as fast as possible. So I hope that kind of answer
sort of complements what you said. With the Godot engine, do you have to do a lot of testing for the multiple platforms? It's a multi-platform engine; do you have to do a lot of testing with different drivers, different vendors and hardware and everything, or is it pretty much: if you design it, it'll run on Windows and on Linux, it'll run everywhere? The problem we have now is that we also have OpenGL, and OpenGL is getting kind of deprecated by the industry, and the drivers are bitrotting; they are very complex drivers. So we are going to be moving to Vulkan for the next version, because OpenGL is kind of no longer viable. It's very broken on mobile, kind of unusable, so we had to go back to OpenGL ES 2 for a while, because OpenGL ES 3 is just broken.
I work on rendering and I have a lot of hardware, but until the community tests and reports the bugs, it's difficult to have it perfect. Usually this is pretty efficient; bugs get reported quickly and we fix them, and often they even submit the pull request with the fix themselves. Hi, I was wondering if you do any sort of texture packing
for your PBR resources. For example, you can have a normal map that is just two channels, reconstruct the third, and pack roughness and metalness into the remaining channels of the texture as well. I'm wondering if you do this, and if you could give some insight into which PBR equations you use in Godot.
You can write shaders as code, you can use the visual shader with nodes, and you have a very complete default shader material with a lot of parameters. It lets you assign every channel of a texture to whatever you want, so it's quite easy
if you just want to use different channels in the material. You can do it in many ways; it's very flexible. I was wondering how much time and with how many people... Can you speak louder? How much time did you invest, and with how many people,
into making that demo? He asked how much time and how many people were needed to create the demo. This demo was made in three weeks, if you put all the time together. With professionals, you can make something like this really quickly.
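Going back to the earlier question about texture packing for PBR: the idea of storing only two channels of a normal map and reconstructing the third can be sketched like this. This is plain Python and illustrative only; it says nothing about Godot's internal texture layout, and the function names are invented.

```python
import math

def pack_pixel(normal, roughness, metalness):
    """Pack (nx, ny) plus roughness and metalness into one RGBA texel."""
    nx, ny, _ = normal
    # Remap normal components from [-1, 1] to [0, 1] for texture storage.
    return (nx * 0.5 + 0.5, ny * 0.5 + 0.5, roughness, metalness)

def unpack_pixel(texel):
    r, g, roughness, metalness = texel
    nx, ny = r * 2.0 - 1.0, g * 2.0 - 1.0
    # A unit normal satisfies nx^2 + ny^2 + nz^2 = 1, so Z can be rebuilt.
    nz = math.sqrt(max(0.0, 1.0 - nx * nx - ny * ny))
    return (nx, ny, nz), roughness, metalness

n = (0.6, 0.0, 0.8)                     # a unit-length normal
texel = pack_pixel(n, roughness=0.4, metalness=1.0)
print(unpack_pixel(texel))              # recovers the normal and parameters
```

The payoff is one texture fetch instead of three: normal, roughness, and metalness all ride in a single RGBA texel, at the cost of a small reconstruction in the shader.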
Thank you very much for your talk. Thanks everyone.