
How we run GraphQL APIs in production on our Kubernetes cluster


Formal Metadata

Title
How we run GraphQL APIs in production on our Kubernetes cluster
Title of Series
Number of Parts
118
Author
License
CC Attribution - NonCommercial - ShareAlike 3.0 Unported:
You are free to use, adapt and copy, distribute and transmit the work or content in adapted or unchanged form for any legal and non-commercial purpose, as long as the work is attributed to the author in the manner specified by the author or licensor, and the work or content is shared, also in adapted form, only under the conditions of this license.
Identifiers
Publisher
Release Date
Language

Content Metadata

Subject Area
Genre
Abstract
In this talk I would like to share the workflow and tools we use to build, deploy and operate GraphQL APIs on our on-premise Kubernetes cluster. I will share code and command examples explaining how we are operating our applications since our recent transition from REST APIs on Web servers to GraphQL APIs containers on Kubernetes. This talk will not be about the difference between REST and GraphQL but focus on the workflow, tools and experience we gained in switching our run time environments and API models. At Numberly, we have built and are operating our own on-premise Kubernetes cluster so we will also be talking about its capabilities and share some of the experience we gained in doing so. Proposed agenda: - Our previous workflow and its limitations - How we designed our Kubernetes cluster, its capabilities and the choices we made - Developer workflow, environments management and deployment - Our GraphQL stack, featuring a sample application - What we're still working on to improve
Keywords
Transcript: English (auto-generated)
Hello everyone, thanks for coming. Do you hear me well? Okay. So in this talk I will share our stack migration experience, from the infrastructure to the developers, as well as from the organizational standpoint. I will talk about the motivations behind our Kubernetes stack change, the experience we gained in building our own bare-metal cluster, and how we made sure it got adopted by every kind of tech profile we have at Numberly. I will share some configuration examples, and I will also take this chance to showcase the GraphQL choices we made for building APIs. It's not a talk on GraphQL itself, but rather about those choices. I will also share code examples to showcase the demo app, and then I will demonstrate how they fit together, so that will be about the developer workflow.
Just a quick word about myself: you can find me almost everywhere as ultrabug. I'm a Gentoo Linux developer, that's my open source life; I maintain quite a number of packages, the MongoDB one for instance. And I'm a PSF contributing member, because I open source and work on some Python code. I'm also CTO at Numberly.

Before we begin, I wanted to share a story. When I submitted this talk, a colleague of mine came to me and asked: hey Alex, couldn't you have put more buzzwords in your title? So I felt obliged to answer this question, since you may also have wondered about it before deciding whether to come here or not. The answer is no.

So let's begin. The first thing I wanted to share is our previous workflow.
It has been up and running for more than five years at Numberly, and this is still how we work for some of our projects, but this talk is about the transition away from it and how it's being done. So: we have our friendly developers, and we use GitLab internally. That's where we have code repositories and configuration repositories; we keep them separate, so there are no secrets in the source code. That's also where we run continuous integration, code reviews, etc. It's basically what the developers, and also the project managers, interact with every day.

Then, to start the deployment and the orchestration of the projects we build in GitLab, the developers just have to create a YAML configuration file at the root of their repository; it's basically an Ansible-task-compatible YAML. We created a web interface that we call Deploydocus, which is linked to GitLab: you log into it, it proxies the GitLab SSO, and then you can see the list of projects you work on. You select one and execute an Ansible playbook that runs in the background. It connects to GitLab, fetches the source code and configuration repositories, merges them together, and then connects to all the bare-metal servers that are targeted and configured in the YAML file beforehand. There it creates Python virtualenvs, deploys the code inside them, and configures uWSGI, the nginx configuration files, and everything at once on multiple servers or clusters. If the project or service we are deploying is a publicly accessible one, we need the help of some network engineers to set up F5 load balancers, which also act as SSL-offloading proxies; all the SSL usually happens in the F5.
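The repository-root YAML might look roughly like this. This is a hypothetical sketch: the real Deploydocus schema and its field names are internal to Numberly and not shown in the talk.

```yaml
# deploy.yml at the repository root -- hypothetical field names,
# the real Deploydocus schema is Numberly-internal
project: my-api
targets:               # bare-metal servers to deploy to
  - web-cluster-1
  - web-cluster-2
python: "3.6"          # interpreter used for the virtualenv
uwsgi:
  processes: 4
  socket: /run/my-api.sock
nginx:
  server_name: my-api.example.com
```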
This is pretty cool, it's working very well, and it has been working very well for a long time for us, but there are still some limitations. The first obvious ones relate to the Deploydocus web interface and to the Ansible playbook running everything behind it: if GitLab changes their API for some reason, we have to fix it in all those orchestration environments, in Ansible and in Deploydocus, so it's a bit of work. It's not happening that often, to be honest, but when it happens you basically end up with a large herd of angry developers who can't deploy their things, so it's not that cool.

On the server side, the virtualenvs we are able to create on the bare-metal servers depend on the Python versions available there. That means we have operational maintenance to keep every bare-metal server that is part of the different web or application clusters up to date. Of course, we and the developers also depend on the network engineering team for this mainly (not fully, but mainly) manual SSL configuration whenever the website, the API, or whatever it is needs to be accessible on the web through an HTTPS URL.

Also, as you can see, it's based on virtual environments, mainly Python ones. So what if a developer needed a different kind of stack? That would mean the ops or DevOps people would have to make it available on the bare-metal servers as well. And there is no uninstall feature either. That's something we wanted for some corner cases, and it's not that important, but still: there is no "hey, just forget about it" button. There is also no performance isolation, at least no very strict one, so you can have problems when a developer deploys code that eats all the RAM of the node.

We could have kept all this and just reworked the orchestration a bit. You could ask yourself: why don't you take this whole nginx, uWSGI and virtualenv stack on the servers and just run it in LXC, Docker or rkt? You would be addressing what runs on every node. That's right, but it would mean we would still have to keep up with all the orchestration that makes this happen, so it only solves part of the problem.
When we felt it was the right time to move on, the Kubernetes ecosystem was already something that was, I won't say stable, but popular enough that the community behind it and the documentation were enough for us to go into it. We didn't want to have to maintain this container orchestration by ourselves, so we joined in the fun and decided we would build our own bare-metal Kubernetes cluster. That's what we did, and I'm now going to give you an overview of how we've done it.

The first thing was to actually build the bare cluster, so that's the methodology. Then we had to decide on the tooling, and when I say tooling it's mainly: how will the developers interact with the cluster? That is not a simple question; you have to take a stance on the level of abstraction you want to offer. We wrote documentation, because if it's not documented, it doesn't exist. Then we worked hard on making sure this new platform, this new way of working, was both adopted and supported; there are organizational ways to do this, and I will share a bit later how we did it. And then we distributed the expertise, so that expertise on the Kubernetes workflow and cluster is not held only by the people who built it in the first place.
A lot of our production clusters at Numberly run on Gentoo Linux; this is part of our deep-dive approach on everything we do. So we decided to continue with it, and it's also a good chance for us to get to know and understand all the bricks in the Kubernetes ecosystem, and there are numerous ones, and how they fit together. We built the cluster on Gentoo and leveraged our infrastructure-as-code way of approaching things: we already had a lot of Ansible playbooks operating all those machines I was talking about earlier, so we built on them and added full automation for deploying, reconfiguring and provisioning the machines of the Kubernetes cluster.

As our name says, we are obsessed with metrics and numbers, so we are extensive users of Grafana, whether the data comes from Graphite or Prometheus behind it, and we built dashboards to monitor how the cluster was doing in the early stages. Then we decided to adopt a developer-driven approach when designing our cluster, because our main goal in the first place was to remove friction.
Of course, that must not compromise security, and we will see the decisions we made to keep the right balance. One thing we adopted quite early: we didn't want too many abstractions. We decided to allow developers to interact with the Kubernetes cluster directly, so they have kubectl at their disposal; there is no overlay between the developer and the Kubernetes cluster.

That means we also took some security measures to make sure it didn't get out of hand. The first one: at Numberly we are using Google Suite, so every employee has a Google account, and this Google account offers OpenID authentication. The workflow to authenticate on the Kubernetes cluster is just to go to a kubeconfig page and log in as usual, using the Google Suite account they already have. We get a free MFA second factor thanks to the Google account, and every developer and employee at Numberly has a YubiKey for this. Then, through the Gangway project, they are provided a kubeconfig that they just have to download, and they're set up to start interacting directly with the Kubernetes cluster.
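The user entry of a Gangway-generated kubeconfig typically looks like the sketch below; the issuer, client ID and account name are illustrative, and the tokens are placeholders.

```yaml
# Illustrative kubeconfig "user" entry as produced by Gangway
# (names and tokens are made up)
users:
- name: jane@numberly.example
  user:
    auth-provider:
      name: oidc
      config:
        idp-issuer-url: https://accounts.google.com
        client-id: kubernetes.example.apps.googleusercontent.com
        client-secret: REDACTED
        id-token: eyJhbGciOi...      # short-lived OIDC token
        refresh-token: REDACTED
```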
Then we have to handle authorization and permissions, and for this we already had a nice workflow on GitLab: we have everyone on GitLab, with groups and roles. So we decided it might be interesting to map all those permissions and groups to Kubernetes, and there was no project doing this, so we open sourced our own. It's called gitlab2rbac. The principle is that a namespace in Kubernetes relates to a team, and a team relates to a group in GitLab; that's how it was already working. The project just continuously maps the GitLab groups, users and their permissions to Kubernetes namespaces and RBAC. So we don't have two separate authorization and permission systems to operate: we just do everything on GitLab and it replicates to Kubernetes.
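The net effect is that a GitLab role ends up as a Kubernetes RoleBinding in the team's namespace, roughly like this sketch (the user, group and namespace names are made up; which ClusterRole a GitLab role maps to is a deployment choice):

```yaml
# Illustrative RoleBinding that a gitlab2rbac-style sync would maintain:
# a GitLab "developer" of group "team-data" becomes an editor of the
# matching namespace
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: gitlab-developer-jane
  namespace: team-data
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: edit                  # built-in ClusterRole granting read/write
subjects:
- kind: User
  apiGroup: rbac.authorization.k8s.io
  name: jane@numberly.example
```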
To give you an overview of the cluster capabilities and the choices we made: GitLab also offers an image registry, which we of course leverage; that's where the images are downloaded from when they are deployed on Kubernetes. We enforce some QA on this, security QA: only whitelisted images can be deployed, because we don't want any random image from the web running on the Kubernetes cluster. We also enforce from the start that containers run as non-root: no container can run on Kubernetes if it's running as root. And we have strict network policies, NetworkPolicy objects that regulate how pods can talk to each other, developers to pods, or the internet to pods; basically we disallow almost everything unless it's coming from the ingress.

Speaking of ingress, we are using the nginx ingress provided by the Kubernetes ecosystem, and we added a fully automated Let's Encrypt certificate lifecycle. As we'll see later, this gives developers a free HTTPS endpoint with just two or three lines of configuration.
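As a sketch, a default-deny NetworkPolicy plus an Ingress with automated TLS typically look like this; the names and the cert-manager annotation are illustrative, and the exact issuer setup depends on how the cluster automates Let's Encrypt.

```yaml
# Default-deny: block all ingress traffic to pods in the namespace
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
  namespace: my-team
spec:
  podSelector: {}
  policyTypes: ["Ingress"]
---
# Ingress with an automated Let's Encrypt certificate (cert-manager style)
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-api
  namespace: my-team
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt-prod
spec:
  tls:
  - hosts: [my-api.example.com]
    secretName: my-api-tls
  rules:
  - host: my-api.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: my-api
            port:
              number: 80
```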
Multi-tenant cluster means that we decided, for a start (maybe it will change over time), to have all the environments inside the same cluster. We don't have a Kubernetes cluster for development, one for staging and one for production: it's a multi-tenant cluster for all environments, so a production pod can run next to a development one. There's no real consensus in the Kubernetes ecosystem yet about this strategy; our approach was that we were rolling something out, so we wanted to leverage the resilience of the machines and the simplicity of the workflow we provide.

To help people get acquainted with the cluster, we also created a special sandbox namespace that basically allows anyone who is authenticated to do anything, and it's wiped every day. You don't have to read the slide, I just put it there for reference so you can see how we wipe it every day; it's just for testing.

We don't have distributed persistent storage yet. That doesn't mean we don't provide persistent storage, but it's a simple one through NFS, so for now it's really mainly about stateless applications. We won't be hosting databases on Kubernetes yet; maybe it will come, I don't know. And when you are a bit obsessed with security, there is a good benchmark provided by the CIS.
Of course we made sure our cluster passed it. Then you have to write good documentation. I'm showing here the topics we felt we had to work on to get covered. When you enter this space, not all your techs may know what a Dockerfile is, so you need to kick-start them on Docker, on Kubernetes, on deployment, etc.; we leveled this a bit. The idea behind this, and the trap I hope we didn't fall into, is not to rewrite the Docker documentation or the Kubernetes documentation. Instead, this documentation is a practical one, making references where needed, but practical: get your hands into it and go step by step. It has been really healthy and helpful for developers to get their hands on the Kubernetes cluster quickly and have concrete results, and here the sandbox namespace helps a lot, because they can try and learn in it; everything we ask people to test is based on the sandbox namespace. This is why we put a lot of effort into building this: it's where your adoption lies. Speaking of adoption, you have to foster it, and then you have to scale it.
At Numberly we have multiple teams, multiple poles, but they share the same core profiles, back-end developers for instance; you have back-end developers in several teams. In all those teams we wanted to make sure there was someone identified, and valued, as being able to help and give support, so that the people who built the cluster in the first place are not the only reference; we were starting to get spammed, and it doesn't scale that way. So we created our internal Kubernetes certification. For the people who take this certification, we can make sure they have basic but still strong enough knowledge to support the people around them. And this is also a nice way, I think, to value the expertise of team members.
A quick takeaway on the Kubernetes side. We use GitLab for RBAC and for the image registry with Kubernetes; the project is called gitlab2rbac, you can check it online, and we would be very happy if it's useful to you. It's written in Python. You always have to balance security versus freedom; they are not opposed at all times, but it's still something you have to take into account: give freedom, but not so much that it can put your company at risk. That's why you have to enforce the security and QA rules from the start; it was important for us, and I guess for anyone starting down this path. For now we get reports on non-whitelisted images running, and we are working to make that enforceable from the start as well.

What I like very much, and what we value very much in this approach, is that ops can now concentrate on adding features to the cluster that developers can leverage in their day-to-day work, and I think this is really nice. Instead of thinking cluster by cluster, we can now see our Kubernetes cluster as a set of features we can use. Having practical documentation helps a lot, and to spread expertise, maybe a certification is a good trick.
Maybe we will create more certification levels later. So how does it look now? It basically looks like this. We removed the configuration repositories: secrets have moved to Kubernetes secrets, and to Vault for some projects; we are not entirely stable on this yet, so that's something we're still working on. User roles are mapped to Kubernetes RBAC, and groups to namespaces. The Docker image registry is on GitLab, and now, instead of having the web interface at the bottom, we just let our developers run kubectl commands to interact with the cluster, which in turn orchestrates the pods. With the nginx ingress we have free Let's Encrypt endpoints for the projects that need them, and we still need to work on automating the F5 SSL offloading for the public domains. We deploy a lot of projects every day, so you might wonder: why don't you just go all-in on Let's Encrypt and drop this F5 thing? It's because we face some limitations from our clients that force us to support, let's say, not-so-up-to-date browsers, so we have to be able to sit in between; that's especially true when you work for banks.

Anyway, now let's try to build a GraphQL application on this Kubernetes cluster, and then we'll finish with the workflow that makes it happen. For the demo app, the source code is provided as well.
I thought it would be a nice idea to demo how you can proxy, let's say, the Trello REST API through a GraphQL endpoint: you interact by issuing GraphQL queries that get turned into Trello REST API queries.

The first thing you ask here is: how do I do GraphQL in Python? Usually the answer is Graphene, which is the most popular library for GraphQL in Python. At the time we were asking ourselves this question, it did not support asyncio, and we are big asyncio lovers, so that was kind of a problem. The other problem was the design approach of Graphene, where you basically describe your GraphQL schema as code, as classes, and so on. But in the GraphQL ecosystem, as we will see later, there are other ways, and most importantly language-agnostic ways, to do it. That's why we didn't go for Graphene; we went for this instead.
For the non-French folks around here, this is called tartiflette. It's a mountain dish: basically potatoes, cheese and cream; potatoes with cheese, with cream, with potatoes and cheese, and a bit more cheese on top, because the top must be cheese. And if you are very hungry you can add lardons in it, but that's a plus.

So the project itself is called Tartiflette, and by now you understand that the core developers are French; they are the folks behind Dailymotion, and they're doing great work. What I especially like in Tartiflette is that it's modern Python: it's fully built on asyncio, in a good way I think, and it has a schema-first design, based on the Schema Definition Language. This means you express your schema using the GraphQL SDL only, which is completely agnostic to the language; then you just point the Tartiflette engine at this raw flat file and it loads the entire schema. You don't have to express it using code, classes or Python objects; you express it in a way that everyone in the GraphQL ecosystem can understand, you feed it to the engine, and you're good to go. We'll also see that they offer an aiohttp integration, and they embed a GraphiQL development web interface to help you, so it's pretty developer friendly. And it tastes very good. This is what the SDL looks like.
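A minimal SDL along the lines described next would look like this; the field names follow Trello's member object, but the exact contents of the slide may differ.

```graphql
type Query {
  member(id: String!): Member
}

# Mirrors Trello's REST "member" object: plain scalars plus board IDs
type Member {
  id: String
  fullName: String
  idBoards: [String]
}
```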
So you define a query, and then you define types. Here I'm defining the type Member, which refers to the member object in the Trello REST API, that is, you or someone on Trello. Then you have its properties, each with an associated scalar type. This is not Python, this is not any particular language: this is the SDL, the standard way of defining schemas in GraphQL, and it can be understood by any language or library.

What's interesting as well: if you look at the Trello API documentation, you see that the member object has a property listing the IDs of its boards. That means when you query a member, you get the IDs of the boards, not the details of the boards themselves. So when you use the Trello REST API, if you just wanted to display each board's name, you have to get the member, get the list of board IDs, and then for each board make a single query with its ID to the boards endpoint and get the name out of the response. That's how you would do it in REST. GraphQL allows you to abstract this, because there is a GraphQL server behind it: you only have to add a boards edge, which is a list of Board objects. This is how you present it in your GraphQL endpoint, and then the machinery, or the magic, you have to implement makes sure your GraphQL endpoint does all those REST calls for you. So for the front end, or the initial query, you have one query that ends up being three queries to the Trello REST API. That's one of the key features of GraphQL, and you can see it explained here.

Show me some code now. How do you create this? You have the generic SDL in the middle; you create the engine and pass it the path to the raw SDL file that lives in your project, and that's all. It gets validated, and then your engine is ready to receive queries.
When you then issue a query for a type inside your GraphQL schema, the engine needs to know some resolvers that are able to get the data you are asking for; that's what the import of Resolver is about. Writing resolvers in Tartiflette is just a simple decorator pointing to the node of the schema you're targeting. If you remember the query, we have the type Member: you just create a simple `async def` function, decorate it with the resolver, and that's all. You return a dict object with the properties, and if one of those properties represents an edge, the engine will ask for you: hey, do I have a resolver for the boards edge? Because for now it just has the IDs; it needs to go look up those IDs to get the names, which were not provided on the first call. It orchestrates everything: it iterates through the graph based on what got queried and just calls the resolver functions. Super easy, and it does it concurrently as well, so it's also quite fast, and it's really easy to reason about. Here you see that I get my boards from the idBoards that got returned in the JSON from Trello, then I just look up each board ID, get the name, and return the object that Trello returned. I didn't have to filter either, because the filtering is already done by the GraphQL engine.
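The pattern can be sketched without any GraphQL library, using only asyncio: a query resolver returns the member with board IDs, and an edge resolver fans out concurrently to fetch each board's name. The `fetch_*` functions below are stand-ins for the Trello REST calls (no network, made-up data); with Tartiflette you would instead decorate real resolvers with its `Resolver` decorator.

```python
import asyncio

# Stand-ins for Trello REST calls (hypothetical data, no network)
async def fetch_member(member_id):
    # GET /1/members/{id} -> member carrying board IDs only
    return {"id": member_id, "fullName": "Jane", "idBoards": ["b1", "b2"]}

async def fetch_board(board_id):
    # GET /1/boards/{id} -> board details
    return {"id": board_id, "name": f"Board {board_id}"}

async def resolve_member(member_id):
    # Query-level resolver: one REST call, returns IDs, not board details
    return await fetch_member(member_id)

async def resolve_boards(member):
    # Edge resolver: one REST call per board ID, run concurrently
    return await asyncio.gather(*(fetch_board(b) for b in member["idBoards"]))

async def query_member_with_boards(member_id):
    # What the GraphQL engine orchestrates: 1 query -> 3 REST calls here
    member = await resolve_member(member_id)
    member["boards"] = await resolve_boards(member)
    return member

if __name__ == "__main__":
    result = asyncio.run(query_member_with_boards("me"))
    print([b["name"] for b in result["boards"]])  # ['Board b1', 'Board b2']
```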
So that's all: one query resolver and one edge resolver. Okay, now let's ship it. The first thing you need is a Dockerfile, and this one is a demonstration of a multi-stage build, to get a smaller image at runtime. I find it very helpful, so I'm providing it for you to come back to. As you can see, you also have to enforce the nobody user to run your application. That's basically how it's built.
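A multi-stage Dockerfile of the kind described could look like this sketch; the base images, paths and module name are assumptions, not the talk's actual file.

```dockerfile
# Hypothetical multi-stage build for a Python app: build dependencies in
# a throwaway stage, copy only the virtualenv into the runtime image
FROM python:3.7-slim AS builder
WORKDIR /app
COPY requirements.txt .
RUN python -m venv /venv && /venv/bin/pip install -r requirements.txt

FROM python:3.7-slim
COPY --from=builder /venv /venv
COPY . /app
WORKDIR /app
# Run as the unprivileged "nobody" user, as required by the cluster policy
USER nobody
CMD ["/venv/bin/python", "-m", "myapp"]
```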
The build relates to the current Git branch. I provide a simple script, which you could run in a hook, just to showcase how to build and push the image to your GitLab registry based on the branch you are working on. The development branch gets you a development instance, a pod, on Kubernetes; the staging branch gets you a staging pod; and for production it's the master branch plus a Git tag, so it's a bit more complicated than just the bash here, but that's how we do it easily.
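The branch-to-environment mapping just described could be sketched as a small shell function. The registry path and project name below are made up, the real script also handles the master-plus-tag case for production, and the docker commands are left commented out since they need registry credentials.

```shell
#!/bin/sh
# Hypothetical sketch of the per-branch build script from the talk.

tag_for_branch() {
    # Map a git branch name to an image tag / target environment.
    case "$1" in
        development) echo "development" ;;
        staging)     echo "staging" ;;
        master)      echo "production" ;;   # real script also requires a git tag
        *)           echo "dev-$1" ;;       # feature branches get a dev- prefix
    esac
}

TAG=$(tag_for_branch "${1:-development}")
echo "$TAG"
# Build and push would then be:
# docker build -t registry.gitlab.com/acme/graphql-api:"$TAG" .
# docker push registry.gitlab.com/acme/graphql-api:"$TAG"
```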
Now you have to deploy to Kubernetes, and for this you create a deployment YAML. I trimmed it a bit because they are quite verbose. You can see that in the deployment we also enforce running as nobody, and then we take the secrets from Kubernetes Secrets and provide them to the code as environment variables; that's how it's done. On the developer side, you can also ask for a Let's Encrypt SSL endpoint with the domain that you want, and it will be created for you, for free. So, I'm crazy enough to do a quick demo.
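Before the demo, here is a trimmed, hypothetical sketch of the manifests described above. All names, the image path, the secret keys, the domain, and the cert-manager issuer are placeholders, and the automatic Let's Encrypt certificate assumes cert-manager is running in the cluster.

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: graphql-api-development
spec:
  replicas: 1
  selector:
    matchLabels:
      app: graphql-api
  template:
    metadata:
      labels:
        app: graphql-api
    spec:
      securityContext:
        runAsUser: 65534        # uid of "nobody": never run as root
      containers:
        - name: graphql-api
          image: registry.gitlab.com/acme/graphql-api:development
          env:
            - name: TRELLO_API_TOKEN   # secret exposed as an env var
              valueFrom:
                secretKeyRef:
                  name: graphql-api-secrets
                  key: trello-api-token
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: graphql-api
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt   # assumes cert-manager
spec:
  ingressClassName: nginx
  tls:
    - hosts: [api.example.com]
      secretName: graphql-api-tls
  rules:
    - host: api.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: graphql-api
                port:
                  number: 8080
```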
Yeah, with my HHKB keyboard. Are you ready? Okay, let's go. Basically you type three commands; I can do it without my hands. You just build and upload: this builds the image and uploads it to GitLab. Then you can see that for now nothing is running, there is no deployment for our project. So we apply the development deployment, and here you can see it being created on Kubernetes. It's there, but not ready yet: zero out of one. Let's see if there is a service: yes, we have an IP for our service. Is the pod created? Not yet... now it's created, it's running, and now it's ready. That's all, and I have my SSL as well.

Takeaways on GraphQL: it removes friction and it helps teams collaborate, because it gives you a spec, and so it normalizes how data is addressed and communicated between teams. Having an SDL approach makes people concentrate on the data, which is really important, and not on the code. Tartiflette is really modern and has this SDL approach, and I think it's very good, so give it a try.
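The "spec" that teams share is the schema itself. A hypothetical Trello-like SDL matching the example from earlier in the talk (field names are assumptions) could be as small as this, and it is the contract both sides agree on, independent of any implementation:

```graphql
type Board {
  id: ID!
  name: String
}

type Member {
  id: ID!
  fullName: String
  boards: [Board]   # edge: resolved by its own edge resolver
}

type Query {
  member(id: ID!): Member
}
```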
We have a workflow for environment deployment based on Git branches. Maybe we will challenge the multi-tenancy of the cluster later, as I told you before, and that may have an impact on this. The secrets are shared with applications as environment variables, and we still have to work on generalizing Vault and on giving power to the developers. We decided to give them kubectl as their main tool to interact with the cluster, so maybe at some point, when adoption grows, we will allow some other abstractions, such as Helm, to interact with it. But for now it's working pretty well. And that's it: you have all the source code here, you can reach me here, and I think we still have some time for questions. Thank you very much.
If anyone has a question for Alexis, please come to the microphones in the aisles.

Thank you for the presentation. I wanted to ask why you decided on using bare metal instead of either a cloud provider for the servers, running Kubernetes on your own, or fully managed Kubernetes as a service?

Because most of our applications interact with data, and this data is on our own infrastructure. So we have a hybrid approach with the cloud, not a fully cloud-based one. We come from the bare metal approach, and this is something that, first, we value very much, because we value the skills of the people who work with us: our own machines, our own skills. It's also something we have to cope with, because it requires some extra work, of course, but mostly it's because all the data that lies behind these applications is also hosted on our machines.

Okay, thank you. I really liked your documentation page, that was really great. Do you use some tools to deploy the YAML files, or do you just use kubectl?

Just kubectl, like you saw there.

But since you have staging and prod, three environments, do you create three YAML files per service?

Exactly, exactly.

Thank you.
Oh yeah, I was wondering: you have the ingress nginx; do you actually host nginx inside each of the pods that run Python as well?

No, we have a separate namespace for all the ingresses, because we apply network policies between the namespaces as well.

Okay.
Thank you.

I don't think your mic is working, I'm afraid. I'm sorry, can you repeat? I didn't hear you.

Hi. Since you're operating your own Kubernetes cluster, have you considered using OpenShift instead?

No.

You haven't evaluated or benchmarked that solution?

No, we didn't evaluate it. We have a quite deep-dive approach: we wanted to operate Kubernetes, that's for sure, and we wanted to operate it with nothing else on top, because I think that's easier to install locally on our bare metal clusters.

Maybe, but I'm not sure it is provided in Gentoo Linux.

No, there are YAML scripts to install it. But that doesn't fit with how we operate our own infrastructure today, so for us it's just more natural to go for the packages themselves, and then for every brick, because we already have the whole Ansible toolset at our disposal.

Okay, thank you for the answer.

You're welcome.

Thank you so much, Alexis. Let's have a hand for him.