
Testing for and deploying to AWS environments: a toolbox


Formal Metadata

Title: Testing for and deploying to AWS environments: a toolbox
Number of Parts: 19
License: CC Attribution 3.0 Unported. You are free to use, adapt and copy, distribute and transmit the work or content in adapted or unchanged form for any legal purpose as long as the work is attributed to the author in the manner specified by the author or licensor.

Content Metadata

Abstract: This is an overview of the setup we used when building DeckDeckGo. We used Nix to test and deploy code to AWS Lambda, backed by Amazon's Simple Queue Service, DynamoDB, S3 and RDS PostgreSQL.
Transcript: English (auto-generated)
So the next talk is from Nicolas Mattia, and he's going to be talking about testing for and deploying to AWS environments. Thank you. Hi everyone again. So I learned an important life lesson at NixCon this year.
If you're going to a conference and aim at giving a talk, it's a good idea. But aim at only one. The consequence is this one's going to be much shorter, so feel free to interrupt, ask questions during the talk, share your experience, and hopefully we'll make it to half an hour.
During most of my career, I had someone dealing with the deployment for me, and I didn't have to care about it at all. Until I started this side project called DeckDeckGo, which is a presentation software. And then all of a sudden I was alone to do my deployment, I had to actually deal with setting up Postgres and everything, and I had to learn about it.
So, really liking Nix, I tried to push as much of the complexity as possible into Nix, and I didn't really want to use Docker-based software for building or deploying. Thank you very much.
And yeah, so this is the story of my journey making Nix work for AWS. So first a bit about DeckDeckGo, which is the presentation software that I'm actually using today. So the front end is Web Components and TypeScript. Web Components is a W3C standard for basically creating new HTML tags that have some JavaScript logic in them.
I have no idea how it actually works, so this is not my job. My job is the rest, the back end. So the back end was entirely written in Haskell, and for deployment and the build we used Nix.
And actually pushing the artifacts, starting these three servers, it's all Terraform. I never quite understood NixOps, so no NixOps there. From AWS we're using AWS Lambda, which is basically: you push some code and it runs somewhere. You don't have to create a machine, you don't have to set anything up,
it's just your code is there, and whenever there's a request arriving, it's being run. S3 for storing presentations. SQS, which is a queue service from Amazon that we use for different Lambdas to talk with each other. DynamoDB, we actually got rid of that, but at the beginning we used it, and the setup in Nix is kind of interesting, so we decided to share it.
And RDS, which is the relational database service of Amazon. So, if you want to check out DeckDeckGo, it's fully open source on GitHub, deckgo slash deckdeckgo. It's a whole bunch of JavaScript, so that might be a bit scary, but there's some Haskell and some Nix.
All the code I'm going to show during the presentation can be found in this directory here. So feel free to have a look. Now, as I said, I didn't have much time to prepare this talk, so I'm missing one slide, which is the last one. And it's actually quite convenient because I can show you how DeckDeckGo works.
You have a set of templates, you can select one. I'm going to have a last thank you, and there you go.
So, the first part is going to be the actual Lambda part. So I have this Haskell code, and this Haskell code needs to run somewhere in AWS.
And for this, Lambda is great, because Lambda is really just this abstraction: you don't have to start the server, you don't have to stop it. The problem is that when you build stuff in Nix, most of the time you need a Nix store. If you use NixOS, it's very simple: just copy the closures, activate, and that's it. On Lambda, you have a very limited size; I think what you push to AWS can only be 50 megabytes,
so you can't fit a Nix store in there most of the time. You can't have the GHC closure with it. So the answer here is to use fully static Haskell executables, where there's no dynamic linking at all; you don't even have an interpreter bundled in your executable.
And there's one guy here, Niklas, where is he? Over there. Big applause for him, who made amazing work on getting this to work. It's kind of a very nice project, because it's Nix, and yet it allows projects to live outside of the Nix store.
So you have these standalone artifacts, and it's using Cachix, so it's really a lot of the community coming together on this one. There's a funding page somewhere, you can find it on the GitHub project, nh2/static-haskell-nix. So feel free to chip in there.
Now, so we build these Haskell executables, and we just put them in a zip file, and the zip file is sent to AWS, and it just works. The actual upload is done with Terraform. So how does this static-haskell-nix work in practice? Most of you do Haskell here, and this is using the legacy Haskell infrastructure in nixpkgs, not the new haskell.nix.
And I just want to show you how you can make any executable static, pretty much any. So this static-haskell-nix thing is basically just where Niklas's project is, and there's a survey directory, which you can just import, passing it your normal packages.
In this case, my normal packages are just nixpkgs with some overlays, adding DeckDeckGo's custom packages. And then, this is crazy, because on line 16, you can see, you just take the survey's Haskell packages with static Haskell binaries, and there you go.
You have your Haskell packages that actually compile to static, fully static executables. This is beautiful. And then when you create your lambda, you just copy an executable. For instance, this one. There are a few bugs, right, so it might break at times. Just copy the executable, zip it up, and that's it.
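The copy-and-zip step he describes can be sketched roughly like this. The binary here is a stand-in for the nix-build output (the real one would be something like `result/bin/handler`), so the paths and names are placeholders, not the talk's actual code:

```shell
# Sketch: package a (statically linked) executable for upload to AWS Lambda.
# The executable below is faked for illustration; in the real setup it comes
# out of nix-build. Custom Lambda runtimes expect the entry point to be an
# executable named "bootstrap" at the root of the zip.
workdir=$(mktemp -d)
printf '#!/bin/sh\necho "hello from lambda"\n' > "$workdir/bootstrap"
chmod +x "$workdir/bootstrap"
if command -v zip >/dev/null 2>&1; then
  (cd "$workdir" && zip -q function.zip bootstrap)
  echo "created $workdir/function.zip"
else
  echo "zip not installed; skipping (illustrative sketch)"
fi
```

Terraform then picks up that zip file and does the actual upload, as described next.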
Any questions on this? Great. And then, the next question is, OK, we have some stuff that's being built with Nix, but how do we teach Terraform to reuse that? And on the left, there's a weird thing.
So this function handler path: path equals builtins.seq something, and then the function.zip. The idea is that Terraform has this data external resource, or it's not a resource, it's actually data, where you can tell it: hey, Terraform, just run this command, and you can expect this command to output JSON.
And then you can use this JSON in Terraform as well. So, lines 3 to 9, this part here, are just the Lambda description. And the file name is the zip file that's expected by AWS. And here, this file name refers to the data external build function's path, which is defined on lines 12 to 19.
And most of the time in Terraform, you have to say: oh, Terraform, please recreate this resource if the file hash has changed, for instance, or if the time of day is later than something like that. So we have weird ways of making sure that Terraform notices when your code changes. And with Nix, it's not a problem, because the entire file name
is going to change whenever you change the code. So how this works is that we do a nix eval. It's basically going to evaluate something, and tell Nix to actually print the output as JSON. So this is very, very cool, very convenient, because you don't have to have any other commands that you run.
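As a sketch, the command Terraform's external data source runs has to print exactly one JSON object on stdout. The store path below is a placeholder; in the real setup it would come from nix eval or nix-build:

```shell
# Sketch of the command behind Terraform's "external" data source.
# Terraform runs it and parses the single JSON object it prints.
emit_function_path() {
  # Real version would do something like: nix-build --no-out-link -A function
  out="/nix/store/placeholder-function/function.zip"  # placeholder path
  printf '{"path":"%s"}\n' "$out"
}
emit_function_path
```

Because the emitted path is a Nix store path, any code change produces a new path, which is what makes Terraform notice the change for free.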
Just call Nix, output JSON, and that's it. The weird part, which is here, is just to make sure that your function is actually being built. It's like a deepSeq, because this is just an eval, right? Nix will try not to do any build,
and this one will give you a path back, but the path might not exist yet. So you just do a bit of a dirty trick here to make sure, it's basically import from derivation to make sure that the thing exists. Now, I'm going to go into the AWS services themselves.
So Lambda, for running the code, and now we'll talk about S3 and the rest. So the talk is deploying to AWS, but also testing for AWS. So I think this is the interesting part, because when you ship some code, you deploy, you don't always want to run a staging environment,
where you run your integration tests. So what we're going to do here is just, for each and every AWS service, we're going to try and find either an open source alternative, or some jars provided by AWS, some form in which we can execute the services locally
inside our Nix build. And then we just redirect the URLs during the tests to the local servers, and we'll repeat for the next service. So we're going to do this for a few services. First one is S3. So you probably all came across MinIO. Who's seen it before? Okay, so MinIO is an open source clone of AWS S3.
It has its own protocol, but it also speaks the S3 protocol. And it's a nice project. Works for my use case, which is testing. I heard some people say that it was working great as a full production replacement, and I heard some people say that it wasn't that great
as a full production replacement. So it depends a lot. But for testing, it works just fine. And how this looks is very simple. You add MinIO as an input to your derivation. You set some dummy environment variables, because it requires them. And you start the server.
You say localhost:9000 for the port. Give it a temporary directory where it can actually store its artifacts. And that's it. It's running, and you run your integration tests. The last thing you need to do is actually tell your code to use localhost as opposed to the canonical AWS URL.
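A rough sketch of that test setup, assuming the `minio` binary is on PATH (in the talk it is an input to the derivation); the credential values are dummies, and the block skips gracefully when MinIO isn't installed:

```shell
# Sketch: run MinIO as a throwaway local S3 for tests.
# Newer MinIO releases read MINIO_ROOT_USER/MINIO_ROOT_PASSWORD;
# older ones used MINIO_ACCESS_KEY/MINIO_SECRET_KEY.
export MINIO_ROOT_USER=dummy-user
export MINIO_ROOT_PASSWORD=dummy-password-123
minio_data=$(mktemp -d)
if command -v minio >/dev/null 2>&1; then
  minio server --address 127.0.0.1:9000 "$minio_data" &
  minio_pid=$!
  sleep 2
  # ... point the test suite at http://127.0.0.1:9000 and run it here ...
  kill "$minio_pid" 2>/dev/null || true
  minio_status=started
else
  echo "minio not installed; nothing started (illustrative sketch)"
  minio_status=skipped
fi
```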
In the case of Haskell, I'm using Amazonka. And you can give your own HTTP manager to Amazonka and just tell it: hey, listen, if you see s3.amazonaws.com, just redirect to the local one. Disable HTTPS, and that works. Make sure to only use that during your tests,
and not in production, of course. Next one. Oh, questions about S3? All right. Next one is the Simple Queue Service. So this is just for sending messages between Lambdas. It's an AWS service.
It works fine on their server. But for this one, they don't provide artifacts or they don't provide a way of running it locally unless you use Docker. But there is an alternative one, which is ElasticMQ. Very much like MinIO, it's an open source clone.
But it speaks the SQS protocol. So what we do is that we just grab the artifacts that they release on GitHub. It's a jar. We just Java it, and it runs. So I feel a bit dirty inside for starting Java on my laptop,
but as long as I don't have to start Docker, right? Wash, rinse, repeat. Just as we did for S3. Replace the host, replace the port, disable SSL, and we're good to go.
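The ElasticMQ step could be sketched like this; the jar path is a placeholder (in the talk, Nix fetches the release jar from GitHub), and the default SQS-compatible port of 9324 is assumed:

```shell
# Sketch: run ElasticMQ as a local SQS stand-in.
elasticmq_jar="/path/to/elasticmq-server.jar"  # placeholder; Nix would fetch this
if [ -f "$elasticmq_jar" ] && command -v java >/dev/null 2>&1; then
  java -jar "$elasticmq_jar" &
  elasticmq_pid=$!
  sleep 2
  # ... run tests against the local SQS endpoint (default http://127.0.0.1:9324) ...
  kill "$elasticmq_pid" 2>/dev/null || true
  elasticmq_status=started
else
  echo "ElasticMQ jar or java not available; nothing started (illustrative sketch)"
  elasticmq_status=skipped
fi
```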
DynamoDB. Who here has heard of it? Yeah. For the others, it's basically like Redis. It's a very simple table-format database. And on this one, AWS is actually pretty cool, because they do provide ways of running it locally.
You can download the jar, which you can just start on your laptop. By the way, all these services, even though they use the network, they never require anything like sudo. So that means that everything can just run in a derivation. It's actually very nice to have your tests running fully sandboxed.
If someone else in your company has run the tests before, they're going to be cached in your shared cache, if you have one. You don't even have to run the tests yourself. So here you just grab one of those tarballs. You unpack it in your derivation.
And you just say, okay, Java start. You have some options to set the port. And after that, your new integration tests. And you don't forget to tell Amazonka to use your local version of DynamoDB. Questions for this one? Great.
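The DynamoDB Local invocation he describes could look roughly like this; the unpacked directory is a placeholder (AWS distributes a tarball containing DynamoDBLocal.jar and its native libraries):

```shell
# Sketch: run Amazon's DynamoDB Local jar for tests.
dynamodb_dir="/path/to/dynamodb-local"  # placeholder for the unpacked tarball
if [ -f "$dynamodb_dir/DynamoDBLocal.jar" ] && command -v java >/dev/null 2>&1; then
  java -Djava.library.path="$dynamodb_dir/DynamoDBLocal_lib" \
       -jar "$dynamodb_dir/DynamoDBLocal.jar" -inMemory -port 8000 &
  dynamodb_pid=$!
  sleep 2
  # ... run tests against http://127.0.0.1:8000 ...
  kill "$dynamodb_pid" 2>/dev/null || true
  dynamodb_status=started
else
  echo "DynamoDB Local not available; nothing started (illustrative sketch)"
  dynamodb_status=skipped
fi
```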
Now what about Postgres? So Postgres is actually interesting, because the exact same Postgres, or mostly the same Postgres, is going to be running on AWS. And for many, many years in my life, I thought: okay, I have tests that run against Postgres, so that means I need to install Postgres on my laptop. I need to install it through Ubuntu, or as a service in NixOS.
But you really don't have to do that. And this was a eureka moment for me. Postgres can use any kind of directory, and it runs as a background process if you want to, but it doesn't have to be a system-wide background process. That means that you can even start Postgres in your Nix shell,
or you can start Postgres inside a derivation and just kill it at the end, and you don't have to tamper with your system-wide Postgres. So you just tell Postgres: hey, just initialize the database in pgdata. This is just a name I give it. You have some configuration to set. But that's about it.
Then you tell it, all right, start. And from there on, you have Postgres started. You just make sure that before you do anything else, you give it enough time. Then you create the database that you're going to need for your tests. And that's it. Run your tests. At the end, you say, all right, immediate stop, and no traces left of Postgres in your system.
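The init, start, create, and immediate-stop sequence can be sketched like this, assuming the Postgres client tools (initdb, pg_ctl, createdb) are on PATH; the directory, user, and database names are made up for illustration:

```shell
# Sketch: throwaway Postgres in a temporary directory, no system-wide install touched.
pgdata=$(mktemp -d)/pgdata
if command -v initdb >/dev/null 2>&1; then
  initdb -D "$pgdata" -U tester --auth=trust >/dev/null
  # Listen only on a unix socket inside pgdata, so nothing system-wide is affected.
  pg_ctl -D "$pgdata" -w -o "-k $pgdata -c listen_addresses=''" start
  createdb -h "$pgdata" -U tester testdb
  # ... run the tests against the database here ...
  pg_ctl -D "$pgdata" -m immediate stop
  pg_status=ran
else
  echo "Postgres tools not installed; skipping (illustrative sketch)"
  pg_status=skipped
fi
```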
Everything clear here? The really cool part about that is that all these services are provided through Nix, and they can be started and stopped at will. All these services so far have used a temporary directory,
so they're not going to write anywhere else than in that temporary directory. So you can go one step further with this and say: well, I'm going to have a shell wrapper that's actually going to start my services whenever I develop locally. So if I don't want to do a full Nix build for my thing, maybe I'm using GHCi for development,
but I still need the services to be there. This is something I find very, very, very valuable is to have those small shell wrappers that gets initiated in a shell hook and just creates a few functions that you can call from your shell, from your command line. So here I have one. Oh, is it big enough?
Say something. Sorry? So this one is for loading Postgres, very simple.
And this is where the heavy lifting happens. This function is start-services. In terms of UX, if you have coworkers that don't use Nix or don't know about Nix, and they don't want to set anything up, you just tell them: well, enter the Nix shell, and when you want to start your services or when you want to run your tests, just call start-services.
It's going to load Postgres. It's going to start SQS as well. It's going to start S3. And then anything that happens after that is going to have access to all these services. And when they're done, they just call stop services.
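The shape of those shell-hook helpers could be sketched as below; the function names and service list are illustrative (in the talk the real ones are generated by Nix and brought into scope via a shellHook), and the actual start commands are left as comments:

```shell
# Sketch of shell-hook helper functions for local development.
start_services() {
  services_dir=$(mktemp -d)
  echo "starting postgres, minio and elasticmq under $services_dir (sketch)"
  # e.g.: initdb -D "$services_dir/pgdata" ... && pg_ctl -D "$services_dir/pgdata" start
  #       minio server --address 127.0.0.1:9000 "$services_dir/minio" &
  #       java -jar elasticmq-server.jar &
}
stop_services() {
  echo "stopping services (sketch)"
  # e.g.: pg_ctl -D "$services_dir/pgdata" -m immediate stop; kill the others
}
start_services
stop_services
```

The point is the UX: a coworker who knows nothing about Nix types two commands and has every backing service running locally.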
And everyone's going to thank you for this because most companies now still do, okay, you want to run the tests? Oh, you need Postgres. Just do sudo apt install Postgres. And with this, no need for that. People don't even need to know that Postgres exists. You might want to add a few things like a few REPLs.
Let's set environment variables. Here we actually need our tests to use some JavaScript build stuff. So set environment variable where there's some build JavaScript that was built by Nix. If you forget how you actually did the packaging, you don't have to worry about it.
I'm using this once a month. We started working in April, I think. One day I packaged it, and now I don't have to worry about it anymore. I actually forgot how it works. And that's it. Thank you for listening.
Do you have any questions or experience reports that you want to share? So, are you aware of the NixOS testing suite, which spawns a QEMU machine,
and you can actually use the Nix description language to start the services inside? And if yes, why didn't you use it? So first of all, I'm not using NixOS in production, right? So for a NixOS test, it means I would have to create a new NixOS module with my code, and then ship that into a full Nix build.
The other problem is that NixOS tests only run on Linux, as far as I know. They could run on Darwin, for instance, but no. So I'm using Linux, but the friend I'm working with on this one uses Darwin, so it wasn't really an option. And also, it doesn't allow you to do the local development, right? If I use GHCi to test my code, I also have unit tests
that run against Postgres. So in that case, that would mean: okay, I'm in GHCi, I make a change, then I close GHCi, I do nix-build, then it has to build everything, actually rebuild from scratch the whole Haskell library, and then start the tests. It would take probably a minute for everything to happen.
Whereas if I just start Postgres in the background, or anything like that, I just do :reload in GHCi, run main, it runs the tests, and I'm good. The iteration cycle is about five seconds.
Did you look at TerraTest? At? TerraTest. TerraTest, no. What is it? It's a framework wrapped around Terraform to do integration testing. So it spins up components,
runs a test cycle, and destroys those components. It's a little bit like InSpec and ServerSpec, but backed with support for Terraform. Very nice. So this is also something that takes some time to run, right? But this is, I didn't know about this, and this is great.
It has some overlap. Thank you very much. I also want to make another tooling suggestion, which is TerraNix. I don't know if the TerraNix author is here today, but it's a really cool way to write Terraform, the syntax in Nix instead,
and it does away with all the horrors you have to go through when you realize that the Terraform language itself can't do all kinds of stuff. You can write it in Nix instead, and I find it very convenient. So that might also be something that's useful. Yeah, although if the goal is really to hide Nix from your coworkers and they don't hate you,
this is a bad move. I have a question. Do developers use these services started from Nix Shell for local development
or only for test running? You mean these? Yes. Yeah, so in this project I'm the only Haskell developer, so I'm the only one using the REPL things, for instance. But whenever I work at a company, basically whenever I work, I write these and people like using them
because it makes their life much, much simpler. Okay, I have a question then. What do you do with front-end? Is it started like this or do developers don't run the front-end locally? I don't do front-end. I let them deal with their mess.
No, I don't know. Everything except front-end, right? Yeah, exactly. So they use Webpack and whatever they use, so I have no idea how it works. So I'm not even attempting at helping at all. Sorry. Oh, no, that's not true. No, no, I've done it before for a different company,
but it was very tricky to get right because most of the time their editors are very tightly integrated with the build system and so it just breaks everything for their editors, which is a common theme in many languages actually. All right.
Thank you very much for listening. Don't hesitate to find me at the end. Oh, no, it was a clap. Great. Bye.