Serverless Computing
Formal Metadata
Title: Serverless Computing
Number of Parts: 10
License: CC Attribution 3.0 Germany: You are free to use, adapt and copy, distribute and transmit the work or content in adapted or unchanged form for any legal purpose as long as the work is attributed to the author in the manner specified by the author or licensor.
Identifiers: 10.5446/63079 (DOI)
Transcript: English (auto-generated)
00:00
Let's talk about serverless computing. AWS offers technologies for running code, managing data, and integrating applications
00:24
all without managing servers. So the main issue AWS tries to resolve here is letting developers focus only on writing their own code, without managing any servers.
00:46
So basically serverless means you don't care about servers, you care only about code. And serverless on AWS starts with AWS Lambda, an event-driven compute service
01:04
natively integrated with over 200 AWS services and software-as-a-service applications. So what does serverless give us? We move from idea to market faster.
01:21
Because we just write code and then run it. It also helps lower our costs. What does that mean? We pay only for what we use.
01:44
Since we already mentioned Lambda: we pay only for Lambda invocations, so when Lambda doesn't run, we don't pay. It also adapts at scale.
02:02
Same idea: we don't care about servers, and AWS takes care of scaling our application. And it also helps us build better applications in an easier way, because you just focus on building
02:24
the application without paying much attention to the server side. Now, the serverless services on AWS.
02:40
Serverless services on AWS cover all three layers of the stack: compute, application integration, and data stores. Compute is AWS Lambda and AWS Fargate. Lambda is when you just write your code and run it.
03:05
And AWS Fargate, put simply, is when AWS helps us run containers on shared infrastructure, and AWS takes care of the servers themselves.
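To give a concrete picture of "you just write your code and run it", here is a minimal sketch of a Python Lambda handler; the handler name lambda_handler is the common default, and the name field in the payload is just an assumed example.

import json

def lambda_handler(event, context):
    """Entry point Lambda calls for each invocation.

    `event` carries the invocation payload; `context` carries runtime
    metadata such as the request id and remaining execution time.
    """
    name = event.get("name", "world")  # assumed example field in the payload
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }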
03:31
And application integration: Amazon EventBridge, AWS Step Functions, Amazon SQS, SNS, API Gateway, and AppSync.
03:40
In general, all six of these services are serverless, because they provide functionality and we don't run any server-side applications ourselves. But you also have the possibility to integrate, for example, AWS Lambda with Amazon API Gateway
04:10
or AppSync, and you can process the requests coming into API Gateway or AppSync via Lambda.
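As a sketch of that integration: with the Lambda proxy integration, API Gateway forwards the HTTP request to Lambda as an event and expects a response object like the one below. The /hello route is a made-up example, and the event fields shown follow the REST API proxy payload.

import json

def lambda_handler(event, context):
    """Process an HTTP request forwarded by API Gateway (Lambda proxy integration)."""
    method = event.get("httpMethod", "GET")
    path = event.get("path", "/")

    if method == "GET" and path == "/hello":  # made-up example route
        status, body = 200, {"message": "Hello from Lambda behind API Gateway"}
    else:
        status, body = 404, {"error": f"No route for {method} {path}"}

    # API Gateway expects this response shape from a proxy integration.
    return {
        "statusCode": status,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps(body),
    }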
04:23
Then the data stores: Amazon S3, EFS, DynamoDB, RDS Proxy, Aurora Serverless, Redshift Serverless, Neptune Serverless, and OpenSearch Serverless. For example, with OpenSearch Serverless, Amazon provides OpenSearch as a serverless solution for us.
04:47
And Amazon S3 also offers storage as a service, which for us means serverless,
05:05
and we can also integrate, for example, AWS Lambda with an Amazon S3 bucket. For example, if we want to change some files in our buckets after our users
05:25
upload files to the S3 bucket and we want to make changes to them, we can do this via triggers, and these triggers can be handled by AWS Lambda functions.
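A sketch of such a trigger, assuming the bucket is configured to send ObjectCreated notifications to the function: the handler below just reads the new object's metadata, and the actual transformation of the uploaded file is left as a comment.

import urllib.parse

import boto3

s3 = boto3.client("s3")  # created once per execution environment

def lambda_handler(event, context):
    """Triggered by S3 ObjectCreated notifications on the bucket."""
    for record in event.get("Records", []):
        bucket = record["s3"]["bucket"]["name"]
        # Object keys arrive URL-encoded in the notification payload.
        key = urllib.parse.unquote_plus(record["s3"]["object"]["key"])

        head = s3.head_object(Bucket=bucket, Key=key)
        print(f"New object s3://{bucket}/{key}, {head['ContentLength']} bytes")

        # Here you would download the object, change it, and write the
        # result back, for example under a different prefix or bucket.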
05:43
For me, a good way to explain serverless is to explain how a Lambda function works and what the function lifecycle is. First, when an invocation comes to Lambda, it can only happen via Lambda's API, so a request is
06:06
made to the Lambda API. And here I have to mention that we have cold starts and warm starts. First we will talk about the cold start. After the request is made to the Lambda API, the service identifies whether a warm execution environment
06:27
is available. If not, this is the first start in a while. And this is the box you can see here. This all happens behind the scenes.
06:44
Lambda then goes and finds an available compute resource. When Lambda finds an available compute resource, your code for the Lambda starts to download
07:00
to that server. Once the code is on the server, the server starts the execution environment. So this means if you choose a Python environment for your Lambda, it will start a
07:23
Python environment for you. If you choose, for example, a Node environment, it will start Node for you. Then it executes the initialization code, meaning all imports happen here.
07:42
So in Python, when you write imports, all of those imports happen here. And then you have the invoke handler. This is the function that will be executed, and input parameters will also be passed if
08:04
you write it that way; if the invoke handler requires some input parameters from the request, that also happens here. After the invoke handler starts, it runs and then returns, completing the invocation.
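To make those phases concrete, here is a sketch that separates the initialization code (module-level imports and setup, run once when a new execution environment is created on a cold start) from the invoke handler (run on every invocation); the DynamoDB table name is a hypothetical placeholder.

# Init phase: runs once per execution environment, during a cold start.
import json
import os

import boto3

# Doing this work at module level means it happens during initialization,
# not on every invocation.
dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table(os.environ.get("TABLE_NAME", "example-table"))  # placeholder table name

# Invoke phase: the handler runs on every invocation.
def lambda_handler(event, context):
    """Invoke handler: receives the request payload and returns the result."""
    item_id = str(event.get("id", "unknown"))
    table.put_item(Item={"id": item_id, "payload": json.dumps(event)})
    return {"statusCode": 200, "body": f"stored item {item_id}"}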
08:23
And then we have the warm start. Amazon doesn't provide any information about how many minutes your environment stays alive. Somewhere you can find that it stays alive for around two or three hours, something like
08:44
that. But Amazon doesn't provide this information officially. So with a warm start: for example, our Lambda function was executed just a few seconds ago and a request is made to the Lambda API again.
09:03
The service identifies whether a warm execution environment is available, one is available, and it proceeds straight to the invoke handler. And after it runs this code, it returns the completed invocation to us.
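One way to see the difference between cold and warm starts in practice is a sketch like this: module-level state survives when the execution environment is reused, so the counter below only resets to one on a cold start.

import time

# Module-level state: set during a cold start, reused on warm starts.
environment_started_at = time.time()
invocation_count = 0

def lambda_handler(event, context):
    """Report whether this invocation reused a warm execution environment."""
    global invocation_count
    invocation_count += 1

    return {
        "cold_start": invocation_count == 1,
        "invocations_in_this_environment": invocation_count,
        "environment_age_seconds": round(time.time() - environment_started_at, 1),
    }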
09:25
So that's the difference: here is the serverless part and here is the server part, and this box is the part with a real server, because when Lambda goes to find an available compute resource, this
09:46
is real hardware somewhere in an AWS data center.
10:16
Okay.