Moving Beyond Infrastructure as Code
Formal Metadata

Title: Moving Beyond Infrastructure as Code
Series: openSUSE Conference 2017
Number of Parts: 57
Author: Thomas Hatch
License: CC Attribution 3.0 Unported: You are free to use, adapt and copy, distribute and transmit the work or content in adapted or unchanged form for any legal purpose as long as the work is attributed to the author in the manner specified by the author or licensor.
DOI: 10.5446/54469
openSUSE Conference 2017, talk 26 of 57
Transcript: English (auto-generated)
00:08
All right, I think yesterday went incredibly well. I always enjoy coming out to the openSUSE Conference, and I'm very grateful for the opportunity
00:22
and your willingness to listen to me yammer on yet again. So I wanted to talk to you about a lot of what the future of infrastructure deployment looks like right now, and a lot of the challenges that we're facing.
00:42
I'll look at it from a pragmatic perspective, and also talk a little bit about the event-driven infrastructure and what that means with some of the new and emerging technologies as well as new and emerging problems that we're facing. So as a quick introduction, my name is Thomas Hatch.
01:03
I'm the CTO of SaltStack, and I'm the original author of the Salt project. Now, infrastructures out there are becoming increasingly complex.
01:21
I had a wonderful argument about this a few months ago at a panel presented to a number of banks back east. Everybody else on the panel was explaining that no, no, no, infrastructure is getting simpler
01:40
because we have more automation. And they seemed to have a hard time understanding that even if that were true, which it isn't. I mean, we do have more automation, but that doesn't magically make things simpler; effectively, what it does is allow us
02:01
to deploy more services. So I finished this argument on this panel, and I go back down and I talk to the head of an infrastructure division at one of the large banks, and they say, thank you for arguing the fact that things are getting more complicated. And he was saying, before we introduced Salt
02:24
and similar automation tooling in our infrastructure, he said, my team could deploy five applications. We maintained five applications. And he says, now that we have automation tooling,
02:40
we have to maintain 50 applications. And so this is really one of the main reasons why things are getting more complex. It's just that there are more services out there. But something else that's occurring that's making things more complex
03:02
has to do with all of the different types of deployments, all of the different types of systems that we need to manage and interface with. It used to be, right, that we would set up, say, a server deployment, and we'd maintain some desktops, and that was that.
03:24
But now, as you're all very well aware, there's just a lot more going on. Okay, so why?
03:41
Not only has infrastructure changed dramatically over the last few years, but that change has a great deal to do with the tools that are available. Again, it used to be, and many of you remember this probably quite vividly, because it wasn't too terribly long ago,
04:01
that the primary use case was to deploy the LAMP stack. And to carve out LAMP: you would have Linux, and that's managing your operating systems; Apache, that's your web server; MySQL, that's your database,
04:22
and P for all of the good programming languages of the day, you know, Python. Microsoft just came out with a new language called P, so apparently they're trying to go back in time and take that over.
04:41
Now, that was the old days. We had those basic components to deal with, a database, a web server, an application, and the operating systems. This is certainly not the case now. We still oftentimes need to, I shouldn't say oftentimes, we still need to interface with and manage operating systems.
05:02
I imagine that's particularly prevalent in the minds of people who come to a conference about an operating system. And instead of the old days where we would say, well, we've got an SQL database, and we're going to argue between MySQL and Postgres,
05:23
we have a few more databases to consider and manage now, whether those databases are SQL, NoSQL, on-premise, in the cloud, spanned across multiple cloud services and providers, et cetera. But also, we've got so much more along the lines
05:43
of big data and artificial intelligence systems. In the last year, Amazon deployed hundreds, if not more than a thousand, new services to their cloud. If this isn't increasing complexity, I don't know what is.
06:03
And then, again, tying back into my original point, we're deploying more and more applications to these clouds than we ever have before. I say clouds, but I should say just to our infrastructures.
06:24
Okay, so if infrastructure looks a lot different today than it did 10 years ago and 20 years ago, I just wanted to take a brief and happy step through time and discuss the evolution of infrastructure.
06:41
So back in the 70s, this is supposed to be the 70s, maybe the 60s with that hair, but we had mainframes. And that was how computers in large organizations worked, right, mainframes. And then we evolved to commodity hardware,
07:03
which I'm sure many of you have seen some beautiful cabling jobs in your lives. I tried to find one that you could still see the servers behind the cables. Linux in and of itself was one of the great enablers
07:22
for commodity hardware, and for having many independent servers in the rack, the scenario we're so familiar with now. And I want to stop and put a little emphasis on this and talk about the fact that open source software enables us to progress significantly faster
07:44
in what we were able to build and deploy. Because all of a sudden, we of course have the availability of that stack. Now as we move forward and start to look at the cloud, when the cloud really took off,
08:00
if we look at virtualization, sure, in the late 90s VMware came along and they said, hey, we've got this virtualization thing. But what really made the cloud take off was Xen. And the availability of virtualization in the open.
08:24
And that I think was really when the transformative aspect of cloud started to occur. Because all of a sudden, we had legitimately free and open tools that we could continue to build on top of and kind of go bananas. And yes, that visual of virtualization
08:44
is meant to be comical. The numbers are probably too small to see. It's a wildly over-provisioned server, which never happens. So if we have progress, right, that we've moved forward from mainframes
09:02
to commoditized hardware, to cloud, to containers, and then as we are currently moving into things like serverless, why is it that when we oftentimes go into a large data center and deployment,
09:21
it instead looks a little more like this? Where we see that they're using multiple clouds, and by multiple clouds, I do mean that they're using AWS plus three other clouds. Usually one is some cloud that, you know, that guy Bob in the back set up.
09:42
Yeah, I hear some laughing. You know what I'm talking about. And you have no idea why some random critical service is running on Linode. I mean, I like Linode and all. And then when we look at the bare-metal on-premise systems, which still exist,
10:02
we end up seeing a very wide swath of what's being used there, from classically managed and maintained servers, all the way up to the vast and very quickly proliferating Kubernetes systems. So the question I wanted to pose is,
10:23
why do infrastructures look like this today? Why have infrastructures and these deployments, in so many cases, been unable to completely move forward into the cloud, and into containerization, and into serverless, and that we seem to leave
10:42
this long tail behind us, and then we end up needing to manage that tail continually. So one of the philosophies that I like to present
11:02
is that the pie keeps getting bigger, but the actual amount of pie that any individual component takes up doesn't necessarily shrink. Many technologies are very hard to kill.
11:21
They're like, I don't know, certain villains in Bond movies. They absolutely refuse to die. I'm amazed how much Fortran still exists. I'm also amazed to walk into organizations and see that they're still building software
11:41
and maintaining it using models that they developed in the early 90s. So a lot of these older components refuse to die, and it's more about we keep making the pie bigger, and as we keep making the pie bigger, we keep introducing new technologies into the mix.
12:00
Even under, let's just say Microsoft's best CEO ever, if you remember, he liked developers. Microsoft's physical install base didn't really go down, but the overall deployment of servers and services
12:23
went up to the point where we just ignored the relevance of Microsoft in the data center. So I guess the point I'm trying to make is that it's very, very easy and very tantalizing
12:42
for us to be completely confused and consumed by what is coming out tomorrow, as opposed to looking at everything that we've got from a more complete view and saying,
13:03
what is it that's out there? What are the real things that are going on? And how can I build and manage my systems and deal with the incredible flood of open source software which is continually bearing down on us?
13:23
So with that said, with that introduction, I want to talk a little bit about some of the emerging areas of infrastructure, some of those emerging patterns and some of those emerging problems. So I'm gonna start by talking about security
13:43
and how the security landscape in just the last few years has had to change so dramatically. And now we've seen a very significant shift in the type of software that is being built and deployed for security. When it comes to Internet of Things,
14:01
to use a terrible term to just try and deal with the fact that we can run software on everything now, similarly, the threats and the opportunities in the Internet of Things have become incredibly vast.
14:21
And I also want to spend some time talking about serverless. As we see containers taking such a strong hold in our infrastructures, I want to put a strong emphasis on the fact that the serverless architectures today are receiving a lot of developmental attention
14:43
and that as we are seeing systems like Kubernetes take such a strong hold in the data center, similarly, these serverless architectures and serverless systems are becoming very, very important and we need not only to figure out how they work,
15:02
how to interface with them, how to build applications on them, but also what those implications are for existing data centers, existing applications, and existing deployments and systems, okay? Oh, and artificial intelligence is a thing, apparently.
15:25
It's ironically always been a thing. I mean, that was kind of Alan Turing's original dream, was it not, to build artificial intelligence. And I would say that since it's so difficult to find normal intelligence, building artificial intelligence in and of itself
15:42
seems to be still a rather daunting task. Okay, so let me talk a little bit about security. I got an email last night. Do you guys have a restaurant called Chipotle in Europe?
16:03
No, there's no Chipotles out here. There's one in Frankfurt. There's a Chipotle in Frankfurt. I was gonna make a crack about how I never find Mexican food in Europe. It might be because you're like far away from Mexico
16:20
and you eat Spanish food instead. That makes sense. It might be the same reason why you can't find English food in the States. But unlike English food, Mexican food I think tastes quite good. So, Chipotle. I got an email last night from Chipotle
16:41
informing me that their cash registers have been hacked. This is not the first company whose cash registers have been hacked. It's tiresome. I'm sure that the banks love the fact that they need to reissue everyone's credit cards
17:01
every three months now, it seems. So, if we're looking at the emerging security threat, we have to consider systems, paradigms, and models which are very, very different
17:21
from how we have traditionally managed security. How we have traditionally managed security, of course, is to make sure that our systems are patched, make sure that our system software known vulnerabilities are taken care of, and then we firewall the ever-loving daylights out of our networks. But when we start to look at environments
17:43
where we have so many end devices, we have to come up with new models of managing security. And we have to come up with new ways of exposing where our security faults are.
18:03
And over the last few years, there have been a lot of security companies which have emerged, and a lot of security companies which have focused around a number of key areas to try and mitigate security issues. Now, these areas, generally, you've got your classical approach to security,
18:22
which is to say, yes, let's make sure that our vulnerabilities are taken care of and our systems are patched, and that our systems are configured in a secure way, auditing against security standards, government security standards, et cetera. Now, as we move forward,
18:42
some of these emerging security companies, again, are doing things like creating honeypot systems and having very aggressive deception systems built into the network. I'm not entirely sure how that's gonna work for cash registers, or how it's gonna work for weather stations or oil rigs,
19:04
or one of my personal favorites, slot machines. Although, I have to admit, I'm generally less concerned about the overall security of slot machines. I don't see that as a major personal economic impact. Now, I say that because something like half of the world's
19:24
slot machines have Salt on them. So, let's see. One of the other problems that we run into with security has to do with the fact that many companies go through the security assessment,
19:41
and then they come to the conclusion that it's going to cost more money to defend against a security breach than it is to mitigate one. I think that it was an incredibly startling revelation that many companies have made the very real decision
20:03
to roll the dice and hope that they aren't the ones that get hit. Now, with that said, though, we see that there are far more issues with security today than there were traditionally.
20:22
So, as the scenarios change and the threats change, I wanted to make a comparison to a major security change which occurred in Europe in the 14th century. See, that was about the time that we came up with cannons.
20:42
That's a great idea. We can shoot giant balls of lead at our enemies. So, the problem with cannons and defense was that all of the walls around cities which had been built up until that time
21:03
were fairly thin walls. They were made to defend against a guy on a horse with a spear or a sword. They weren't made to defend against gunpowder-propelled projectiles.
21:21
And so, as we look at many of the cities around here, and again, you guys probably get to see city walls a lot more often than I do. We've given up on city walls in the US. We just kind of had lots of guns. That's still an ongoing problem for us.
21:40
But at great expense and a great economic burden, huge numbers of cities in the 14th century had to tear down their existing old city walls and build city walls that could deal
22:02
with the onslaught from a cannon, changing them from what they had, which were again these tall, flat, thin walls that a cannon can easily get through, into large sloped earthwork walls that a cannonball will bounce off of and it won't matter.
22:23
And admittedly, when this first happened, they went to the best architects of the day to say, how are we going to defend from this? Effectively, what I am proposing is that the emergence today of security threats
22:40
and how they have changed in the last few years poses a similar threat to infrastructures and devices as the cannon did to medieval walls. And that the requirement exists to look at a fundamental revamp
23:02
of how we are managing security inside of the data center and inside of managed devices. Now, so the question is, do we have the right models in place to deal with these emerging threats?
23:25
One of the major problems that we run into is that the nature of the threats changed so dramatically based on the systems which we are interfacing with. And so when we dive into saying that
23:43
I need to secure a data center, the way that's worked has changed so dramatically because of microservices and because of virtualization and even because of the cloud. One of the emerging technologies, I shouldn't say emerging, it's fairly emerged, is a concept called microsegmentation.
24:06
Microsegmentation is the ability to map all of your allowed network connections and then have individual firewalls on all of your servers, so that you only allow the inter-data-center communication which is required,
24:22
and then have an accurate and up-to-date map of all of your network connections, so that as soon as a network connection tries to occur which doesn't look normal, you instantly become aware of it.
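To make that concrete, here is a toy Python sketch of the microsegmentation idea; the role names and allowlist data are purely illustrative assumptions, not any vendor's actual API:

    # Map of allowed inter-server connections: (source role, destination
    # role, port). Anything outside this map is flagged immediately.
    ALLOWED = {
        ("web", "db", 5432),     # web tier may reach the database
        ("web", "cache", 6379),  # web tier may reach the cache
    }

    def check_connection(src_role, dst_role, port):
        """Return True if the observed connection is on the allowed map."""
        if (src_role, dst_role, port) in ALLOWED:
            return True
        # A connection that doesn't look normal becomes an alert event.
        print(f"ALERT: unexpected connection {src_role} -> {dst_role}:{port}")
        return False

    check_connection("web", "db", 5432)   # allowed, no alert
    check_connection("web", "admin", 22)  # not on the map, raises an alert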
24:40
Now, a few microsegmentation schemes have been able to keep up with the cloud, but there are some strong questions about whether or not these sorts of approaches are going to be able to keep up with a containerized infrastructure or a serverless infrastructure. And then when we look at vulnerabilities, is that enough?
25:01
One of the things that's great about a lot of containerized infrastructure tools is that they can be set up to prevent you from deploying a container that has known software vulnerabilities in it. It sounds like a really good idea.
25:23
It also sounds kind of like always checking the validity of your certificates when you make TLS connections. And no one in this room ever accepts a self-signed certificate, right?
25:40
Never. And with SSH connections, I'm sure that everyone in this room always manually checks to make sure that the system you're connecting to is what you think it is. And so I find it easy to believe that
26:00
for people who are deploying to infrastructures that check for vulnerabilities, it's easy to flip that checking off. So something else that we need to consider when we're looking at security in this new wave of system tooling is also figuring out how to make that security
26:21
as non-cumbersome to the users as possible. Yesterday we had an excellent talk from the FSF about people switching away from Linux on the desktop. People don't care about what runs on their computers,
26:45
or I should say the vast majority of them don't care. What they care about is getting done what they need to get done. They care about doing their jobs. And if we have security tools
27:00
which prevent people from doing their jobs, people will do everything they can to break those tools. And I'm sure you've seen it before. And I see some people chuckling, I appreciate that. But this is a very real and serious thing that if we cannot provide security
27:20
which actually delivers real security, but at the same time, if that security delivery makes people's jobs more difficult, then they're gonna turn it off. Which is why I can't tell you how many times I've sat down and talked to people
27:41
about the workings of SELinux in my past. And the ubiquitous response to SELinux is why would we leave that on? It's annoying and it gets in the way. I got my first job managing systems
28:01
because I was able to explain to them that SELinux was their problem in a job interview. And they hadn't realized yet that they needed to do something about it. So again, we need to make sure that the security tools that we build don't impede users.
28:22
And so that becomes the question, what security models do we need for the next generation of systems? And do they even exist? Okay, I'm gonna change gears now a little
28:40
away from security. Since security is boring anyway, right? Nobody wants to do that. It's tedious. And I wanna talk to you a little bit about serverless. Serverless is a fairly new thing.
29:02
The concepts of serverless were originally introduced, I should say modernly introduced, by a tool called Amazon Lambda. And the idea behind serverless is that you can put your application somewhere
29:22
and you don't need to be aware of or care about the underlying operating system at all. The operating system logically becomes irrelevant to the person who is running their code. The operating system, of course, is still relevant. I actually was on a consulting call with a group
29:42
that was asking me, they basically wanted to bring me on a consulting call so that I could explain to them that the operating system wasn't relevant anymore and that the operating system was dead and they didn't like the fact that I disagreed with them.
30:01
Please do not hire a consultant just because you want someone to agree with you. That's not what hiring consultants is for. So the last thing I want to do in talking about serverless is make people think that I think that operating systems are irrelevant.
30:22
I think that operating systems are increasingly relevant if for no other reason, just the sheer fact that we need to install them on more stuff. And that if there is a serverless infrastructure, somebody has to take care of the operating systems
30:41
which exist underneath it. Now, so serverless, this idea that we are able to just say put code out there, I don't care what it runs on, just so long as Python is there or I almost said Perl,
31:01
but I've never known a serverless system to deploy Perl. Do any of them deploy Perl? So just as long as the environment I need to execute code is there. I make jokes about Perl, but Perl is a fantastic language and Perl 6 admittedly, I think that it took them a long time, but they've done a really nice job.
31:22
So if anybody likes Perl, it's not personal. Okay, back at serverless conference, a big serverless conference that happened maybe a month ago. They presented that there were three pillars
31:43
of serverless, and that those pillars are abstraction, micro-billing, and the fact that it's event-driven. Now, one of the things that I find extremely comical about this scenario is micro-billing. Does anyone remember where micro-billing
32:02
comes from in computers? Going back in the history? Sorry, was somebody gonna say something? Mainframes. I've heard people say a number of times that all you need to do to come up with a new startup in infrastructure
32:22
is to pull out an old mainframe manual and to make one of those applications and try and sell it. In all honesty, a serverless setup isn't too different from what we used to present to users as applications inside of mainframes. Where we would present to a user a jail
32:42
and say, here's a jail and you can run an application in that jail. And I think this ties into something I never thought I'd say until very recently. I'm gonna have to argue that I think
33:00
that mainframe sales are gonna go up dramatically in the next five years. I'm relieved to see many nodding heads, a few rolling eyes, and I think somebody in the back vomited. I'm sorry about that. How many times have we walked into organizations
33:21
that have said, we're gonna deploy OpenStack, and a year later they say, well, there goes $8 million and we don't have OpenStack. And how much easier it would be for those companies to say, I'll just invest $100,000
33:41
and get a mainframe that just is a cloud and I'm done. So this is definitely something that's emerging. But that's enough about micro-billing. The abstraction, of course we've already talked about that. You need to make sure that the person
34:01
writing the application doesn't need to care about the operating system. Now, this has challenges. For some strange reason, especially in a room full of operating system people, we built many wonderful tools into our operating systems,
34:22
like logging, and networks, and really fantastic file systems, like Btrfs. And so to tell someone that they're going
34:41
to be executing their code inside of a sandbox that doesn't care about or have visibility into the operating system might sound somewhat offensive. But at the same time it can definitely allow us to deploy applications faster. At the end of the day that's what the people
35:00
writing the checks want: they want more applications delivered faster and more easily, and if possible for less money. Another statement that was made yesterday was the argument that maybe as open source people we've spent too much time trying to argue
35:24
the cost benefits of open source. When OpenStack came out there was a huge amount of market speculation that VMware was in deep trouble.
35:46
VMware's stock has done just fine. Because if someone perceives that money can remove difficulty from their life, quite frankly that's what money's for.
36:02
And they're happy to spend it. The expenditure of money is often further down the list if they are able to give their teams more convenience and deliver more product. So as much as I love the fact,
36:22
and I'm a big fan of course of the fact that open source is free, it is something that I do think that we need to consider. Okay, the third pillar of serverless is the one that we actually need to talk about. And this is event driven.
36:41
So what does it mean then for something to be event driven? In the serverless idea it's all about deploying a function, basically it's deploying an application, and that application has a function which is going to be triggered when an event is fired.
37:00
Whether that event is a change to data inside of a database, or whether that event is someone hitting an API gateway. So this is all fine and good, and those events become important. You have to have an event bus.
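As a minimal sketch of that idea, here is a Lambda-style handler in Python; the event fields shown are illustrative assumptions, since each real trigger, whether a database stream or an API gateway, delivers its own event shape:

    def handler(event, context):
        # The platform invokes this function once per fired event; the
        # author never manages the server or operating system underneath.
        source = event.get("source", "unknown")
        if source == "api-gateway":
            # Someone hit an HTTP endpoint.
            return {"statusCode": 200, "body": "hello"}
        if source == "db-stream":
            # Data changed inside a database; react to the changed records.
            return {"processed": len(event.get("records", []))}
        return {"ignored": source}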
37:24
But do we need more than single event reactions? And this is the part of this talk, I know I'm almost 40 minutes in
37:42
and I feel like I'm starting to get to the punchline. This is the part of the talk that personally I'm most interested in. The serverless paradigm is all about events, but one of the first things, and one of the biggest things I learned with Salt,
38:03
is that you can only do so much with a single event reactor. A reactor that waits for a specific event and reacts to it. What we really need is to be able to deal with
38:23
more than single event reactions. And so even as we begin to see the emergence of this serverless concept, I want to emphasize that the serverless concept
38:41
really creates a dynamic distributed operating system. An operating system that allows us to deploy code in a transparent and distributed way without needing to worry about the underlying mechanism of things like scheduling where that code runs.
39:02
Which is kind of what an operating system does. So in a distributed computing environment, what do we need to do to take it further? First, what do we need to do on the application layer to make applications be aware of events?
39:22
So I'm going to argue that it is incredibly important that applications themselves need to be aware of the events that are being fired inside of distributed systems. Whether those events are monitoring events, whether those events are triggers
39:44
which are occurring outside of the application, or whether those events are custom events which the application is creating. But that need to have the application interface with the event system I think is very, very important and extremely enabling to the application developers.
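As a rough illustration of what an application-facing event interface could look like, here is a generic publish/subscribe sketch in Python; the class and tag names are hypothetical, and this is not Salt's actual event API:

    from collections import defaultdict

    class EventBus:
        """A toy in-process event bus: subscribe to tags, fire events."""

        def __init__(self):
            self._subscribers = defaultdict(list)

        def subscribe(self, tag, callback):
            # The application registers interest in a class of events.
            self._subscribers[tag].append(callback)

        def fire(self, tag, data):
            # Deliver the event to every subscriber of that tag.
            for callback in self._subscribers[tag]:
                callback(tag, data)

    bus = EventBus()
    # React to a monitoring event fired outside the application...
    bus.subscribe("monitor/disk", lambda tag, data: print("disk event:", data))
    bus.fire("monitor/disk", {"iowait": 0.92})
    # ...and fire a custom event the application itself creates.
    bus.fire("myapp/order/created", {"id": 42})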
40:04
And that it also delivers an event bus, an event framework back to the application developer. Because another thing that's very important about operating system development is understanding that what we build in the operating system
40:21
is built for the purpose of delivering software to an end user, however competent said end user may be. And we need to remember that the software that we're delivering to the end users, those end users could be other software developers like us.
40:42
That is, I would argue, definitely Linux's strength is delivering solutions to people like those who are building it. Which is why it's more popular in a data center than on a desktop. Despite the fact that I will argue
41:00
that the Linux desktop is superior, but whatever. I might just be out to lunch on that one. Okay. So, I wanna talk about autonomous event-driven software development.
41:25
So I've done a lot to lead up to this and I'm gonna try and tie a lot of these concepts together. What does autonomous event-driven software mean?
41:40
When we look at artificial intelligence, the vast majority of what we're doing in artificial intelligence right now is advanced data analytics and advanced data management and searching. And less to do with making decisions.
42:04
Now we've got some really good examples of artificial intelligence systems which are decision-making systems. Autonomous cars is one of them. But when we look at a lot of organizations around this, really what they're trying to do is to say, person X wants information about thing Y.
42:24
I will argue that a lot of the use cases of artificial intelligence today are really us trying to make search engines suck less and be better targeted to the users. But so, what does it mean
42:41
then to have autonomous systems? The best example out there today I would say are robotics-style systems. Like autonomous cars.
43:03
Now, I can neither confirm nor deny that I have ever done any work for the US Navy. I did used to work for the US intelligence community. It was way like, it was in the past. Don't get after me. I don't know, I wasn't involved in actually deploying any systems that spied on you.
43:24
They never told me where they were putting the things I built. Anyway, a project I am familiar with was an early autonomous submarine system.
43:43
And I learned a lot from this autonomous submarine system. And what they had done was built an early type of flow programming system. Now, I mentioned a little earlier
44:03
that Microsoft just came out with a language called P. Which, sorry, I just have a hard time with calling anything just the letter P. It doesn't denote a whole lot of confidence in me. But anyway, it's an event-driven programming language
44:26
that ties very directly into a flow programming style. And a lot of the emerging programming systems that we're seeing are using this flow programming style. And inside of Salt, for instance, the Thorium reactor
44:40
is a flow programming interface. So, a flow programming system means that all of the software that you are writing exists inside of a state machine. And it means that based on events that are coming into the system,
45:02
your software changes state. And then the states that your software changes into and out of have the ability to munge and change and manipulate the incoming data. Subsequently, that model allows us to create decision engines.
45:22
Engines into which it is very easy for us to put the parameters and the thresholds before we make a particular decision. Again, the Salt Thorium reactor is, I think, a decent example of this because it's able to do things like say, as soon as the system has been offline
45:42
for a certain amount of time, I'm going to do something about that so that we're able to very easily build in passive non-polling styles of decisions. But also, these sorts of engines are the types of things that make it very easy for us to say
46:02
that we're going to take very specific action when we see that the disk IO wait is particularly high. But more importantly, these decisions can be passed further into aspects of an infrastructure like security and IoT.
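To make the flow programming idea concrete, here is a toy decision-engine sketch in Python; it is not Thorium's actual syntax, and the event shapes and thresholds are illustrative assumptions:

    import time

    IOWAIT_THRESHOLD = 0.8   # act when disk IO wait exceeds this
    OFFLINE_LIMIT = 60       # seconds of silence before acting

    class DecisionEngine:
        """A state machine that changes state as events flow in."""

        def __init__(self):
            self.state = "healthy"
            self.last_seen = time.time()

        def ingest(self, event):
            # Each incoming event can move the engine to a new state.
            if event["tag"] == "heartbeat":
                self.last_seen = time.time()
                self.state = "healthy"
            elif event["tag"] == "monitor/iowait" and event["value"] > IOWAIT_THRESHOLD:
                self.state = "degraded"
                self.act("throttle batch jobs: IO wait is high")

        def tick(self):
            # Passive, non-polling style of decision: a timer notices
            # that a system has been silent for too long.
            if time.time() - self.last_seen > OFFLINE_LIMIT:
                self.state = "offline"
                self.act("system silent too long: raising an alert")

        def act(self, decision):
            print(f"[{self.state}] decision: {decision}")

    engine = DecisionEngine()
    engine.ingest({"tag": "monitor/iowait", "value": 0.93})
    engine.tick()  # no-op here; would fire after 60 seconds of silence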
46:27
See, you thought I was just wandering around like some madman for the past 40 minutes, didn't you? I had a point. When we start to look at how to manage historic systems,
46:42
current systems, and future systems, the argument I am presenting is that we have to have event-driven models and that those event-driven models need to tie into autonomous automation.
47:03
And that that autonomous automation needs to be developed sufficiently to be able to hook into disparate systems. It can't be narrow-mindedly tied to whatever the new thing is. These systems have to be broad-based.
47:23
They have to be able to bridge these gaps. Because what happens on a cash register affects things which we are doing inside of an infrastructure. And what happens on a security camera changes the alerts
47:41
that need to be presented to humans about dealing with what's happening on that security camera. And we can't bridge these gaps using the models that we have used in the past,
48:01
because so often we love to have tunnel vision. And it's hard not to. As a software developer, your job is to figure out what it is that you're building, focus on it, and build it. And if you have to consider the whole world around you, I mean, many of you are probably aware
48:22
and have gained sufficient wisdom in your lives to realize that that becomes a very impractical setup. All right. So using autonomous systems and autonomous models,
48:42
I believe is how we get to that next level of accepting the fact that no, we're not just gonna throw away the past and that the past rears its head yet again because just because some guy lived in 1977, it doesn't mean that he didn't know what he was talking about at the time
49:03
and that we need to keep an open mind to what is moving forward and where everything came from. Okay. So right now, the main work that I'm engaged in
49:22
at SaltStack is around building autonomous systems which are able to interface across multiple topics and multiple aspects of an infrastructure, and that goes well beyond what somebody generally thinks of Salt being.
49:43
And this is one of those messages that I generally have a hard time getting out. See, I built this thing called salt and everyone sees it as a configuration management system. When I built salt, the idea behind it was that I needed a high-speed executor
50:03
to be able to execute arbitrary routines on distributed groups of systems so that I could make a decision engine. In building those executors, I realized that I wanted two kinds of executors,
50:23
one-off executors that allow you to do one-off routines and execute functions, but also idempotent executors. It just so happens that idempotent executors are also called configuration management.
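Here is a minimal Python sketch contrasting the two executor styles; the helper names are hypothetical, not Salt's actual modules:

    import subprocess

    def run_once(command):
        # One-off executor: run an arbitrary routine each time it's called.
        result = subprocess.run(command, shell=True, capture_output=True, text=True)
        return result.stdout

    def ensure_line(path, line):
        # Idempotent executor: converge to a desired state and change
        # nothing if that state already holds; re-running is always safe.
        try:
            with open(path) as f:
                if line in f.read().splitlines():
                    return {"changed": False}  # already in the desired state
        except FileNotFoundError:
            pass
        with open(path, "a") as f:
            f.write(line + "\n")
        return {"changed": True}

    print(run_once("uptime"))                        # executes every call
    print(ensure_line("/tmp/demo.conf", "debug=1"))  # changes at most once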
50:43
And so we ended up in this config management trench and are still very much viewed to be in that arena. And also I will admit it stalled us a great deal and redirected a lot of my original intents.
51:01
And so I'm very excited at the fact that now we're finally getting back to taking the parts of Salt that I am interested in outside of config management. All right, I have to back up and at least come back and argue
51:20
that Salt still has the best config management system. I'm really biased, but I still definitely think it does. Now, so what we're looking at is first, that we built this thing called the Thorium reactor, okay?
51:42
And the event bus inside of Salt gives us the ability to grab events from disparate systems and types of systems, ingest those events, and then react to them. Now, using that system, I've got people who have done all sorts of crazy things,
52:04
but most importantly, it has been the backbone in research for figuring out how to glue together all of these disparate types of deployments and all of these disparate types of systems
52:22
that need to be managed. Again, whether those are sensors on an oil rig or whether those are slot machines or light bulbs or security systems in someone's apartment or home, okay?
52:44
And so again, that's the punchline. We've got the Thorium reactor inside of Salt that does some of the things that I was talking about, and we're just beginning to crack that case. But at the same time, we've got a lot more
53:00
along those lines that we are actively working on. And I wanna conclude a big thanks yet again to SUSE for not only putting up with hearing me talk yet again, but also that they've been a wonderful support
53:21
to the Salt project. And I'm incredibly impressed by the value and the capabilities of the SUSE engineers. So in a nutshell, autonomous systems need to exist
53:43
to tie together disparate deployments of devices. We need to be able to build these systems in such a way that they take into consideration a broad set of use cases.
54:01
Because it's very important that we are honest with ourselves in recognizing the fact that infrastructure as well as systems deployments are becoming significantly more complicated than they were only a few years ago. And also being honest with ourselves with respect to the diversity of deployments
54:22
which exist out there and which need to be adequately managed. All right, so this is where I get into trouble. Fortunately, I'm just about out of time.
54:41
But does anyone have any questions, comments, arguments, rebuttals or rotten fruit or vegetables? Or have I spoken so ridiculously and abstractly that you're all just thinking I'm mad?
55:04
Okay, I hope that you all have a good time today. I'm really excited about some of the talks. I won't endorse any in particular. But I think that we've got a really, a really fantastic day today at the conference. And thank you again for letting me come and speak.