
Combining EASY!Appointments with Jitsi for online appointment management


Formal Metadata

Title
Combining EASY!Appointments with Jitsi for online appointment management
Number of Parts
542
License
CC Attribution 2.0 Belgium:
You are free to use, adapt and copy, distribute and transmit the work or content in adapted or unchanged form for any legal purpose as long as the work is attributed to the author in the manner specified by the author or licensor.

Content Metadata

Abstract
During Covid, many public and private services switched to online solutions, in most cases closed-source or commercial, for booking appointments and meetings. At GFOSS, we adapted EASY!Appointments, an open source online platform for appointment management, and Jitsi, an open source online meeting platform with audio and video, and combined them into a seamless integration for booking an appointment that takes place online rather than physically.
Transcript: English (auto-generated)
Hello, I'll get started. My talk is entitled The Next Frontier in Open Source Java Compilers: Just-in-Time Compilation as a Service.
My name is Rich Haggerty. I've been a software engineer for way too many years, and I'm currently a developer advocate at IBM. We're all Java developers here, so we know what a JVM and a JIT are: the JVM executes your Java application and, during runtime, sends the hot methods to the JIT to be compiled. With that in mind, we're going to talk about JIT as a service today, broken down into three parts. First, the problem: Java running on the cloud, specifically in distributed, dynamic environments like microservices. Then the reason, which takes us back to the JVM and the JIT: great technology, but it does have some issues. And then the solution, which is JIT as a service.

So, is Java a good fit for the cloud? For context, consider legacy Java enterprise apps: monoliths provisioned with a lot of memory and a lot of CPUs. They took forever to start, but it didn't matter, because they never went down. We have clients who have run Java applications that way for years; if they upgraded at all, it was every six months to a year, with some simple refreshes. That was the world of legacy Java enterprise apps. Now we move to the cloud. That same monolith becomes a bunch of microservices talking to each other, all running in containers, managed by some cloud provider's Kubernetes implementation for orchestration, with auto-scaling up and down to meet demand.
The main motivators behind this are obviously flexibility and scalability. It's easier to roll out new releases, and you can have teams assigned to specific microservices who never touch the others. Once you're on the cloud, you can take advantage of the latest and greatest cloud technologies, like serverless, as they come out. You have less infrastructure to maintain and manage, and the ultimate goal is saving money. But before we start counting all our money, we have to think about performance. There are two variables that impact cost and performance.
They are container size and the number of instances of your application you're running. Here's a graph showing all the ways we can get these variables wrong. Down in one corner, the containers are way too small and we're not running enough instances: it's pretty cheap, but the performance is unacceptable. On the opposite side, the containers are too big and way too many instances are running: great performance, but wasted money. We need to get to the sweet spot, where the containers are sized just right and we have just enough instances for the demand. That's very hard to do; in fact, most conferences have a lot of talks about how to get there, or fixes for this problem. Before we can figure out how to fix it, we have to figure out why it's so hard, and to do that we have to talk about the JVM and the JIT.

First, the good. Device independence: Java became so popular because you write once and run anywhere, in theory. There have been 25 years of constant improvement, with a lot of involvement from the community. The JIT itself produces optimized code that runs great: it uses profiling, so it can optimize code in ways you can't achieve statically, and there's very efficient garbage collection. As the JVM collects more profile data and the JIT compiles more methods, your code gets better and better. So the longer your Java application runs, the better it gets.
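To make that ramp-up concrete, here is a minimal, hypothetical sketch, not from the talk: timing the same method over repeated rounds typically shows the early rounds running slower while the code is still interpreted, then speeding up as the JIT compiles the hot method. On OpenJ9 you can watch the compilations with -Xjit:verbose; on HotSpot, with -XX:+PrintCompilation.

```java
// WarmupDemo.java -- illustrative only: observe JIT ramp-up by timing rounds.
public class WarmupDemo {

    // A "hot" method: called often enough that the JIT will compile it.
    static long work(int n) {
        long sum = 0;
        for (int i = 0; i < n; i++) {
            sum += (i * 31L) % 7;
        }
        return sum;
    }

    public static void main(String[] args) {
        long sink = 0; // keep results live so the work isn't optimized away
        for (int round = 0; round < 10; round++) {
            long start = System.nanoTime();
            for (int i = 0; i < 10_000; i++) {
                sink += work(1_000);
            }
            long ms = (System.nanoTime() - start) / 1_000_000;
            // Early rounds usually run slower (interpreted) than later ones (JIT-compiled).
            System.out.println("round " + round + ": " + ms + " ms");
        }
        System.out.println("(ignore) " + sink);
    }
}
```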
Now the bad. The initial execution of your code is interpreted, so it's relatively slow. The hot methods compiled by the JIT can create CPU and memory spikes. CPU spikes lower your quality of service, meaning performance, and memory spikes cause out-of-memory issues, including crashes; in fact, a main reason JVMs crash is out-of-memory errors. And we have slow startup and slow ramp-up times, which are worth distinguishing. Startup time is the time it takes for the application to process its first request, which falls mostly within the interpreted phase. Ramp-up time is the time it takes the JIT to compile everything it wants to compile to reach that optimized version of your code. Here are some graphs to back that up, from a Java enterprise application. On the left, you can see CPU spikes happening initially, all caused by JIT compilations. Same thing on the memory side: large spikes that we have to account for. So let's go back to that earlier graph, finding the sweet spot.
Now we have a little more information, but we still need a way to right-size the containers we provision and make our auto-scaling efficient. We have very little control over scaling itself: we control the size of our containers, but beyond that we can only set up the environment so that auto-scaling works well. On the container-sizing side, the main issue is that we need to over-provision to absorb those memory spikes, which is very hard to do, because JVMs behave non-deterministically: run the same application over and over and you'll get different spikes at different times. So you have to run a series of load tests just to get that number roughly right. On the auto-scaling side, again, there are the slow startup and ramp-up times: the slower those are, the less effective your auto-scaling will be. And the CPU spikes cause another issue. For many auto-scalers, the threshold for starting new instances is CPU load, so if a new instance is spinning on JIT compiles, the auto-scaler may read that as a false positive, decide demand is going up, and launch more instances when you really don't need them. That makes scaling very inefficient. So the solution is to minimize or eliminate those CPU and memory spikes and to improve startup and ramp-up time. What we're proposing here is JIT as a service, which solves, or helps solve, these issues.
The theory behind it is that we decouple the JIT compiler from the JVM and let it run as an independent process, then offload JIT compilations from the client JVMs to that remote process. In this diagram we have two client JVMs talking to two remote JIT servers. Each JVM still has its local JIT, which can be used if the remote servers become unavailable for some reason. And since everything runs in containers, the orchestrator automatically makes sure the JIT servers are scaled correctly. It's really a monolith-to-microservices move: the monolith in this case is the JVM, and we split it into the JIT on one side and everything that's left over on the other. Again, as I mentioned, the local JIT is still available if the service goes down. This technology actually exists today: it's called the JITServer, and it's part of the Eclipse OpenJ9 JVM. It's also called the Semeru Cloud Compiler when used with Semeru Runtimes, and I'll get to that in a minute.
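In practice, using it looks roughly like this, going by the options in the OpenJ9 documentation; the host name and jar below are placeholders:

```
# Start the JITServer process (it ships with OpenJ9 / Semeru builds)
jitserver

# Run a client JVM that offloads compilations to it; if the server is
# unreachable, the JVM falls back to its built-in local JIT
java -XX:+UseJITServer \
     -XX:JITServerAddress=jitserver.example.com \
     -XX:JITServerPort=38400 \
     -jar myapp.jar
```

Port 38400 is the documented default, so the port option can usually be omitted.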
I'm sure everyone here knows that OpenJ9 combines with OpenJDK to form a full JDK, totally open source and free to download, and there's a GitHub repo for it. A little history of OpenJ9: it started life as the J9 JVM at IBM over 25 years ago. IBM developed it because they had a whole range of devices to support and wanted to make sure Java ran on all of them, from handheld scanners all the way to mainframes. So it was designed to go from small to large, in environments with a lot of memory or very little. About five years ago, IBM decided to open-source it to the Eclipse Foundation. OpenJ9 is renowned for its small footprint and fast startup and ramp-up times, which we'll get to in a minute. And even though it has a new name, it's the same JVM that IBM's enterprise clients have been running their applications on for years, so there's a long history of success with it.

Here's some OpenJ9 performance compared to HotSpot. This doesn't take the JITServer into account; it's just the JVMs themselves. Going left to right, with OpenJ9 in green and HotSpot in orange: in these measurements we see 51% faster startup time, a 50% smaller footprint after startup, quicker ramp-up than HotSpot, and, at the very end under full load, a 33% smaller footprint with OpenJ9.

Now, Semeru Runtimes. That is IBM's OpenJDK distribution. As someone just mentioned, there are a ton of distributions out there; this one is IBM's, and it's the only one that comes with the Eclipse OpenJ9 JVM. It's available at no cost, it's stable, and IBM puts their name behind it. It comes in two editions, open source and certified, the only differences being the licensing and which platforms are supported. And if you're wondering where the name Semeru comes from: Mount Semeru is the tallest mountain on the island of... anyone know? Java. There you go. See how that makes sense? If I had a T-shirt, I would have given you one. All right.
From the perspective of a client talking to this new JIT server, these are the advantages. On the provisioning side, it becomes very easy to size our containers: we don't have to worry about compilation spikes anymore, so we level-set based simply on the demand and the needs of the application itself.
Performance-wise, we see improved ramp-up time, basically because we offload all the compilations, and their CPU cycles, to the JIT server. There's also a feature of the JIT server called the AOT cache: it stores every method it compiles, so when another instance of the same containerized application asks for that method, the server just returns it, with no compilation needed. From a cost standpoint, any time you reduce your resource requirements you save money, and with the efficient auto-scaling I mentioned earlier, you only pay for what you need. And for resiliency, remember that each JVM still has its local JIT, so if the JIT server goes down, the client can keep going.
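Going by the OpenJ9 documentation, enabling that AOT cache looks roughly like this; the cache name and jar are placeholders, and depending on the OpenJ9 release the client may also need -Xshareclasses enabled:

```
# Server side: keep compiled methods in an in-memory AOT cache
jitserver -XX:+JITServerUseAOTCache

# Client side: request and reuse cached compilations under a named cache
java -XX:+UseJITServer \
     -XX:+JITServerUseAOTCache \
     -XX:JITServerAOTCacheName=myapp \
     -jar myapp.jar
```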
Here's an interesting chart showing where the savings come from. In this experiment we took four Java applications and sized them correctly for the memory and CPU they needed, running all those load tests to figure out what the numbers should be, with multiple instances of each. The color indicates the application, so you can see all the replicas, and the relative size of each square shows the container size. We used OpenShift to lay it out for us, and it came out to three nodes to handle all these applications and instances. Then we introduced the JIT server and ran the same test. The JIT server is the brown square, the biggest container in the nodes, but notice that the size of every application container goes way down. We have the same number of instances in both cases, but we've saved 33% of the resources. And if you're wondering how they perform: no difference. The orange line is the baseline and the blue is the JIT server, and in the stable state, once they've ramped up, the performance is exactly the same, while we're again saving 33% of the resources.
Now let's look at the effects on auto-scaling in Kubernetes. Here we're running an application with the auto-scaling threshold set at 50% CPU. All these plateaus are moments when the autoscaler launches another pod, and you can see how the configuration with the JIT server, in blue, responds better: shorter dips, faster recovery, and better performance overall. It also avoids the false positives I talked about: the autoscaler is no longer tricked into treating CPU load from JIT compilations as real demand, so you get better auto-scaling behavior.
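For reference, a 50% CPU threshold like the one used here corresponds to a standard Kubernetes HorizontalPodAutoscaler. This is a generic sketch, not the experiment's actual configuration, and "myapp" is a placeholder:

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: myapp
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: myapp               # placeholder: the client-JVM deployment to scale
  minReplicas: 1
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 50   # scale out when average CPU crosses 50%
```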
So when should you use it? When the JVMs run in a memory- and CPU-constrained environment. As for recommendations: use ten to twenty client JVMs per JIT server, because remember, the JIT server takes its own container. And since the communication goes over the network, add encryption only if you absolutely need it.
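If you do need that encryption, the OpenJ9 documentation describes TLS options along these lines; the key and certificate paths are placeholders:

```
# Server side: present a private key and certificate
jitserver -XX:JITServerSSLKey=key.pem -XX:JITServerSSLCert=cert.pem

# Client side: trust the server's certificate
java -XX:+UseJITServer \
     -XX:JITServerAddress=jitserver.example.com \
     -XX:JITServerSSLRootCerts=cert.pem \
     -jar myapp.jar
```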
Some final thoughts. The JIT provides a great advantage, that optimized code, but compilations do add overhead. So we disaggregate the JIT from the JVM and get JIT compilation as a service. It's available in Eclipse OpenJ9: the technology is called the Eclipse OpenJ9 JITServer, and it's also called the Semeru Cloud Compiler. It's available on Linux for Java 8, 11, and 17. It's really good with micro-containers, which is the main reason I'm bringing it up today; it's Kubernetes-ready, and it improves your ramp-up time and auto-scaling. And here's the key point I'll end with: this is a Java solution to a Java problem. I talked at the start about that sweet spot. A lot of companies and vendors are trying to figure out how to hit it, and a lot of their approaches involve doing things outside of what Java is all about, running the JVM and the JIT. This stays a Java solution to your Java problem. That's it for me today. That QR code will take you to a page with a bunch of articles on how to use it, plus these slides and other good materials. Thank you very much.

Any questions for Rich? Yes, there's already a question from the right there. Hi, so it sounds amazing. It's amazing, it really is amazing.
Are there situations where I shouldn't, or couldn't, just rip out my current JVM and use OpenJ9? Well, why wouldn't you? OpenJ9 is a perfectly viable JVM, and there's nothing unique about it that makes you change your code; it's simply the JVM that plugs into OpenJDK, the OpenJ9 JVM. A question from Simon in the back there. Okay, here it comes. The first thing was that we noticed the JVM dump format was different, so we couldn't use Java Mission Control or the Flight Recorder; maybe that has been fixed? I think so, because I've seen examples of using those tools in tests, but you'd better check that. And the second thing was that there were no builds for ARM architectures.
Yeah, okay, that may be a problem; you should go and check the latest coverage there. Let me get to the man in the back. With your AOT cache, how do you deal with things like profiling information around these optimizations? The way the AOT cache works for the JIT server, it keeps all that information, and the profile has to match the one from the requesting JVM; if it matches, it gets used. The clients also have their own local cache, which they keep until they go away, and when you start a new instance of the app, you start with a brand-new, flushed cache. There were more questions? How would you compare these reduced-footprint Java micro-containers to something that's compiled down with GraalVM? Yes, that's the trade-off we were talking about: if you go static, you get a smaller image running statically, but you lose all the benefits of the JIT over time. That may be a great solution for short-lived apps, but the longer your Java app runs, the more you benefit from that optimized code. Another question: for Eclipse, they aren't able to release certified binaries because they can't access the TCK certification process? Yeah, the whole TCK issue seems to be more between IBM and Oracle; our own tests are going to encompass all the TCK material.
So basically you release all the runtimes? Yes, OpenJ9 is managed by Eclipse, but 99% of the contributions are from IBM. It's a big part of their business, so it's not going anywhere. If you have to do open source, I think this is the best of both worlds: it's available and open, you can see it, but you also know there's a vendor whose business is built on it, who isn't going anywhere and will put a lot of resources into making it better. And I'll tell you right now: beyond the JIT server, we're going into beta on InstantOn, which is based on CRIU, if you've heard of that. We'll be able to take snapshots of those images and put them in your containers, and they'll start up in milliseconds. So the JIT server handles the ramp-up time, and InstantOn will handle the startup time; we're talking milliseconds, and that's coming out in the next couple of months or so. Anyway, thank you.

Well, if you don't have the JIT locally, aren't you running interpreted, which is like the worst of everything? No, it won't be, because you still use the JIT, just remotely. Oh, you mean locally: right, the local JIT will not be used. And by the way, the JIT server is just another persona of the JVM; it's simply running under a different persona. Okay, thank you very much. Okay, thank you.