Automated Machine Learning With Keras
Formal metadata
Title: Automated Machine Learning With Keras
Series: EuroPython 2021 (talk 29 of 115)
License: CC Attribution - NonCommercial - ShareAlike 4.0 International: You may use, modify, and reproduce the work or content in unchanged or modified form for any legal, non-commercial purpose, provided you credit the author/rights holder in the manner they specify and pass on the work or content, including in modified form, only under the terms of this license.
Identifiers: 10.5446/58814 (DOI)
Transcript: English (automatically generated)
00:06
All right. Good morning, everyone. Welcome to Day 2 again of EuroPython 2021 and to the Data Science MiniConf. So with me today, we have Andre from Trayport, who's going to be talking about AutoML with Keras. Hey, Andre.
00:25
Hi. Hi, everyone. All right. Just a little bit about him. He is a data scientist with Trayport, and he's trying to help build AI for energy trading. And before becoming a data scientist, he
00:42
used to be an astronomer. So without further ado, over to you. Thank you very much. And yeah, thanks, everyone, for coming to my talk on automated machine learning with the Keras library. Sorry for potentially not maintaining eye contact.
01:01
My slides are over here. So before we actually start, I would encourage you, after this talk, to also go check out a talk from yesterday afternoon by Katharina and Matthias about automated machine learning with auto-sklearn. It was really excellent; some of the
01:22
topics naturally overlap, but it's good to give you a bit more of an overview. So like we said, my name is Andre. I do data science at Trayport. We are proud to sponsor this year, and so we are looking forward to chatting with you or even playing some games with you in our
01:40
booth. So come and visit us. And with that out of the way, we can start. Now, like many people, from time to time, I like to think of myself as a cynic. And as such, a cynic can understand or perceive human progress as coming in two steps. These would be laziness and
02:04
chasing away disappointment. Now, humans are lazy beings, and there is a lot of things that we don't like to be doing and that we would like to outsource to something or someone else. And every once in a blue moon, there comes an inventor or a genius or something like that
02:20
who claims to have invented a machine, a tool, or whatever that helps to take this pain away to deal with it, to help outsource it. And everyone is very happy how progressive we are. That is until the point where the second step comes, people start using it and realize it's not quite what's being sold to them and that a lot more progress needs to happen before
02:45
it catches up with the expectations. Now, with quite a bit of tongue in cheek, we could take the example of computers. Now, I believe since pretty much Gottfried Leibniz, people have been thinking about building a thinking machine that would be doing the
03:03
thinking for us. Thinking is hard. It's easy to be wrong. It's embarrassing, et cetera, et cetera. This was what people wanted to outsource. Now, in the 20th century, someone actually built a computer, a thinking machine, and people very quickly realized that even though, okay, these machines think in one way or another, it's kind of awkward and it's super difficult to communicate
03:26
to these machines what we want them to think about, what we want them to do. And even today, after decades and decades of development, even though we got much better at communicating with these computers, we still need basically experts to do this for us efficiently.
03:44
Now, we could similarly shoehorn this story to machine learning. The laziness starts with the modeling. Again, modeling is hard. Thinking about how the nature on something works is difficult. And here comes machine learning that promises to take this pain away. You take your
04:01
data, ideally gathered by something or someone other than you, and you give it to your computer who will figure out everything it needs and enable you to make predictions what will happen. Now, most of us probably know that even though machine learning has been around for a while,
04:23
for example, the backpropagation algorithm for neural net training has been around since the 80s, it's only relatively recently, let's say the last 10-ish years or so, that machine learning has really been gaining traction. And we are getting better at that.
04:41
But again, and this is the common theme, you need experts to do this efficiently. And this is kind of the theme that with all of these things, specifically also the machine learning, we would like to kind of give it, let's say, to the masses so that everyone can more or less easily be doing that. And this is what the automated machine learning is
05:05
striving to do. And it's something that we are, let's say, starting to make a first step towards. So in this talk specifically, we will be talking about a very important part of this automated machine learning and that's automating the hyperparameter search. Now, just to be on the
05:24
same level, everyone, we probably all know that machine learning models have parameters and hyperparameters. Parameters are the numbers that directly interact with the data and give the predictions in one way or another. And their values are set automatically during
05:42
the model training. Now, hyperparameters, on the other hand, are set by some kind of an outside authority before the training and they don't change. They define the model architecture. Now, usually there are many parameters, but only a few hyperparameters. We have a very simple
06:06
example here, the schematic of a multi-layer perceptron, a very simple neural net. Data flows from the left to the right. And if you have seen this structure, you probably know that each of the arrows here is one parameter. Now, if you count as it is in this picture, you have 64
06:24
parameters. So a lot of them, but depending on how you count, but you could argue that you have only three hyperparameters, which are the dimensions or the sizes of these middle layers. Neural nets again come in layers. So three hyperparameters, which would be five,
06:43
four, and four. And we can change these hyperparameters, which would change the number of parameters. Now, why do we want to pick good hyperparameters? Well, this influences the model performance. If we pick them well, our model might perform well. If we
07:02
choose them badly, it will perform badly. The question is how to do this, and sadly, it tends to be more art than science. Now, there are several algorithms that might help us to do this, and here I list a few. There is, for example, a grid search in which you pick a set of values for each
07:25
of your hyperparameters, and you check the combinations of each. So you build a model with a different set of hyperparameters, you train it, you check the performance, and you repeat, and in the end, you pick the one that performed the best. It's schematically kind of shown
07:40
in this bottom left colorful picture, where we look at all the combinations. The colors are, of course, the final model performance. Similarly, there is random search, where we pick the hyperparameters, obviously, at random. Now, this is good, but also bears the danger that we might, for example, miss the area where the performance would be the best. For example, here in the bottom right figure,
08:04
it might be this red peak that no model actually covers. Now, this tries to be addressed a little bit with Bayesian optimization. Now, here the hyperparameters are not picked entirely at random, but on the one hand, the algorithm remembers where within the
08:26
hyperparameter space it already looked, and it tries to explore more, and on the other hand, it also tries to predict where it will be promising to check next, where it expects a good performance to lie, and exploit that. There is also the Hyperband algorithm, which is a parallel
08:46
algorithm. It starts with a lot of choices for these hyperparameters, and it tries to very quickly decide during the training which of the hyperparameter sets
09:00
are not promising, are probably not going to give good performance, and it just stops them and reallocates the computing resources to the ones that it deems promising. So these are some of the algorithms, and now let's look at how we might actually use them. So as the
09:20
name of this talk indicates, we will be dealing with these algorithms within the Keras library. Keras, as we all probably know, is a deep learning library in Python that allows us to build deep learning models, so neural networks. Now to use these algorithms, we will be using
09:40
two other libraries, namely Keras Tuner and AutoKeras. I encourage you to go check them out, give kudos to the authors, and basically the main message from this talk is go check these things out, play with them, they are really nice. Keras Tuner is a part of Keras itself,
10:02
so there is a chance that you probably already have it. It contains within it all the algorithms that we have just discussed on the previous slide. AutoKeras is a wrapper around Keras Tuner, and it adds a few more features, or a lot more features, that enable
10:24
a user to automate some of the stuff. Now in this talk, what we will be doing is we will be trying to solve, or at least give a hint of a solution of, a toy machine learning problem, and we'll look at three fictional people with different levels of machine learning knowledge,
10:43
and we'll be looking at how these people, with their knowledge, might use these two libraries to solve this problem. And this actually implicitly brings us to the start of the talk, where we were talking about needing experts to do machine learning. Well, this is
11:01
basically taking the first steps towards allowing us to do machine learning even if we are not really experts. So let's see how this will work. Our toy problem will be one that many of us have probably interacted with already in one way or another. It will be the MNIST digit
11:24
classification. So we will have a ton of these little images, like shown here, these individual stamps, each of them having one handwritten digit on them, and these three fictional people from the previous slide will be trying to build models that assign the correct digit to this image. So
11:43
all the images from the first line will be assigned the label zero, all from the second will be one, et cetera, et cetera. Now, there is also an example notebook coming with this. This is the link. The link is also in the talk abstract, and again, I encourage you to go and check it out.
12:02
It contains everything that I talk about in this talk. It's a bit more verbose. It has some more examples, and you can have a play, edit it, and see how it all works. And now assuming that someone is still around after this, we can actually start. So the first person that we will be talking about
12:24
is a data scientist. So this person has a decent machine learning knowledge. They claim it's good, and that means that they more or less know what they are doing. Now, this person in solving this classification problem would like to retain a lot of control over the process.
12:43
This, for example, allows them to be efficient when dealing with possible issues. They know what they've done, and if something goes wrong, it's easier to identify. And also, by looking at how the whole process of hyperparameter search is going, they might, for example, gain new insights, what works, what doesn't work, what tends to happen,
13:02
et cetera. So this person might have somewhere in their code this kind of a function. The function builds a Keras model, as its name suggests. It has no inputs. It does a few things we will look at, and then it returns a Keras model ready to be trained. Now, let's look
13:24
into a bit more detail what happens in the individual steps. So to define the layer, the data scientist might use the Keras functional API. And this is how it works. You essentially define each individual layer as a variable, and you use some kind of a Keras object.
13:45
Then you assign some properties to this layer, for example, these filters and kernel size, which define exactly these hyperparameters we are talking about. And then you connect it to some kind of a previous layer, in this case, the input layer. The similar thing then works with other layer types. And in the end, we have the whole neural net ready.
14:07
The next step, which I call preparing the model, is simply connecting this neural network to the outside world. So defining inputs where data will be coming in, and defining the outputs where the predictions from the model will be coming out of. And the final step is the
14:23
compilation of the model, where, among other things, we, for example, determine the optimizer, which is the algorithm that will be driving how the model learns. And this can have a hyperparameter of its own. For example, here, the learning rate with a value of 0.001.
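Such a build function might look roughly like this. This is a minimal sketch, not the speaker's exact notebook code; the particular layer choices (one convolution, one pooling layer) are illustrative.

```python
# Sketch of a hard-coded Keras build function. The filter count, kernel size,
# and learning rate are the hard-coded hyperparameters discussed above.
from tensorflow import keras
from tensorflow.keras import layers

def build_model():
    inputs = keras.Input(shape=(28, 28, 1))  # one-channel MNIST images
    x = layers.Conv2D(filters=32, kernel_size=3, activation="relu")(inputs)
    x = layers.MaxPooling2D()(x)
    x = layers.Flatten()(x)
    outputs = layers.Dense(10, activation="softmax")(x)  # one unit per digit

    # "Prepare" step: connect inputs and outputs into a model
    model = keras.Model(inputs=inputs, outputs=outputs)
    # "Compile" step: the optimizer's learning rate is itself a hyperparameter
    model.compile(
        optimizer=keras.optimizers.Adam(learning_rate=0.001),
        loss="sparse_categorical_crossentropy",
        metrics=["accuracy"],
    )
    return model

model = build_model()
```

Trying a different set of hyperparameters means editing the numbers inside this function by hand, which is exactly the awkwardness discussed next.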
14:43
So we have a function like this kind of sketched out here. And you will, of course, understand that this is a bit awkward when you want to try a different set of hyperparameters. You need to go to the function, you need to at least update these numbers, or maybe even add new layers and update a lot of things. So now let's
15:05
try having a look at how we could update this function so that we can use it in this automated hyperparameter search. And for this, we will take three of the code snippets from the previous slide and see how we will edit them. First, the function itself. Before, it
15:23
didn't have any inputs, but now we will add this input we will call HP, which stands for hyperparameters. And this allows us to go inside of the function and make these definitions of other things a bit more verbose. So let's see, for example, what happens to the convolution
15:43
layer. We see that the hyperparameter definitions changed, expanded a little bit. And instead of the hard-coded numbers that we had before, now we have some kind of placeholders. For filters, where we had 32, now we have this hp.Int, so we have some kind of an
16:02
integer value. We call these convolutional filters, and we say that we want them to be at least 16, at most 128. And potentially, we also want this integer to be increasing in steps of 16. Similarly with the kernel size, which started at the fixed value of 3; now
16:22
instead, we want the algorithm to choose one value from one, three, or five. And similarly, you can do this with other layers, with floating-point numbers, et cetera. Now, these hyperparameters are not just limited to the neural net layers, but to pretty much any kind
16:41
of parameter within our build model function. So for example, our learning rate: again, instead of a fixed value, we can make it variable to be chosen. So again, now we have sketched out how we would update our function, and we can proceed to the hyperparameter search.
17:00
So our data scientist will be using the Keras Tuner library, and for this example, they will be using Bayesian optimization, so one of the algorithms that we discussed a few slides back; you can, of course, use the others. The data scientist will take our new and updated function, and then they can define a tuner, which will be an instance of this Bayesian optimization,
17:25
where the hypermodel will be our model-defining function, and we also need an objective, so some kind of a criterion that, if we have two models, can tell us which one is doing better. Now, this hypermodel doesn't need to be a function. It can be a class, as we will see,
17:43
but it kind of works the same way. And anyway, now with the tuner defined, we can load the data. Fortunately, the MNIST data comes with Keras, so we can really load it very easily, and we can start a search with this tuner. Now, we can just quickly
18:04
switch to a notebook. This is the notebook that I mentioned that you can download and play with, and let's just quickly run through some code to see how it works or to repeat. So we imported a lot of stuff, and now we loaded the data. This goes a bit further than what we had in
18:23
the slides, but essentially, this is what the data looks like. So for example, the 11th element is this kind of an image, and we know it's the number three, like this. So in this cell, we then define the class that we call the MNIST hypermodel, but this is essentially what I
18:42
mentioned. This is the class instead of the function, but it contains this build function that does what we've seen in the slides. For example, in here, we define the convolutional layer and you are probably familiar already with this line. The filters will be HP.int with some particular
19:04
values. So this is the same thing. Now we define the tuner, just as we've done in the slides. The only difference here is that we define the hypermodel as this class. This is it. We can just quickly look at what hyperparameters we will be optimizing. The output is not important,
19:24
but you can study it afterwards. And then you can start this tuner.search with the data that we loaded before. And this starts giving some kind of an output that is potentially very useful for us. So it's now running the first trial. It's giving us the set of hyperparameters
19:45
that it's currently using, and it's giving us the progress bar that you are probably familiar with if you've ever tried some training with Keras. And this will proceed in a series of
20:05
loops, giving you a lot of information. Now, this is taking a while. So actually, let's go back to the slides and see what might happen later. So let's give it a second to load. Yes, here we are. So we started tuner.search. In the notebook,
20:27
it takes a bit more parameters, but this is really the only thing you need to do. And the output that it ends up with later is this. So we've seen that it's running the trials. It also records how it did the best. So for this particular screenshot, it finished with 90%
20:46
accuracy, but at some point previously it almost reached 95%. It reached this 95% with this best set of hyperparameters so far, and currently it's running with this set. So that's quite
21:00
clear. And in the end, after this is done, the data scientist can extract the best model and use it for their purposes. So we've seen that this person can offload a lot of the hyperparameter search, but still retain a lot of control over what's going on. Now let's switch to the second person that's going to be trying to solve this problem,
21:24
and this I'm calling the technical manager. This person knows some coding and also knows machine learning, not at such a detailed level as our data scientists, but they know some kind of high-level principles. And we will see that these tools actually allow the
21:42
technical manager to translate these principles directly into the code, more or less. So how will this work? Well, the technical manager will be using AutoKeras. This is the wrapper around Keras Tuner. And now they will be defining some kind of hypermodel in a way that's
22:01
similar to how you define a model in Keras. So you will be kind of defining these individual layers or blocks that will do the thing. So the technical manager knows that they are dealing with images, so they will start with an image input. They also know that you need to
22:22
normalize data in one way or another to ideally get a good performance of your model. So they will add this normalization block, connecting it to the input node. Then they know that it's a computer vision problem and convolutional neural nets are good at solving those. So they will add a convolutional block just like this. And finally they know that they
22:45
are dealing with a classification problem, so they will end up with a classification head. And this is pretty much it. So high-level concepts translated into the code. Now you will notice we put no options there, but if you actually go to AutoKeras, it allows you to go into quite
23:03
a lot of detail, telling the model what to do or telling the algorithm what to do. If you don't do this, there are a lot of default values that will be chosen for you. And this is it. Now they can define what is called the AutoModel, similarly to Keras. So they
23:22
define how this will communicate with the outside world, what the inputs and the outputs are, and again defining the objective which will be the validation accuracy. And with all of this in place, they can just go with the data, they load them just as we did
23:40
in the notebook, and they can run fitting immediately. No normalization needed, because now it's a part of the model. You will again notice that this is very similar to what happens with Keras, but now it's of course training a lot of models under the hood and giving you just the best one. And once they start running this, again
24:07
the familiar output pops up with the trials, the best trial so far, the hyperparameters. Now you see that there are a few more of them and they are a bit more complicated, but essentially, broadly, it's the same, and the progress bar at the bottom. So the technical manager is able to solve
24:27
this problem even though they are not exactly a machine learning expert. And now let's get to the data scientist of the future. Now I am labeling this person like this very reluctantly, because the idea is that this person doesn't actually know anything about machine learning.
24:43
So it's a bit of a scary future, but if you want you can replace this with like your parents or your grandparents. It should work the same. And this actually allows us to see the first steps towards the machine learning as it was more or less promised to us. So this
25:01
effortless thing. Now admittedly our problem lends itself well here, but it's a good example. So this person will use AutoKeras as well, but they only know the setting of the problem. So they know they want to build a machine learning model.
25:21
So they specify one. They know they have images on the input, so they will specify that, and they know they want to classify these images on the output. And that's really it. That's all they do. They load the data like this and they can again run this automodel.fit
25:41
just as we did before. And it just works. Now it turns out that this can be made even easier, because image classification is such a common problem. So there is already an image classifier within AutoKeras. And similarly, there are other such hypermodels for
26:02
other kinds of common applications. But anyway, this is defined. Once they start running it, it again gives them the by now familiar output. And they managed to do this without really knowing much about machine learning. And so this is really it. We've discussed a bit about automated machine
26:25
learning, specifically the automating of the hyper parameter search. So how it can make life easier for data scientists, but also how it can give the non-experts easier access into the
26:42
field and maybe get them excited because they can do powerful things very early on. And we played a bit with the Keras tuner and the AutoKeras libraries that are definitely worth exploring. So thank you very much for your attention. Fantastic talk. I truly loved
27:06
your talk and especially the fact that you went into so much detail explaining the tuner, the whole Bayesian optimization. I personally loved it. I think we have one or two questions.
27:24
So I'm going to put those up and we can discuss those. We have about three minutes to discuss those. So the first one is, does AutoKeras keep track of training for each combination of hyperparameters and use those for the next runs? Or does it go through all the possible
27:41
combinations? So, I mean, it depends. So obviously, if you go beyond grid search, you can't really go through all the possible combinations, unless you give it the choice, in which case you can. You can always specify the number of trials you want to have. And this
28:12
yeah, to try all of them, but I don't think it does that automatically. Now I'm not sure, to be honest, whether it keeps track of everything. I just think it really keeps track of the best
28:23
one and then you can get the best model out of it in the end. So I think that's the answer to that. Perfect. Since you mentioned about this other talk from before about AutoSKlearn, there was a question about how does AutoKeras compare with AutoSKlearn? And maybe I will
28:42
extend that question to ask, when would you use what, if you were to use AutoKeras? So, I mean, to be completely honest, I'm an expert of neither, maybe on AutoKeras, but there is definitely overlap. So for example, AutoKeras allows you to do some automated
29:05
hyperparameter search with scikit-learn. Now from yesterday's talk, I got the impression that auto-sklearn is a bit more advanced in terms of also data wrangling and, for example, using
29:22
model ensembles. So it actually keeps multiple models and combines their predictions. I think in this case, AutoKeras just gives you the best model in the end, and you do with it what you want. So there are definitely comparisons, but yeah. Right. Now that makes sense, and
29:46
fair enough. Probably one more question. So there can be multiple hyperparameters to be tuned, right, and this might be a very compute-intensive and time-intensive job. So is there a way to run these in parallel or in a distributed way to perhaps somehow
30:05
reduce the compute time and the overall time it takes? Yes, it is possible. So AutoKeras, like I said, is a wrapper around Keras Tuner, which is a part of Keras, which is then a part of TensorFlow. So essentially whatever you can do with TensorFlow, which includes all of these
30:24
things that you just mentioned, is possible to do with these tools as well. And the documentation is really nice, with tutorials. I think I also saw bits that describe how to do this. Perfect. Well, again, thank you so much for answering all the questions and lovely talk.
30:45
What I would recommend is there were a lot more questions. So I would recommend if you all can move over to the breakout room for Parrot and for the Parrot track and maybe take those questions over there. And thank you again. Thank you.