
When to use Machine Learning


Formal Metadata

Title
When to use Machine Learning
Subtitle
Tips, Tricks and Warnings
Number of Parts
132
License
CC Attribution - NonCommercial - ShareAlike 3.0 Unported:
You are free to use, adapt and copy, distribute and transmit the work or content in adapted or unchanged form for any legal and non-commercial purpose as long as the work is attributed to the author in the manner specified by the author or licensor and the work or content is shared also in adapted form only under the conditions of this license.

Content Metadata

Abstract
Artificial intelligence, and machine learning in particular, is one of the hottest topics in tech and business. I will explain the core of machine learning, and the main goal of this talk is to help you judge the likelihood of success whenever someone yells "I know! Let's solve this using machine learning!". I will also provide tips and tricks on how to increase the odds that such projects succeed. The second part of the talk covers two open-source Python projects I've created, as well as a project I'm working on around trading cryptocurrency, and their relation to machine learning. Specifically, I will explore the challenges and findings in making these cases work.
Transcript: English(auto-generated)
All right, glad that you're all here. So my talk is going to be about when to use machine learning. First, something about myself: who am I? I studied Methods and Statistics, a research master's.
And during that time, I got really interested in machine learning. So even though I was focusing on statistics in my studies, I was really interested in artificial intelligence. I'm also an Intel AI innovator.
What I love is open-source innovation, and particularly human-machine interaction. So how can computers help empower people? And I want to thank Jipes for enabling me to speak here
today. I'm a senior data scientist there. I've been working there for almost four years at 15 different companies. Jipes itself has 35 data scientists. And in my four years, I've worked on blockchain,
a social robot from which you can see the picture, natural language processing, for example, chatbots, or analyzing the news like Reddit or Twitter, and a lot of purely machine learning predictive
and that kind of projects. So usually, I say something about the open-source projects that I've done, but I want to keep them for the talk, actually. I don't know if anyone's familiar with the Gartner Hype Cycle. It's an interesting one to follow because it
tracks where different technologies are in the hype trend. And this is where you can find me: I like to work on these kinds of innovation topics. I'm also on GitHub, so you can already have a look there.
So today, I first want to explain what machine learning exactly is, not from a formula perspective, but just to give you an intuition. So no formulas. And afterwards, I will tell you when to use machine learning
because you should not always use it. That's a spoiler. Yeah, I'm sure that a lot of you already have ideas what it might be or have applied it, cloned some GitHub code and then just run some example code or maybe a bit further than that.
But yeah, I hope something of this will be inspiring to you. So let's start with what it is. So this is an example where we have the simplest possible data. It's kind of sociology data that was ironically my major.
So I guess, does anyone want to guess what the numbers should be there, or what would we predict as people here? Yeah, I think that it's pretty clear to see immediately that's what we would think.
Well, so yeah, that's indeed what you would like a computer also to predict, right? We have this intuition, but how can a computer learn that? So basically, you want a computer to be able to generalize from the example.
So it would, for example, learn on the first three examples and be able to generalize to other examples that it hasn't seen before. So in this example, you would train on the first three and you would hope that it can predict the other two. So let's say this is your whole data set.
You're going to split it up, and then you would learn on one part and validate your model on the other part. And actually, I wanted to show you this also, the example in Python code. Let's see how the switching is going. So I don't know who of you is familiar with scikit-learn,
but it's a very good library, which helps you develop machine learning models. And it popularized the fit, transform, and predict API. So as long as you're still kind of making a pipeline,
you're doing fit and transform. So you learn how to apply it. For example, with the age and the income, it's a function that you're trying to learn: you have an argument, someone's age, and then you try to predict the income. And scikit-learn is really good for this kind of stuff
where you want to train on some data. And then, yeah. So I don't know if it's readable. I guess I'll zoom in a bit.
So let's first then create some data. This is usually how you name the variables. So the x is what goes in.
And y is what you want to predict. Sorry? All right, thanks. So you can evaluate it. And so this is how the variable would look like.
And the y variable looks correct, right? So now we make a model. And the idea is that you fit it on the data.
So you want the first three examples. You want to learn on that. And then you can predict the rest. So what you can see here is you're using the first three examples,
then you're predicting the last one. So this is like a way to learn a linear relationship. But the interesting thing is that in machine learning, there are a lot of different ways that you can have a model. And this is the simplest variation where, you know, like numerical data.
And you will basically any model will have some kind of special like features that it learns, some parameters, that's the thing that you're trying to learn. So here you can see that the value is 1,000, which means if you multiply the age by 1,000, you'll get income.
So actually, the interesting thing is, of course, what would you do for numbers that it hasn't seen? And in this case, if you say 70, it will predict 70,000, obviously.
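The demo described above boils down to a few lines of scikit-learn. The exact numbers from the slides aren't in the transcript, so the toy age/income data below is assumed:

```python
from sklearn.linear_model import LinearRegression

# Toy data: X is age (the input), y is income (what we want to predict).
X = [[20], [30], [40], [50], [60]]
y = [20000, 30000, 40000, 50000, 60000]

# Train on the first three examples only...
model = LinearRegression()
model.fit(X[:3], y[:3])

# ...and check that it generalizes to the examples it hasn't seen.
print(model.predict([[50], [60]]))  # close to [50000, 60000]
print(model.coef_)                  # roughly [1000]: income = age * 1000
print(model.predict([[70]]))        # roughly [70000] for a new age
```

The learned parameter (`coef_`) is exactly the "multiply age by 1000" rule mentioned in the talk.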
OK, so this was a numerical example. Now for an idea of decision trees, which you can always draw as "if this, then that". And there are two things here: this makes it so that you can have non-linear kinds
of transformations or predictions, and in the end you're predicting a yes or no, so not a numerical value. That's also possible. And another example of those is predicting spam. So "hi John, how are you?", we would say that's not spam.
And "click this link for free", that might be spam. Unless your name is John, then probably the first one is spam too. So what could we use machine learning for? Rather than write a lot of if-else statements, you can learn the logic from existing output examples.
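A sketch of that idea: instead of hand-writing if-else rules, fit a decision tree on labelled messages and let it learn the logic. The example messages below are invented:

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.tree import DecisionTreeClassifier

# Labelled examples stand in for hand-written if/else rules.
messages = [
    "hi john how are you",
    "lunch tomorrow?",
    "click this link for free money",
    "free prize click now",
]
labels = ["ham", "ham", "spam", "spam"]

# Bag-of-words features feed a decision tree that learns the if/then logic.
clf = make_pipeline(CountVectorizer(), DecisionTreeClassifier(random_state=0))
clf.fit(messages, labels)

print(clf.predict(["free money click here"]))  # ['spam']
print(clf.predict(["hi john"]))                # ['ham']
```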
So the steps here are: find a problem, do some pre-processing (in our example, we didn't have to do any), find a model that works, and then use this best model in production. Now I want to bring your attention to the following: the significance of machine learning. I don't know if you know Venn diagrams,
but this is how machine learning is. Or actually, I think it's more like this: machine learning is just a small part in this. You want to automate some process, and machine learning can be part of that.
So now it's time to go to the examples. So this is a library that I put online. And I wanted to explain to you how I came to this idea. I had installed Arch Linux on my MacBook. I would not recommend that.
It's a horrible idea. Don't try it at home. But while programming at night, I noticed that there's a difference between the colors. You have those tools that know it's dark, so they apply some orangey filter.
I never got used to that one. But what I noticed was that the browser, which is mostly white, was still really bright compared to when I looked at my editor, which was really black. So I thought it would be cool to take that into account. And I also thought it would be cool
instead of having a lot of configuration, that it would be cool if you would have actually no configuration at all, and you would be able to still be able to do something about it. So yeah.
This is example data that brightml takes. And maybe it's fun to show it. I'm not really happy about switching because it's a bit slow. So yeah, that's a new version, right?
Then you always get these things. So whenever I switch between a screen, I see that it applies new brightness. For example, the last one is 73. And going back to, I don't know if it's visible on this screen. I guess not.
Now it starts doing it. OK, well, it's mostly designed for a laptop. So I only experienced here if it would work on external monitor. But the idea is that you need to collect features that can help you predict it. And these are the ones that I've got.
So you see here the new brightness. That's when I'm raising the brightness on my computer. It changes this value. It's a file actually on your computer. And that changes. And whenever a change is being made to that, then it's being recorded.
And it records these kinds of features: my battery power level, which application I'm in, but also that pixel value that I wanted. That's a value between 0 and 255. So the idea is I want to have a model that could potentially learn the difference between high values and low values.
And maybe the time is important, right? Maybe location. And last one, ambient light. That's like the sensor in your laptop. That's also a useful feature, of course, because yeah, that already does some kind of like, are we in the dark or not? So I had a question. Like, does anyone else have ideas,
like what you would want to use as input here? Well, I guess I got quite some here. But yeah, I actually found out one more yesterday, which was when I was boarding the plane and I couldn't charge my laptop anymore.
I know it has a bad battery because I'm running Arch Linux on a MacBook, which you shouldn't do. So I would have only like 45 minutes battery or something. And even though my battery was completely full, I still wouldn't want my brightness to be full. But from the model's perspective,
it's going to be like, yeah, no problem with battery. Let's just do full brightness because it's day. So yeah, it's a bit of contradictory example, right? And as a person, you would be able to, like you can learn it, but it's not going to generalize because when I'm ever going to be again in that situation,
it's going to be so rare. And this is one of the main problems with machine learning, these kind of rare situations. So yeah, the main takeaway here, actually, I didn't mention it yet, but the cool thing about it is I don't have to do anything other than just change the brightness like normal.
And over time, I should just notice that I need to change it less and less, right? So that's really the cool part. You don't have to do this whole process of collecting data and whatever. You know, I want to change it. It will work for that time. You go to a different situation. So it's zero config while it's still personalized.
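A rough sketch of what such a zero-config model could look like. The feature names and values below are invented to mirror the ones mentioned (battery, screen pixel value, ambient light, time of day); brightml itself may be implemented differently:

```python
import pandas as pd
from sklearn.ensemble import RandomForestRegressor

# Hypothetical log of manual brightness changes, one row per change.
log = pd.DataFrame({
    "battery":    [0.9, 0.8, 0.5, 0.4, 0.2],
    "pixel":      [240, 230, 60, 50, 40],   # bright browser vs dark editor
    "ambient":    [800, 750, 30, 20, 10],   # ambient light sensor reading
    "hour":       [10, 11, 22, 23, 1],
    "brightness": [90, 85, 35, 30, 20],     # what the user actually set
})

X = log.drop(columns="brightness")
y = log["brightness"]

# Every manual brightness change becomes a training example: zero config,
# and the model gets more personalized over time.
model = RandomForestRegressor(n_estimators=50, random_state=0).fit(X, y)

# Dark screen, little ambient light, late at night -> a low prediction.
pred = model.predict(pd.DataFrame(
    {"battery": [0.3], "pixel": [55], "ambient": [15], "hour": [23]}))
print(pred)
```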
And I think that's really cool about it. But the thing is still, you have to think about like which features are available and which do I want to use. But hopefully, you know, this would allow people to create their own brightness setting without too much effort. And brings me to my next one, which is another library.
So it uses a Wi-Fi signal to detect where you are. And here we go. Let's do it here. Um, EuroPython fin tree.
Let's see if it works. So I have it here. See? It's in my bar. So my computer knows where it is. And I think it's cool because this one is using a smaller module that I've made
just to give an idea of how it works. So it uses the scanner, and you have a lot of Wi-Fi inputs, like SSIDs, and how strong your signal is. So that's how this one works: if you're sitting on your couch
or you're being somewhere else in the house, the computer could know the difference because the signal strengths are going to be different between the different access points that you have. And the interesting thing here again is that, oh, is that, don't take pictures when there's an empty slide.
No problem. So the cool thing here is when we look at this one, it's actually using it. So I really like this idea of creating small models that you can then use in like, other things.
And I think that's where we have to go with machine learning: create small models that can then be reused in other components. Because I do think that where I am is actually going to be a factor in what your brightness should be, or I mean, it can be predictive of it. So pluggability is key.
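The Wi-Fi fingerprinting idea behind a tool like this can be sketched with a nearest-neighbour classifier. The access points and signal strengths below are made up, and whereami itself may work differently:

```python
from sklearn.neighbors import KNeighborsClassifier

# Hypothetical scans: one column per access point, values are signal
# strengths in dBm as seen from each location.
scans = [
    [-40, -70, -90],   # couch: strong on AP1
    [-42, -68, -88],
    [-85, -45, -60],   # desk: strong on AP2
    [-80, -48, -62],
]
rooms = ["couch", "couch", "desk", "desk"]

# Nearest neighbour on the fingerprint vector: similar signal strengths
# mean you are probably in the same spot.
clf = KNeighborsClassifier(n_neighbors=1).fit(scans, rooms)
print(clf.predict([[-41, -69, -89]]))  # ['couch']
```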
And well, one of the ideas is easier to learn from your observation than to have to say something if my signal strength is this much, you know, like no one wants to do that, right? That's like really crazy. So, oops. Yeah, so yeah, so I've told you how to solve
some kind of X to Y problems, right? Some input to some numerical value or some like class. And so you can only solve these kind of problems, some X to Y problems.
It's pretty limited, you would think, but people have been very creative in posing their problems as X to Y problems actually. So for example, in computer vision, what would be any respectable presentation without a seemingly off-topic picture? So this is an example of the ImageNet data set.
It's something like, I think they noticed, I think they were predicting dogs and they looked at which ones were wrongly predicted and then they saw this. I mean, I think it's very, it's hilarious. Yeah, so, but how would you use this in a model, right?
How did I do it? Well, it's like different classes. In this case, is it a dog or is it not a dog? Zero or one, one is a dog. And on the other hand, you have pixels. And this is the crazy part. You have for every image, you have like 80 pixels by 80 pixels by three channels,
like red, green, blue. And like that is something like, I think it was like 19,000 or something data points. So you have to imagine each of those is a value between zero and 255. It makes it that you immediately have big data with some sizable number of images.
But yeah, so that's an idea on computer vision. And so I want to also talk about one case at, you know, from work. I've worked for multiple insurance companies and in one of them, we wanted to investigate what computer vision could do for them.
And in this case, they wanted to predict the amount of damage, like how much it would cost to repair it from damaged car pictures. And well, it took a really long time to get this data because obviously they have not prepared for it to be used like this. And so we started working with academic car data.
And so pretty much, yeah, like this is an easy one, but they have examples where you have like a tiny scratch and it's very difficult to see like the very small feature on the whole thing. So we started with, we actually made it a bit easier, this problem for ourselves, like more on like sides
of the car, like this as a start. So we were trying to predict which side of the car we're looking at and then you would, yeah. So it's kind of about localization and I did this over two years ago. But yeah, it's a good case because it's, you know,
it's very time consuming in a way. And so, but the problem is they didn't have enough data, you know, and the cool thing is though, there's something called transfer learning where you can use like an existing model. At that time we used the Inception V2 model from Google.
They were training it for like three months on a 30,000 euro machine, something like that. And the idea is that they made this whole network, and only in this last red one, completely on the right, is the actual prediction happening. Like, you know,
and in that case they use the ImageNet. So it's like, this is a dog, this is a car, this is, you know, a lot of, like I think a thousand different classes. But the cool thing is the part before that very last one that can still be reused in other cases. So in our cases, so features that are useful to help the dogs might not be that useful, but you know, there's always some kind of features
that it will learn to represent this whole data set, which you can use in another task. And yeah, it was a very interesting project. It was when TensorFlow was at 0.8, and we used a template to create an Android app; we changed it so it would accept our model.
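The general transfer-learning pattern (freeze a feature extractor trained on a big source task, then train only a small head on your own data) can be sketched with scikit-learn stand-ins. The real project used Google's Inception V2 as the extractor; PCA here is just a cheap, runnable substitute:

```python
from sklearn.datasets import load_digits
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression

digits = load_digits()

# Stand-in for the pretrained network: fit a feature extractor on a
# large "source" set, then freeze it.
source_X = digits.data[:1500]
extractor = PCA(n_components=20).fit(source_X)

# Small "target" task: only the remaining labelled images.
target_X = digits.data[1500:]
target_y = digits.target[1500:]
features = extractor.transform(target_X)  # reuse the frozen features

# Only the final head is trained on the small target set.
head = LogisticRegression(max_iter=1000).fit(features, target_y)
acc = head.score(features, target_y)
print(round(acc, 2))
```

The point is the split: the expensive representation is learned once on plentiful data, and your task only trains the last layer.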
And so it's still fun to walk around and project it on cars and have a laugh every now and then. Yeah, so what is typical in insurance companies? Like they have strict rules already in place. So then yeah, this was innovation.
So that's one part there. Transfer learning can certainly help. But you know, like image data, it's a very difficult one. And I think I've missed this point earlier. We advised them not to continue on this one because it was not their core business, this car. Like that was just one of their things. They were very broad insurance company.
And we advised against going forward with this because it's just not their main, like their core thing. And it would be very expensive to get to label all their data and like that. So, and another thing is like, you know, you're going to have a difficult one with like compliance where you're going to be like,
you know, 60% of the time it's accurate. Like compliance doesn't really like that. So I wanted to go to another complex problem. I don't want to go too much into depth in this one. But I thought it would be fun to have a neural network learn to complete neural network code.
So, yeah, there's a lot of generative models out there. So you just give it a bunch of text and it will learn to predict the next word or character based on the things that it has seen before. So that's a generative model. And yeah, it can be very generic.
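A minimal stand-in for such a generative model: a character-level Markov chain that counts which character tends to follow which, then samples new text. Real systems use neural networks, but the "predict the next item from what came before" shape is the same. The corpus below is invented:

```python
import random
from collections import defaultdict

# Tiny corpus to "train" on.
corpus = "import numpy as np\nimport pandas as pd\n" * 20

# Count, for each character, which character follows it and how often.
counts = defaultdict(lambda: defaultdict(int))
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def generate(seed, n, rng):
    """Extend `seed` by n characters, sampling from the learned counts."""
    out = seed
    for _ in range(n):
        nexts = counts[out[-1]]
        chars, weights = zip(*nexts.items())
        out += rng.choices(chars, weights=weights)[0]
    return out

sample = generate("i", 30, random.Random(0))
print(sample)
```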
It can be time data, or it can be like text or images even. But I don't want to go too much into depth about this one. I think that generating in company is not that interesting usually, unless you're making really that to be your thing.
Like for example, I think it's amazing what kind of art they make. Google made something called DeepDream. It's very interesting art. But yeah, these are just inspirational ideas; I wouldn't really recommend this as your next project.
Though it can be interesting, of course. So next one, I don't know who here has cryptocurrency at the moment. Okay, everyone's sold it off already, right? Yeah, so this is actually
my only personal closed source code. And actually three years ago, I was working on blockchain and at those times I was also a bit skeptical of it. I'm still quite skeptical of blockchain. But yeah, I was always laughing when companies were saying something like,
yeah, we're combining AI, IOT and blockchain and this is going to be the thing, like now you have like three problems instead of like one. But yeah, and those things are also, well, I didn't have a lot of money and I thought it was also going to be way too expensive to trade these things,
like thinking about stock where it doesn't make any sense unless you have really big volumes. And that there are already way too many people doing it. But actually a few months ago, I thought let's just collect data and let's analyze some of it.
And most of the models, like the popular ones, then they apply the latest machine learning techniques, hoping that it's going to give them an edge. So basically what they're just doing is take this like price data over time and they hope that they're able to predict like if it's going up or it's going down.
So that's what most people are doing. I thought, I wanted to do it for some time, but I thought, yeah, that's not really, like that's really risky or I don't have anything to say about, I cannot control anything, it's going to be, you're going to wake up and anything can happen.
So yeah, I thought, you know what, I'm going to analyze the data and I'm going to see like what are the most obvious things. Like I noticed at one moment that a coin was like one of the coins was going to work together with Microsoft and that really increased the price enormously. So I thought, you know what, if I'm just going to monitor for such events,
those things that, you know, it's going to be obvious for everyone that the price is going up, you know, that maybe I can do something with that. So just make something very simple. There's no need to always try to do the most complicated machine learning model. And I can also assure you that it's a good experience to have your personal money on the line.
Like it's a good experience in the sense of when you lose some money, then you're really going back and like you will make sure that you're really doing good monitoring there. And yeah, another big point here is that you don't need machine learning to create training and test sets or run simulations, right?
So machine learning is just one part. Running simulations, where the things I'm doing are just finding two values or something, is not based on machine learning. A lot of it is just, yeah, you backtest, but it's not necessarily machine learning.
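A backtest like that needs no machine learning at all: replay historical prices against a simple rule and measure the outcome. The prices and thresholds below are invented purely for illustration:

```python
# Made-up price history for one coin.
prices = [100, 102, 101, 110, 125, 120, 118, 130, 128, 140]

cash, coins = 1000.0, 0.0
for prev, price in zip(prices, prices[1:]):
    jump = (price - prev) / prev
    if coins == 0 and jump > 0.05:
        # "Obvious event": a big jump -> buy with all cash.
        coins, cash = cash / price, 0.0
    elif coins > 0 and jump < -0.02:
        # Falling -> sell everything.
        cash, coins = coins * price, 0.0

final = cash + coins * prices[-1]
print(round(final, 2))  # ends above the starting 1000 on this series
```

The two thresholds (0.05 and -0.02) are exactly the kind of "just two values" you can tune with simulations, no model required.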
Yeah, don't underestimate the work necessary next to machine learning. I didn't bring that up, but yeah, it takes so long before, yeah, because you're depending on the exchanges to trade and, you know, from idea to actually have them,
to have something like that working, it takes a long time and yeah, you can also do analysis sometimes instead of like forcing it to be machine learning and for Python, right, like simple can be better than complex, it also holds for machine learning
or modeling or of any kind. So another one that I made is X to Y. I gave a talk about that also sometime earlier and it's the idea of automating these steps of like, because a lot is, you know, you have some data, you do some kind of pre-processing on it,
like dealing with missing values, because otherwise scikit-learn cannot deal with it. You do model selection, and in the end, after you've done a lot of different projects, you end up with a couple of models you know when to apply. If you throw this together,
then you get something like X to Y and so I'm loading that library, of course
and reading in some data, I think it's this.
So that looks like this. Yep, it's actually my favorite data set, to be honest. It's who survived the Titanic; they collected that, and so you can say something about,
for example, whether women were indeed given priority, these kinds of ideas. And yeah, women did have a better chance, so that's good, right? I'm thinking we don't have much time anymore, but let's see how far we get.
So I'm going to get the data again. Survived: one is survived, zero is not survived, and we have to remove this column,
so now it's not there anymore, and then we can say, okay, same pattern as before, except this is kind of messy data. There are missing values, there's text and whatever, and scikit-learn cannot deal with that. So the idea was, you know, let's just do something that's very simple. It's also similar to the example code,
but what you can say is like, let's fit on half of the data. That's for the people that don't know, like that's how you can get half. It's a simplification. So now it's training and then you can use it to predict.
So there we go. Then you can compare if the predictions are actually equal to, so in this case we got 77% correct.
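The pattern shown here (messy data, fit on half, predict the rest) can be sketched with plain scikit-learn. The tiny Titanic-style table below is made up, and the x-to-y library automates these steps in its own way:

```python
import numpy as np
import pandas as pd
from sklearn.compose import make_column_transformer
from sklearn.ensemble import RandomForestClassifier
from sklearn.impute import SimpleImputer
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import OneHotEncoder

# Small made-up Titanic-style table: missing values and text columns,
# which scikit-learn estimators cannot consume directly.
df = pd.DataFrame({
    "sex": ["female", "male", "female", "male", "male", "female"] * 10,
    "age": [29, np.nan, 4, 35, np.nan, 58] * 10,
    "fare": [100, 7, 15, np.nan, 8, 80] * 10,
    "survived": [1, 0, 1, 0, 0, 1] * 10,
})

X, y = df.drop(columns="survived"), df["survived"]

# Bundle pre-processing with the model so fit/predict stays two calls.
model = make_pipeline(
    make_column_transformer(
        (OneHotEncoder(), ["sex"]),
        (SimpleImputer(), ["age", "fare"]),
    ),
    RandomForestClassifier(random_state=0),
)

half = len(df) // 2
model.fit(X[:half], y[:half])          # train on half of the data...
acc = model.score(X[half:], y[half:])  # ...evaluate on the rest
print(acc)
```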
Skipped over those, okay, yeah. So the idea here is that, you know, you have image data, time series, text data
and you know, there's so much that you have to do like pre-processing if you just start again. It's nice to kind of bundle things and this is kind of the thing that I also recommend to companies. Make like this kind of a platform that does this kind of pre-processing for your common things, right?
If you're concerned with churn, then make sure that you're like the main things that can be predictors of that are actually going to be there. Take already care of pre-processing, cross-validation and like what is going wrong with my data. You can do that all and yeah, deal with your core domain features
and I mean, only the final step is actually the models. So you know, like just bundle it and if you can make quick iterations, like that's going to help you like in a company. So then you can see, okay, we're missing data or let's add this data and it will be very quick.
So it's important. But then of course there's always productions. I mean, that's like the next step there. So that always takes more time, like compliance, like proper development cycle. So make sure that you'll have that as well. Okay, so gave a lot of,
I threw a lot your way. Let's wrap it up. In the end, machine learning is just a tool but it can be really powerful in the right circumstances. Like learning this kind of function between your input and something that you want to predict but it's not more than that, right? So it's not like that's,
some people don't really understand it. They think everything is going to be automatic with machine learning. You have to do a lot of work around it. Think or ask yourself, is it easy to create a feedback loop here? Or if you have to do a lot of effort to create this like new data, new annotated data where you got the answer, right? Like if you cannot collect income data,
then it doesn't matter if you have age and it's a good predictor, which is, yeah. So this is a very important one. Yeah, don't forget to think yourself like what could be useful features, right? It's a bit of a simple one, but yeah.
Also, I think pluggability is key. Don't try to solve everything in one model; make different models and spread out the problem. And I think it's actually going to be very interesting to see, in the next few years, people coming up with models that you can then use in your model, because if you look at OpenCV and computer vision, they can do face detection and these kinds of things,
but it took a long time for people to, yeah, to build this. But once we have these kind of models for machine learning, it will be nice to chain them together. And don't try to solve the most complex problems. If it's like way too complicated data or you know, like there's so many rules,
then you know, just start with something easier, like especially when many strict rules are there, like insurance companies, banks, you know, they have so many strict rules. If you cannot explain it, if you cannot, you know, cannot reason why you're doing it or if it's obviously wrong why you're doing it, like discrimination or whatever, then it's not going to work.
So, and most people find optimizing models fun, you know, get the better score, but you know, optimizing the model is usually not the best like thing to do here. If you have the simplest model, you can still really make big improvements
by getting better data or you know, talking to the people that can help you get better data. So this is also a very important one. And never underestimate the work required besides machine learning to get it actually in production, right? Like even if you have the model, you're very happy with it. You know, it takes time to get it to work inside a whole application environment.
Build a framework in your company. Yeah, so I wanted to thank you and come and say hi. If something wasn't clear to you or you want to discuss your own examples
or just chit chat, I'll be here until Saturday. And my final and most important suggestion is make little projects and then give a presentation about your machine learning projects at the next year of Python. Thank you. Thanks very much.
We have a few minutes for questions if anyone has any. Have you actually made money on your blockchain project? Good question. It's, well, I have not lost anything yet.
I'm still in the like development phase and projections are that even if Bitcoin is going down 20% per month, then it should still be okay. Nice, thank you. Anybody else?
There are some cases where pluggability could hurt performance. For example, in the brightness prediction model you were showing, suppose there are two places that are close to each other, but you would want very different brightness values for those.
And since the prediction is based on classes like you just predict the place, if it predicts the place incorrectly, it could really hurt the performance of the brightness prediction. So what would you recommend doing in those cases? Well, yeah, it's a good question.
The thing there is, you know, there are so many other features. So you just hope that, you know, the model will learn to prioritize others. So, I mean, if you are afraid that it's going to be messy, this prediction, then the model eventually will learn that it's like not an important feature, right? So then it wouldn't use it.
It's, you know, I guess in the example of, you know, being at EuroPython or somewhere completely different, then, you know, it's going to be, it's going to learn that that's, in that case, a good example. But maybe, you know, couch one, couch two, if you want to learn that difference very close to each other, yeah.
In this case, how it's parameterized, I would expect that it's not going to put too much effort on this one. So it would just instead focus on like time or, you know, the pixel value or something like that, yeah. Thank you.
Thanks. Anyone else with a question? No? Okay. Okay, well, the next talk is in here at 10 past 12. Can we say thank you again to our speaker?