Deep Learning with TensorFlow 2.0
Formal Metadata
Number of Parts: 118
License: CC Attribution - NonCommercial - ShareAlike 3.0 Unported: You are free to use, adapt and copy, distribute and transmit the work or content in adapted or unchanged form for any legal and non-commercial purpose as long as the work is attributed to the author in the manner specified by the author or licensor and the work or content is shared also in adapted form only under the conditions of this license.
Identifier: 10.5446/44763 (DOI)
EuroPython 2019, 108 / 118
Transcript: English (auto-generated)
00:01
Thank you so much for the introduction. So again, my name is Brad. I'm a developer programs engineer at Google. And so what that means is that I spend a lot of time working very closely with developers, attending conferences, and just speaking with you, and learning about the things that you may like or dislike about some of the projects that we have going on.
00:20
So I specifically focus in the areas of machine learning and big data. And one of those projects that happens to fall into those categories is TensorFlow. So today, I'm going to tell you a bit about TensorFlow 2.0, how you can get started with it, as well as some of the changes from the TensorFlow 1.x versions. So just before we dive in, I would just like to get a show of hands here.
00:41
Who here has done machine learning before in any capacity? Just put your hand up. OK, cool, that's a good chunk of you. What about deep learning specifically? OK, TensorFlow, have you used it? Just any either version? OK, awesome. So let's get into it. Just to give you an idea of what we're going to specifically discuss today, I'm going to introduce TensorFlow
01:01
and just tell you what it is generally, just so that we're all on the same page here. I'm then going to discuss TensorFlow at Google and how we're using it on some of the projects that we're working on. We'll then discuss why Python is so important to TensorFlow and how the two go hand in hand. We'll then discuss TensorFlow 2.0 and some of the features
01:21
available in it, as well as how to upgrade: if you're using TensorFlow 1.x right now, I'll tell you how you can move to 2.0. And then getting started: if you haven't used TensorFlow at all, some of the resources that are available for you to continue your learning and to use TensorFlow 2.0.
01:40
OK, so what is TensorFlow? It is an open source deep learning library that is developed at Google. It was released in 2015, but it actually existed a little bit before that. We were using it for projects internally, but then we released it as an open source project in 2015. And so what is it? Well, TensorFlow, it's a Python framework
02:00
that includes a lot of utilities for helping you write deep neural networks, deep neural networks, of course, being the main component of what makes deep learning what it is. And so a lot of deep learning involves using mathematics, statistics, linear algebra, and then low-level optimizations with your system. And so what TensorFlow actually does is it abstracts a lot of those details away from you
02:22
so that you only have to worry about actually writing your model. And so it just takes a lot of what otherwise would be complicated steps and makes them super easy for you to use. TensorFlow provides support for both GPUs and TPUs. These are hardware accelerators that work heavily with linear algebra
02:42
and mathematical computations. And so TensorFlow is able to utilize this hardware right out of the box so that you can get those other benefits of using these. To date, TensorFlow has over 2,000 contributors all over the world. And the 2.0 beta version was released just last month
03:01
in June. So as I mentioned, TensorFlow was released publicly in 2015. And since then, we've just seen massive growth both in internal use and also throughout the community. And so we're continuously adding new features to this. Here's just a bit of a brief timeline to show you some of the changes that were done over time. And I mentioned TensorFlow is being used all over the world.
03:22
I love looking at this graph just to see how TensorFlow is able to help developers build their machine learning systems globally. It really blows my mind. As I mentioned, we have over 2,000 contributors. And just there's a lot of activity in the repo, which is super exciting. And yeah, so TensorFlow is used all over the world.
03:42
But it's also used internally. We use it to power all of the machine learning and AI that we have going on inside Google. And so I just want to tell you about some of the examples and how we're actually using this stuff. So one of the first things I like to talk about is how our data centers are powered using AI. So given that we are Google and the scale
04:01
that we operate at, we have a lot of data centers that do a lot of computations and use a lot of power. And so what we're actually able to do is use AI and TensorFlow to help optimize the usage of these data centers, both to reduce bandwidth, make sure that network connections are optimized, reduce power consumption as well.
04:20
And this just helps the environment and just is really a better way to have these data centers actually be running. So we're using TensorFlow and AI to do that. We're also using these technologies for global localization in Google Maps. So for those of you who may have used Google Maps before, you might know that we have an augmented reality feature, where if you're walking
04:40
through a city, such as Basel, then you can use it to help you get from point A to point B directly on the map, or directly on your phone, and you can see an example of that here, how the directions are actually just showing up on your screen so you know which street to go down. These are using TensorFlow and artificial intelligence. And then we're also using it heavily, not just in Google Maps,
05:01
but within the Google Pixel itself to help optimize some of the software that we have going on there. So in this use case, we're talking about the portrait mode on the Google Pixel, which helps you blur out the background of an image so that you can focus specifically on whatever it is that you wanna focus on. And then here's an example of an audio synthesizer.
05:23
And so what we actually have here is effectively a chaos pad, for those of you who may have used that before. But the idea here is that you can slide the cursor around on the pad and it will actually generate music. And this is driven by a model that was trained using TensorFlow.
05:41
We're also using TensorFlow and AI for medical research. So in this use case, we have two images here. The one on the left is what we might consider a retinal image of a healthy eye. And the one on the right is an image of an eye that has what we call diabetic retinopathy. And so what we're actually able to do is,
06:00
there's research going on that's using TensorFlow and computer vision to actually predict which one of these is a healthy eye versus an unhealthy eye. And then this is probably my favorite example of the bunch. Here, we're using AI and TensorFlow to help us actually predict whether or not objects in space are planets.
06:21
And just a brief astronomy lesson of how this works. If you look at something like the sun, this might be hard for you to actually see, but if you imagine you have a large body of light and something moves in front of it, then the brightness of that object will decrease ever so slightly. But enough that we can use telescopes and whatnot
06:41
to actually pick up on the differences in that brightness. And we can then graph that, as we see here on the right. And then using artificial intelligence, we can actually predict whether or not those fluctuations in brightness are due to it being a planet or another object. So this is another example of the sort of research
07:01
that we're doing. Okay, so I talked about some examples. And now I just wanna briefly discuss why Python is so important and why we're using Python with TensorFlow. So Python is a great choice for scientific computing. Of course, it's very easy to use. I would hope everyone agrees, which is why you're all here
07:22
and also it has a super rich ecosystem for doing data science. You have tools such as NumPy, Scikit-learn and Pandas. And if we look at the success of these, a lot of these do stem from the package NumPy itself. And NumPy is great because it has the performance of C, but it has the high level API of Python
07:41
and the ease of use of Python. And so when TensorFlow was being built, the idea is we wanted it to have the simplicity that NumPy has. So with that, it has the performance of C, but also the ease of use of Python. And that's why we're actually able to use Python for this because we're able to leverage the best of both worlds with both of these.
08:01
So let's talk about 2.0 and some of the changes that have come with it. So for those of you who may have used TensorFlow 1.x before, you might have realized that it's great, it's powerful, and there's a lot that it can do, but it definitely had its shortcomings. I'll be the first to admit these, of course, just having used these personally. Some of the things that I personally found frustrating
08:21
were using session.run, just it didn't necessarily feel super Pythonic, as well as having multiple different ways to do the same thing. So an RNN layer was implemented multiple different ways, and how would you know which one to use? It could sometimes be a little frustrating. And so both of these things that I mentioned were actually addressed in TensorFlow 2.0.
08:41
So the redundancy in the API was cleaned up a lot, so there should be one way to do most things. So we're, of course, focusing on making sure that we remove all the redundancies as we continue to develop the project. And also session.run has been removed as we use a concept called eager execution, which effectively means that your TensorFlow code
09:01
runs just like NumPy code. And I will show you an example of that in just a moment. And then another change is that we've introduced Keras as the main high-level API. Who here has used Keras before? Just a quick show of hands. Okay, so I don't know about you, but personally, I loved using Keras. It's super easy to use. And so we've actually taken Keras
09:21
and adopted that into the TensorFlow project. And again, more on that a little bit later. We also wanna make sure that TensorFlow is powerful and that it's flexible, it's usable for research purposes, for production purposes. And we really wanna make sure that we can get this into the hands of as many people as possible and help as many people as possible with their projects. So it's super flexible.
09:40
And then also, given that we've tested TensorFlow at Google scale, we know that it works at that scale, so it's super scalable and you should be able to use it for your use case as well. We're also able to deploy TensorFlow code anywhere. Or what we're at least hoping to do is continue to make this as flexible as it can be.
10:03
We wanna make sure that you have different options for where you can run your TensorFlow models. So the first example is on TensorFlow Extended, which is a Python library that you can actually run on your servers to productionalize your models. We also have a package called TensorFlow Lite, which lets you run your TensorFlow models on edge devices.
10:24
And then you can also run your TensorFlow models in the browser using TensorFlow.js. And so why is it that we're able to do this and how is it that we're actually able to do this? So we use something called a saved model, which is the format that you can output your TensorFlow model to once you've trained it.
10:42
So for those of you who have done data science before and who have built a machine learning model, you know that you start off by reading and processing the data. You then apply layers to it via tf.keras or using TensorFlow estimators, which are black box models. You then choose to distribute it either over just the CPU on your laptop, or over GPUs or TPUs on a cluster.
11:02
But once you do all that and once you have the model actually trained, you can export this into what we call a saved model. And this saved model is a universal format that you can then load into any one of the deployment options that I mentioned earlier. So in this case, you can use TensorFlow Extended and TensorFlow Serving to be able to run it on servers. You can use TensorFlow Lite for edge devices,
11:21
as I mentioned, and then TensorFlow.js to run it in the browser. But also we have other language bindings available. A lot of these are community driven, but for some of the examples, we have C, Java, Go, C#, Rust, and R. And using the saved model, you can actually run these anywhere.
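As a rough sketch of that export step (not code from the slides; model here stands in for whatever trained tf.keras model you have from earlier in the pipeline):

```python
import tensorflow as tf

# 'model' is assumed to be a trained tf.keras model built earlier.
tf.saved_model.save(model, "exported/my_model")

# The same directory can later be reloaded in Python...
reloaded = tf.saved_model.load("exported/my_model")

# ...or handed to TensorFlow Serving, the TensorFlow Lite converter,
# or the TensorFlow.js converter for the other deployment targets.
```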
11:42
Some other packages that exist in the TensorFlow ecosystem are for more, I guess, niche use cases. So I have some examples listed here. TF probability, TF agents, Tensor2Tensor. And so these really, as I mentioned, just exist for these more specific use cases. For instance, TF agents is a package
12:00
that exists to do reinforcement learning and it has some higher level APIs stacked on top of TensorFlow to help you build reinforcement learning. TF text is used for natural language processing using TensorFlow. And so there's a whole long list of these and definitely worth checking out if you have a specific use case that you want to use TensorFlow for. We also are introducing TensorFlow Hub,
12:20
which you can loosely consider the GitHub of models, in that you can store and download pre-built models here and you can actually get started working with TensorFlow and machine learning using these models. You can modify them and you can do whatever you want with these, but this is just a place for you to start working with machine learning.
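For a sense of what pulling a pre-built model from TensorFlow Hub looks like, here is a minimal sketch using the tensorflow_hub helper library; the module handle and the five output classes are illustrative choices, so browse tfhub.dev for whatever module you actually need:

```python
import tensorflow as tf
import tensorflow_hub as hub

# Download a pre-trained image feature extractor from TF Hub.
feature_extractor = hub.KerasLayer(
    "https://tfhub.dev/google/tf2-preview/mobilenet_v2/feature_vector/4",
    input_shape=(224, 224, 3),
    trainable=False,  # keep the downloaded weights frozen
)

# Stack your own classification head on top of it.
model = tf.keras.Sequential([
    feature_extractor,
    tf.keras.layers.Dense(5, activation="softmax"),  # e.g. 5 target classes
])
```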
12:44
So earlier I mentioned that you can use TensorFlow 2.0 just like NumPy. And so for those of you who have used NumPy before, this code may look sort of familiar to you, in that we're creating just a two by two matrix in this case and then just doing a multiplication on it and then we can print it out immediately.
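A minimal sketch of that eager-style snippet (the exact values are illustrative, not necessarily the ones from the slide):

```python
import tensorflow as tf

# A 2x2 matrix, multiplied and printed immediately: no session, no graph setup.
a = tf.constant([[1.0, 2.0],
                 [3.0, 4.0]])
b = tf.matmul(a, a)
print(b)  # tf.Tensor([[ 7. 10.] [15. 22.]], shape=(2, 2), dtype=float32)
```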
13:01
We couldn't actually do this with TensorFlow 1.x. You had to initialize the variables, you had to then run the graph. And there was just a bit more involved than just creating the matrix and then doing the mathematical operation and then printing it. So this is definitely a really nice feature and just makes it much easier to use. So then just to talk about some of the specifics
13:22
of what's gone and then what's actually new. So I keep mentioning this, but session.run is gone and we don't have to worry about that anymore. A lot of the TensorFlow specific operators such as conditionals, if statements, while statements that you had to use TensorFlow specific operations for have actually been removed. You can just use normal Python code.
13:42
And there's a reason for that, which involves using a new feature that I'm gonna mention in just a second. But the last thing that I also wanna mention is gone is tf.contrib. The reason for this is that the project just got so large, with so much involvement from the community, that we had to actually remove it from the base build
14:00
because it was just too much memory. So it still exists, but it has been removed from the base package, so if you just do pip install tensorflow you won't necessarily get it anymore. But then some of the things that are new include eager execution enabled by default. So this allows you to run TensorFlow using a NumPy-esque style. Keras is the main high level API.
14:22
And then tf.function, which is a Python decorator that lets you run your regular Python code using while loops and your if statements, just in Python but it will actually get compiled down to TensorFlow code. And we'll talk a little bit more about that in just a moment as well. So the next thing I want to talk about is tf.keras.
14:42
And so I asked earlier who here has used Keras, and I personally mentioned that I really like Keras. And the TensorFlow community agrees. And so what we've actually done is we've implemented the Keras API into TensorFlow itself as the main high-level API. And so what does that actually mean?
15:00
For those of you who have used Keras before, you may know that Keras serves as an API spec. So it's not in and of itself an engine. It actually relies on using something like TensorFlow or Theano as the backend. So all we've done is we've taken the API spec of Keras and just moved it into TensorFlow. The two projects do exist separately still, but they are very closely related.
15:21
So if you want to just use regular Keras with whatever backend you'd like to use, you can just do pip install keras and then import keras. But if you want it to use TensorFlow specifically, you'd install TensorFlow and then from tensorflow import keras. And the experience should be more or less the same.
15:41
And so when you're actually using Keras with TensorFlow, there's two ways that I like to describe that you can get started using this. And so one of them I say is what's called for beginners. The other one is what I say is called for experts. They're more or less interchangeable and I actually like the beginner's method more, but it just depends on your use case. So if you're using the beginner's method,
16:02
the way that you would do this is you would import a Keras sequential model and then just add the layers row by row. So each one of these actually represents a layer of your model. So in this case it's just five lines of code and you have a model built. Once you have this, you then compile the model which just essentially makes sure that the model or the layers line up and that the input and output sizes are correct.
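A minimal sketch of that beginner-style workflow, with illustrative layer sizes rather than the exact ones from the slide; it also includes the compile, fit, and evaluate steps described next, and x_train, y_train, x_test, and y_test are assumed to be loaded already:

```python
import tensorflow as tf

# Build the model layer by layer with the Sequential API.
model = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dropout(0.2),
    tf.keras.layers.Dense(10, activation="softmax"),
])

# Compile with an optimizer, a loss function, and the metrics to track.
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Fit on the training data, then evaluate on the held-out test data.
model.fit(x_train, y_train, epochs=5)
model.evaluate(x_test, y_test)
```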
16:23
And then you provide your optimizer, your loss function and then the metrics that you want to optimize for. You then fit it on your training data and then evaluate it on your test data. So this is using the beginner's method and then there's also the experts method as we say. So this is effectively using Python subclassing
16:42
and this allows you to inherit the tf.keras.Model class to then create a model from scratch. And so this gives you a bit more customizability, and then you just add a call function and then you're able to treat this like you would use Keras layers otherwise.
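A minimal sketch of that subclassing style, again with illustrative layers rather than the exact ones from the talk:

```python
import tensorflow as tf

# Inherit from tf.keras.Model, create layers in __init__, wire them up in call().
class MyModel(tf.keras.Model):
    def __init__(self):
        super().__init__()
        self.dense1 = tf.keras.layers.Dense(128, activation="relu")
        self.dense2 = tf.keras.layers.Dense(10, activation="softmax")

    def call(self, inputs):
        x = self.dense1(inputs)
        return self.dense2(x)

model = MyModel()  # can then be compiled and fit like any other Keras model
```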
17:01
And so what's the difference between these two? We talked about it briefly here, but just to give you a general idea. If you're using the beginner's method, which we call the symbolic method, we're using the Keras Sequential. Your model is a graph of layers. Anything that compiles will run, and TensorFlow actually helps you debug by catching the errors at compile time. So this takes a lot of the debugging away from you
17:22
and just makes the code I guess a bit easier to develop. But then in an instance where you may want to use the imperative method or what we call the experts method, your model is Python bytecode. So it runs just like Python code would. You have complete flexibility and control over what it is that you're actually building but of course with that, it becomes a bit harder to debug,
17:41
a bit harder to maintain and there's definitely pros and cons of using each method. It really just depends on what your specific use case is. So next I wanna talk about tf.function. So I mentioned earlier that this is something that lets you run Python code just as you normally would. What do I mean by that? So let's say here that you just have a function.
18:02
Here we just have a function that calls an LSTM cell from a deep neural network. So if we have a benchmark here, we'll see that this would take, let's say, 0.03 seconds, but what we're actually able to do to convert this into TensorFlow code is add this tf.function decorator.
18:21
Just an extra line of code and you'll actually see that we have about a nine times speed up here from this example but the idea is that you can do this on any Python code that you have. And so the reason that this is possible is that we're able to use a technology called autograph. So what it will do is it will take any Python function you have and as I mentioned,
18:40
convert it into the appropriate TensorFlow code, and if you wanted to see what that looked like, you can use the tf.autograph.to_code function and it will take this function here and it will change it into this. You don't need to know how this works. This really isn't important for necessarily building the model, but it can sometimes be interesting to actually see what's going on underneath the hood.
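A minimal sketch of the decorator and of autograph's code inspection; the function below is a simplified stand-in for the LSTM-cell benchmark on the slide, just to show plain Python control flow being compiled:

```python
import tensorflow as tf

# Ordinary Python control flow; tf.function compiles it into a TensorFlow graph.
@tf.function
def square_if_positive(x):
    if tf.reduce_sum(x) > 0:
        x = x * x
    else:
        x = -x
    return x

print(square_if_positive(tf.constant([1.0, 2.0])))  # tf.Tensor([1. 4.], ...)

# Peek at the code autograph generates for the underlying Python function.
print(tf.autograph.to_code(square_if_positive.python_function))
```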
19:03
So next up, we'll talk about distribution strategies. So I mentioned how we want TensorFlow to be flexible and scalable and how you can use it over different hardware environments. So let's say that you have this model here that you may have just built locally on your laptop. If you wanted to then take this, let's say you train this on a couple hundred rows just to make sure that it works
19:21
and that you have something reasonable that's working before you actually deploy this onto a larger scale. If you wanted to then take this and then move it over to whatever hardware cluster you have set up, all you'd have to do is just add it within the scope of a distribution strategy. And so distribution strategies are effectively ways for you to just take the code that you have
19:40
and deploy it over your hardware cluster. And so in this case, we're using the mirrored strategy. What this does is if you have, let's say, four GPUs, it will just take the same model and copy it over all the different GPUs. There's different ways to do this. You can take a large model and split it up over the multiple GPUs. This is a little bit outside the scope of this talk, but in this case, we're just using the mirrored strategy for this example.
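A minimal sketch of wrapping model construction in that mirrored strategy scope; the model and data details here are placeholders, not the ones from the slide:

```python
import tensorflow as tf

# MirroredStrategy replicates the same model across the GPUs it can see.
strategy = tf.distribute.MirroredStrategy()
print("Replicas in sync:", strategy.num_replicas_in_sync)

with strategy.scope():
    model = tf.keras.Sequential([
        tf.keras.layers.Dense(64, activation="relu", input_shape=(10,)),
        tf.keras.layers.Dense(1),
    ])
    model.compile(optimizer="adam", loss="mse")

# model.fit(...) then trains with each batch split across the replicas.
```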
20:02
The next thing I wanna talk about is TensorFlow datasets. This is one of my personal favorite features. I think it just helps developers get up and going with machine learning much faster. So for those of you who may have worked with data before, you know that it can sometimes be very difficult
20:21
to actually get a good dataset to work with. Models are only as good as the data, I like to say. And so what we actually have is a bunch of datasets available for you to use within the TensorFlow datasets package. And so there's a list of these that I'll show in the next slide, but the idea is you just, you load whatever dataset it is that you wanna load,
20:42
you can then split up the training and the test datasets, and then you can take this data and plug it into any model that you want. And so I'm using the cats versus dogs example here, but we have a whole long list of them. They're all available at tensorflow.org/datasets.
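A minimal sketch of loading the cats-versus-dogs data through the tensorflow_datasets package; the 80/20 slicing is just an illustrative way to carve out a test set, since this particular dataset ships as a single train split:

```python
import tensorflow_datasets as tfds

# Load the dataset and slice it into training and test portions.
(train_ds, test_ds), info = tfds.load(
    "cats_vs_dogs",
    split=["train[:80%]", "train[80%:]"],
    as_supervised=True,  # yields (image, label) pairs
    with_info=True,
)
print(info.features)
# train_ds / test_ds can now be batched, preprocessed, and fed to model.fit().
```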
21:01
Some examples include ones that you may have seen before: MNIST, CIFAR-10, ImageNet, the Titanic dataset. So some of these might seem familiar, but again, if you're interested in seeing the entire library of what we have available, go to tensorflow.org/datasets. So let's say you're using TensorFlow 1.x and you wanna actually upgrade to 2.0.
21:20
How can you do that? So we have a bunch of migration guides available on our website, TensorFlow.org. So that would definitely be the first place that I recommend going to if you wanna learn how to do this. We also have a library available called tf.compat.v1. And what that will do is that as some of the APIs are deprecated in 2.0, we do have this library available for you
21:40
to actually gain access to some of the older APIs if you're not ready to fully move away from those. And this is also mostly relevant when using the tf_upgrade_v2 script. And so what this will do is you can execute it on top of any Python script and it will take the TF 1.x code and actually convert it to 2.0 code. Similar to, if any of you have used
22:01
the Python two to three script before, it sort of does the same thing. And with that, it'll tell you what was actually changed between the two versions, it'll rename the files, and then show you what was actually changed inside of the scripts themselves.
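For code you are not ready to rewrite yet, a minimal sketch of the compatibility fallback he mentions might look like this (the upgrade tool itself is the tf_upgrade_v2 command-line script that ships with TensorFlow 2.0):

```python
# Run existing 1.x-style code on TensorFlow 2.0 via the compatibility module.
import tensorflow.compat.v1 as tf

tf.disable_v2_behavior()  # restores 1.x graph/session semantics

x = tf.placeholder(tf.float32, shape=(None, 3))
y = tf.reduce_sum(x, axis=1)

with tf.Session() as sess:
    print(sess.run(y, feed_dict={x: [[1.0, 2.0, 3.0]]}))  # -> [6.]
```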
22:21
So if you're just curious about how to get started generally with TensorFlow, again, I keep mentioning this, but definitely go to the website at tensorflow.org. But also if you wanna just get started today, you can install it now just using pip install -U --pre tensorflow. So feel free to do this now or at the conclusion of the talk. On tensorflow.org, we have tons of resources
22:40
available for you: Colabs, introductions, documentation, API specs, all of this is available there. We also have partnerships with Udacity and Coursera. We have TensorFlow courses specifically designed to help you get started. So I definitely recommend taking a look at these if you're interested in a deep dive with world-class instructors.
23:01
We also work with deeplearning.ai, which is run by Andrew Ng, who is one of the biggest names in the machine learning community today. We're also on GitHub, of course, so if you're interested in actually getting involved with the project, definitely take a look at the GitHub repository, and we'd love to hear your feedback or just if you wanna add new features or anything,
23:22
and just get involved in the open source project. By all means, we'd love to have you. And then lastly, I just wanna talk about two extra projects that we have going on in the TensorFlow community. So these are Swift for TensorFlow and then TensorFlow.js, which I actually mentioned earlier. So these projects are actually, so Swift for TensorFlow is a movement
23:41
to actually use Swift to develop machine learning models, and Swift in and of itself has become increasingly popular in the data science community for what people argue is its ability to fix a lot of the shortcomings that come with Python. That's definitely debatable, but I think it's super interesting, so I definitely recommend you checking it out if you're curious.
24:01
And then TensorFlow.js will allow you to actually run machine learning models using JavaScript in the browser, or you can also run them on servers using Node. It works with both regular JavaScript and Node, so super interesting. And again, if you're curious, definitely check that out as well. And with that, I issue a call to action to go build.
24:20
So definitely go and install the project. Continue to learn about this and let us know what you think. So thank you all for listening, and I'll stick around for a few minutes if anyone has questions. Thanks.
24:40
Okay, question time. Can you raise your hand if you'd like to raise a question, please? No questions. Surely there are. Okay.
25:00
Thank you very much for that. Yeah, so for some new information about 2.0, it's very useful to know. Something that I'm always wondering about is how people actually kind of curate the information they get out of their kind of training. Yeah, the training that you're doing and kind of the improvement on the losses over time
25:21
and how they kind of, yeah, essentially just kind of curate where do they store all of the models locally, say, and how do they evaluate which model has been performing the best for a certain set of examples or, yeah. Sure. So I think you asked a couple things in there.
25:40
So I'll just try and answer this one by one. So the one thing you asked is where models get stored. And so, if I heard you correctly, one way to do that is to store the model in something like a bucket with whatever cloud provider it is that you're using; you can store it in some central location and then you can just access the model via an API call. That's one way to do it
26:01
if it's just gonna output it as a file. In terms of evaluating if a model is actually good, that's, it sort of depends on what your use case is. There's different metrics for evaluating how effective a model is. In some cases, you might wanna use accuracy, which is just, given 100 samples, how many of these did it correctly predict?
26:22
But that's not always gonna be the case in something like medicine: if you're building a model for medicine that's detecting some very rare disease, and you just say that every case is negative, you're gonna have a very high accuracy rate, but that's obviously not helpful for picking up whether or not the model works. So then you would use something called precision and recall
26:42
to actually evaluate whether or not this is a good model. And you can do that using different hyperparameters. So for all of these models, there's different values that you can set when you're building the model. So the best way to do it would just be to basically train several models using these different numbers and just see which one is the best for your use case.
27:01
There's definitely a bit of trial and error in this and as you do this a bit more, you get some intuition, but at the end of the day, it's a lot of just, I guess, guesswork loosely. Okay, next question. Hi, thanks for your talk, it was very informative. You told us that TensorFlow 2.0 moves
27:23
in the direction of Keras and its interface, but I think you had one sentence that said, not 100% now. Is there some interesting case where you said, okay, there's this new tensorflow.keras thing that's not compatible to, if people are using Keras now,
27:44
so which would prevent you from moving to tf.keras? Thanks for the question. I think the biggest pull at this point would be if for some reason you didn't want to use TensorFlow as the underlying engine. In terms of the API, I don't know anything specific that is significant enough to say don't move to tf.keras,
28:04
but yeah, I guess that would be the one specific use case I could think of. Any other questions? Thanks for the talk, I think you're going in an interesting direction with TensorFlow. When do you think, will it be a stable release,
28:20
like right now it's beta? I'm honestly not sure. I think it's sometime in 2019 is what I keep hearing, but definitely keep a lookout for it. So the alpha was released in March and the beta was released in last month, so I would expect it to be sometime soonish. Okay, thank you.
28:42
Any other questions? Hello, I'm wondering what's the relationship between TensorFlow and all these other TensorFlow libraries that you mentioned, like tf.agent, tf.probability. Is it because of the distribution scheme that's the same,
29:03
or what's the relationship between all these entities? Sure, so TensorFlow in and of itself has these raw variables, I guess, if you will, and the ability to build models like you would use something like NumPy. So effectively something like TensorFlow.agents is just built using these TensorFlow objects.
29:21
So as you might implement something like a Q-learning function, and that's just basically using the TensorFlow objects underneath the hood. So it's just built on top of, similar to how something might be built on top of NumPy, these are just built on top of TensorFlow. Other questions?
29:42
Okay, can I ask a question please? I teach young people, and they are moving through things at a rate of knots, some of them, and they are tremendously interested in machine learning and artificial intelligence. If they took your set of tutorials on this subject and worked through it independently
30:02
or with a teacher's help, how easy or difficult would it be, do you think, for a simple project? So I think there's a ton of examples available for some of the simpler ones. Like if you wanted to do something like computer vision or something using natural language processing, there are a lot of resources available.
30:21
So I think it would be enough to get someone started. A lot of these introductory courses, both on Udacity and Coursera, also go through some of the more common examples. So those would definitely be another good place to go. But I think it's, yeah, just for simple stuff, I think it's pretty easy to get started with this. Thank you.
30:43
Yeah, hi, sorry, just to quickly carry on from what you were just saying about the Udacity course. I was just interested to know, is that gonna be on TensorFlow 2? Or is that still talking about the old TensorFlow with, I don't know, some sort of detail
31:01
about the new TensorFlow coming in there? It should use TensorFlow 2. It should be an introduction using this stuff specifically. Okay, next question.
31:20
Thank you for the talk. Just seeing that Keras has been integrated, how the estimators are going to, I mean, does it make sense to continue using estimators with the new Keras integration? Sure, that's a very good question. So the estimators are not being, I guess,
31:43
further developed, so they will long-term be deprecated in favor of the Keras APIs. They're still there, but I wouldn't expect any new changes to come to them any time in the near future. Okay, next question. No more questions? You sure?
32:01
I can stick around for a couple of minutes, too, if anyone has any, you know, wants to talk offline. Okay, can you put your hands together for a round of applause for Brad Miro? Thank you.