
Executing scripts in a few milliseconds with MicroPython


Formal Metadata

Title
Executing scripts in a few milliseconds with MicroPython
Title of Series
Number of Parts
160
Author
License
CC Attribution - NonCommercial - ShareAlike 3.0 Unported:
You are free to use, adapt and copy, distribute and transmit the work or content in adapted or unchanged form for any legal and non-commercial purpose as long as the work is attributed to the author in the manner specified by the author or licensor and the work or content is shared also in adapted form only under the conditions of this license.
Identifiers
Publisher
Release Date
Language

Content Metadata

Subject Area
Genre
Abstract
Executing scripts in a few milliseconds with MicroPython EuroPython 2017 - Talk - 2017-07-14 - Arengo, Rimini, Italy. Command execution time can become important in a number of applications. Commands executed in command-line completion need to execute in less than 100 ms or users will perceive a delay. In shell scripting one might want to execute commands repeatedly in a for loop, and fast execution times make this more feasible. Python is a very powerful language but has a much slower startup time compared to other interpreted languages like Perl, Lua and Bash. It can take up to 10 times longer to start up than some of these other languages. MicroPython was written as a lean implementation of Python 3 with a small subset of the standard library, mainly intended to run on microcontrollers. But it happily runs on Unix systems with excellent startup performance, making it an ideal candidate for implementing certain time-sensitive commands. This talk will: explain when achieving fast execution times matters and when it doesn't; present two different approaches to measuring command execution time, one simple and the other more detailed and accurate; compare execution times of a simple set of scripts that add two numbers in a number of different interpreted languages (micropython, python3, awk, perl, lua, bash); present an example use case of MicroPython on Unix: bash completion for pip install that completes the names of available packages live from a remote PyPI mirror; and demonstrate the auto-completion script with pip on a local PyPI mirror.
Transcript: English (auto-generated)
So, before I get into the details of MicroPython and executing it, I'm going to tell a story. In 2006, so over ten years ago, I fell in love with Python.
And when you fall in love, you just want to spend all your time with your loved one. And we did. We did command line scripting, web applications, graphic applications, networking, coroutines. We spent all our time together. But then, on one fateful day, something horrible happened. Conky.
So, who here knows what Conky is, or has ever used it or heard of it? Raise your hand if you have. Good, that makes it even more exciting. So, Conky is kind of a silly piece of software, but very useful. And this is Conky, what it looks like. What it lets you do is just monitor the resources on your computer.
It's a very geeky thing to do, but you can see CPU, RAM, disk usage, stuff like that. And this is the Conky setup that I have on my computer. So I could do all this stuff with Python, I was very happy with it. And then, when I encountered Conky, I went to configure and customize it.
And it has these 300 built-in widgets for CPU, RAM, all that stuff. But then, you always have something custom, right, that's related to your setup. And for me, I had these five things I wanted to watch: Debian packages, do I have any updates, my domain controller password, stuff like that.
So the way Conky works is, if you want to extend it, they have a simple hook-in where you just put some scripts in your config file, and then you specify how often you want them to be called, something like the sketch below. So I was like, cool. So I created five scripts in Python, of course, the love of my life. And I wanted them to update every second.
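A minimal sketch of such a Conky hook, assuming the modern Lua-style config and a hypothetical script path (this is not the speaker's actual setup):

    -- conky.conf sketch: ${execi N cmd} runs an external command every N
    -- seconds and displays its output; ~/bin/check_updates.py is hypothetical
    conky.config = {
        update_interval = 1,
    }

    conky.text = [[
    CPU: ${cpu}%   RAM: ${memperc}%
    Debian updates: ${execi 1 ~/bin/check_updates.py}
    ]]
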
So I was like, okay, so Conky is going to spawn these processes every second, and I have five of these scripts. So I just did a calculation and went, oh, crap, it's going to run this 400,000 times every day. So I thought, hey, maybe that's okay, let's see what happens. So I did. And initially, I just made a very simple Python hello world, kicked it off and
said, okay, what's this going to do? And Conky itself is implemented in a very efficient fashion, because it would be really lame if your resource manager put a lot of load on your system, right? So when I did this, my CPU usage shot up to 15%. But then, when I just wrote hello world in Bash instead of Python scripts, it used
a lot less CPU. And so I ended up just not using Python for this, slapping together some Bash scripts, and not doing it every second but every 10 seconds and whatnot. But over time, over the years, I kept finding cases where Python hasn't worked out for me. And it's always frustrated me, because I hate programming in Bash.
I mean, if statements and for loops in Bash are very ugly and painful to work with. So it frustrates me if I can't use Python. And here are two other examples, separate from the Conky example, where there are sometimes issues with performance, and maybe you want to measure that performance.
So one example is, in your text editor you can hook in a lint checker whenever you save. I use Vim, and whenever I save my source code, it goes and runs flake8 on it. And if you check this on a very simple file, like a single line, print hello world, it takes over 300 milliseconds, right? Now, from a usability perspective, anything beyond 100 milliseconds people will perceive, and it's very frustrating when you're editing code: you save, and then you feel this kind of lag as you're saving the file.
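You can reproduce that kind of measurement directly; the 300 ms figure is the speaker's, your numbers will differ:

    # Lint a trivial one-line file and time it (Bash; flake8 must be installed)
    echo 'print("hello world")' > hello.py
    time flake8 hello.py    # the speaker saw over 300 ms for this
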
But you know, it's very useful to have the PEP 8 checking, the syntax checking, so I accept it. And what I'm going to present in this talk is an alternative to CPython that has a much faster startup time, which is MicroPython, but we'll see that later. Another use case is bash completion.
So whenever you're on the command line doing tab, tab, tab as you're typing, you really want a responsive experience, so anything that takes more than 100 milliseconds, you start thinking, what's going on? And when I first started working with pip, I was like, oh, cool, they have bash completion for pip, so I set it up. And when I did pip, tab, tab, tab, sometimes it can take up to a second
to do the bash completion, because pip is a little heavy to import and execute. And this is very frustrating, because when I first started using it, I thought maybe there was something wrong with my terminal, with the SSH connection, maybe the network was stuck or whatnot. And so in the end, I just disabled it and stopped using it, because when something gets
that slow, then in terms of usability it impacts the functionality of it. So we're going to look at two things to measure in script execution. Say you have a specific script, a program, and you're going to execute it; it's going to start, it's going to end, and you want these two simple
things. You want to know the elapsed time, and also the clock cycles. Now why the clock cycles? It's funny: from an end user perspective, all they care about is the elapsed time. But when I started running these benchmarks on this laptop, which is five years old, and then on my other desktop, which is a newer computer, everything on the desktop was a lot faster,
which is natural, you know, newer CPUs, and that can be kind of frustrating when you're comparing results. But what's cool is if you compare the clock cycles, they'll be the same, right? Because if you have the same OS and the same software, generally it's going to go through the same number of instructions to execute.
So clock cycles can be useful to compare more fairly, and elapsed time is the real experience of it. And then when you're looking at the case of Conky, maybe you're planning to actually execute some sort of crazy job that every second is going to do some polling, you know?
And you want to measure how heavy this is going to be on the CPU. So clock cycles will tell you that. So why measure one when you can measure five? Since I'm doing these benchmarks, and I know already that a Bash hello world can be faster than, let's say, Python, I said,
let's throw five interpreted languages into the mix. So we have Python, Bash, Perl, Lua, and AWK. All of these languages are interpreted like Python, and they have variables and functions and for loops. And so I said, okay, this is a fair comparison, because it'd be unfair to compare it to C or Go, which are compiled.
And so I created these five files, and essentially they're all the same implementation: adding one plus one. So all of them, if you execute them in their language, should output two; a minimal sketch of what they look like is shown below. And then we're going to run all this through the performance tools and see how Python compares. Is it unfair to expect a very fast startup time from an interpreted language?
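The transcript doesn't reproduce the scripts themselves; the Python version is a one-liner along these lines, and the Bash, Perl, Lua and AWK files are the equivalent one-line print of 1 + 1:

    # add.py - run unchanged under both python3 and micropython
    print(1 + 1)
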
The first one, add.py, will be used for the Python 3.5 test as well as for MicroPython. So the same script will run through both interpreters and we'll see the difference in performance. So how do we measure execution time? The easiest way is to just say time.
And this is available in Bash on Unix systems like Linux and Mac. Now this time is actually built into Bash, and it's different from the time binary that you'll find on your computer.
It's just important to know that distinction, because some of the command line options are different, and when you look at the man page for time, it's likely describing the standalone binary rather than the Bash builtin. But generally, it's a very easy way of measuring: you say time and then put the command, and the only number you really care about is the first one, labelled real.
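A sketch of that measurement; micropython is the name of the Unix port's executable, and the figures in the comments are the ones quoted in the talk, not values you should expect exactly:

    # Bash builtin `time`; the number to read is the first one, "real"
    time python3 add.py         # real ~0m0.048s on the speaker's laptop
    time micropython add.py     # the talk reports this at under a millisecond
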
And this shows that when I just run a script that prints one plus one in Python, it takes 48 milliseconds on this machine. Now perf is a more thorough approach, and it's a very powerful, very useful command. It's a Linux performance tool that comes with Linux machines,
and you can use your package manager to install it. The perf command has a bunch of subcommands; this is the stat subcommand. And I just highlighted in red some things that are worth noting. One is the -r 10. So any time you do benchmarking, you don't want to just take one sample, right?
Because it's not really going to be indicative. So what's nice is that in the same command, unlike time, you can say, hey, take a sample of 10 runs and give me the aggregated results. So here it's saying it's running 10 times. Another thing to note is the context switches.
So if you're running a benchmark, or you're trying to measure the performance of a piece of software, and you're running a lot of other programs on that computer, it's going to give you inconsistent results, because the CPU is context switching between those other processes. So when you get zero context switches, it means this is good:
you ran the benchmark on a machine that didn't have a lot of other stuff going on. Then we can get the cycles number. So when you just say print one plus one in Python, it goes through 50 million instructions, CPU cycles, which is surprising.
I don't know much about computer architecture and CPUs, but that's how much it goes through. Then the last thing is the elapsed time, and you can see that this took 17.8 milliseconds. And the percentage sign, the plus or minus, is the variance.
So you just want to keep an eye on this, because if it's plus or minus one percent, that's fine. But if you have 100% or 200% variance, it means you're getting a lot of inconsistency between those different samples, and then you should step back and say, well, I shouldn't trust this number, right?
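The corresponding commands look roughly like this; perf usually comes from a linux-tools package, and the exact counters printed depend on your kernel and hardware:

    # perf stat: run the command 10 times and report aggregated statistics,
    # including context-switches, cycles and elapsed time with its variance
    perf stat -r 10 python3 add.py
    perf stat -r 10 micropython add.py
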
So now I jump back to my love affair with Python. I bumped into this little beauty earlier this year, and I purchased it on January 15th, ten years after first discovering Python. And it's a popular piece of Internet of Things hardware,
and I got it just to tinker with it. It's a microcontroller running at 80 megahertz. And amazingly, you know, it can run Python through this implementation called MicroPython, which runs on bare metal. So when I got it, I just got it to play with Internet of Things, but then I started playing around more with MicroPython, and I discovered that you can install it on servers, on desktops.
It's not only for microcontrollers, right? So then I started to compare the performance of MicroPython against regular Python, and you'll see the results. But just a brief word on MicroPython: it's a lean implementation of Python 3. They had to be much more cutthroat about the way they implemented it, because it's targeted at microcontrollers,
which have limited CPU and RAM and the rest of it. And so what they've done is they've taken a selection of core libraries and implemented them, with a naming convention of putting the letter u, like micro, in front of them. So one of the ones I'd highlight is usocket. It's just like the Python standard library socket, and that lets you open HTTP connections,
so you can do HTTP client, server, and TCP and UDP. There's uos, which lets you spawn processes and retrieve output. And JSON, regular expressions, time, etc. So it's got a nice mix of quite useful packages taken from the core library; a tiny sketch of what using them looks like is below.
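A small illustration of those u-modules on the Unix port; module names and the exact functions available vary a little between MicroPython versions and ports, so treat this as a sketch:

    # save as umodules_demo.py and run with: micropython umodules_demo.py
    import ujson   # lean counterpart of json
    import uos     # lean counterpart of os
    import utime   # lean counterpart of time

    info = {"system": uos.uname().sysname, "timestamp": utime.time()}
    print(ujson.dumps(info))
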
So let's look at the numbers. When we run those scripts that just add two numbers, 1 plus 1, a single text file with a single line of code interpreted in a number of different languages, we see Python is way out there. It's around 15 to 18 times slower
than all of these other interpreted languages, Lua, Bash, Perl, and all that, right? But what's really interesting is that MicroPython beats them all, and it executes so fast just adding these two numbers. And I was really surprised that you can do this in under a millisecond, because once you get to that level,
just the process of spawning a process, opening the text file, reading the contents, compiling it, right? And it's amazing that it does it in under one millisecond. So I really encourage people to explore MicroPython,
you know, outside the microcontrollers, even on their desktops and their servers. It could be useful in a whole array of stuff. I used the same tools for both sets of graphs, and this one is in terms of CPU cycles. And so we see that Python needed 46 million CPU cycles,
but MicroPython only needed one. And so it's a much leaner implementation. And now, you know, it's about, I guess, 30 or so times less, which means that I could probably use this now
for Conky, and I wouldn't get this high CPU usage. So, cool. So the next part of the presentation is just looking at a real life use of MicroPython outside of the microcontroller to do something useful. So what I've done is pip bash completion
using MicroPython. And what's cool about that is pip already has bash completion built in, so we can compare the two in terms of performance and in terms of features. And besides speed, there's also stuff that you can do in MicroPython that would be frustrating to do in other languages with equally fast startup times,
like Bash, if you had to go and connect to a web server and fetch a listing of the wheel packages, which is what we'll be doing here. It would be frustrating to do in Bash. So, cool. So let's see how the built-in pip completion performs. When you press tab for the auto-completion, it imports pip
and then calls it. Just importing the package, before I've even called the auto-completion, takes one second on my laptop, and on a lot of the machines I've seen it's heavy. So you're really going to feel that when you press tab.
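The stock completion is enabled with pip's own completion command, and the import cost is easy to see directly; the one-second figure is from the speaker's laptop:

    # Enable the completion that ships with pip (it re-runs pip on every <tab>)
    pip completion --bash >> ~/.bashrc

    # The cost of merely importing pip, before any completion logic runs
    time python3 -c "import pip"    # about one second on the speaker's laptop
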
The general usability benchmark is that you want it to be under 100 milliseconds. Generally, anything under 50 milliseconds people can't perceive; you know, they can't perceive more than 24 or 25 frames per second. But anything more than 50 or 100 milliseconds, you're going to feel that kind of jittery lag, and anything in the terminal while typing will be like that.
The other thing is that the built-in pip completion only completes the subcommands. So if you do pip, it says install and whatever, but it won't complete the names of packages or stuff like that. So we're going to take it to the next level and we will also complete all that. So now
the code that I'm showing is more of a demonstration; it's not the perfect implementation. So for the demonstration I have a local PyPI mirror that I've created: I've just grabbed a whole bunch of wheels, over 400 of them, and I have them served
from a static web server on nginx. And to do that you just set a couple of environment variables so that pip looks in that location, something like the sketch below. I'm doing it as localhost, but you can do it on any web server in your environment.
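The exact variable values aren't shown in the transcript; pip reads its options from PIP_* environment variables, so something like this points it at a local nginx mirror, with the URL being a hypothetical example:

    # Point pip at a directory of wheels served by a local nginx instance
    export PIP_FIND_LINKS=http://localhost/packages/
    export PIP_NO_INDEX=1    # don't fall back to pypi.org
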
And so we'll auto-complete the subcommands in the implementation, and also, every time you press tab, it will connect over HTTP and check on nginx what the set of wheels is. So the moment a new wheel is published on the web server and you press tab, it will auto-complete it, which is nice. Because sometimes,
when auto-completion is slow, there's a trick where people cache the results, but that's frustrating because a lot of the time you have to log out and back into the terminal; this will be live for each call. It will also complete not just the package names but the versions as well, so you'll get all of that information in the terminal when you do tab completion. So I won't go into too much detail about this,
but there are different ways of doing bash completion; usually, if you want it to run live each time, you have a bash function and you use complete in that fashion. I've set the -o filenames option because when we say pip install,
click==1.0, the equals sign is actually a special character in bash and it needs to be escaped, and you'll see that; this is how you say allow special characters in file names. And then you'll see that all the work is being done by this pip comp script, and we'll look at its source code. It receives one argument,
COMP_CWORD: if you're completing the first argument it will say 1, and if you're completing the second argument it will say 2. We need to keep track of that, because if you do pip, space, tab, then we want to say install, freeze, whatever, and if you do pip install, space, and then tab, we want to give it the package names.
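Roughly what the bash side looks like; the names _pip_complete and pip_comp here are illustrative rather than the speaker's exact code:

    # Register live completion for pip; -o filenames makes bash escape
    # special characters such as the == in click==1.0
    _pip_complete() {
        local cur=${COMP_WORDS[COMP_CWORD]}
        COMPREPLY=( $(compgen -W "$(pip_comp "$COMP_CWORD")" -- "$cur") )
    }
    complete -o filenames -F _pip_complete pip
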
So yeah, let's look at the code. These are all the packages that we'll be using in this implementation, and the line at the top just says, hey, use the micropython interpreter instead of python. urequests is very similar to the requests that we all know and love,
except it's implemented to work with usocket and is a leaner implementation, and then there's os. So this function does the bulk of the work: the first line will go and connect to the URL provided using urequests and
fetch the HTML output. The second line will parse the HTML and get the list of wheel files that are hosted on the web server. And then we just loop through the wheels and split by dash to get the name and version. The wheel file
format is really cool, because the name and the version are encoded in the file name, so you can get all that information. And that's it, that's the whole function. The lower bit is just the main, and all it does is check: if you gave argument one, then this is the list of subcommands that we'll complete, and if you gave argument two, then
we don't want to hard-code the URL, so we'll get what the user has specified in their environment variables by reading PIP_FIND_LINKS, call get packages, and then print it out newline-delimited. So to do bash completion you just have to spit out a list of values that are space- or newline-delimited. A sketch along these lines is shown below.
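This helper is a reconstruction in that spirit, not the talk's exact script; it assumes an nginx autoindex-style listing, urequests from micropython-lib being available to the Unix port, and uos.getenv being present:

    #!/usr/bin/env micropython
    # pip_comp sketch: print pip subcommands for COMP_CWORD == 1,
    # otherwise print name==version for every wheel on the local mirror.
    import sys
    import uos
    import urequests

    SUBCOMMANDS = ["install", "download", "uninstall", "freeze", "list",
                   "show", "search", "wheel", "completion", "help"]

    def get_packages(index_url):
        html = urequests.get(index_url).text
        packages = []
        for chunk in html.split('"'):        # hrefs in the autoindex are quoted
            if chunk.endswith(".whl"):
                name, version = chunk.split("-")[:2]
                packages.append("%s==%s" % (name, version))
        return packages

    def main():
        cword = sys.argv[1] if len(sys.argv) > 1 else "1"
        if cword == "1":
            print("\n".join(SUBCOMMANDS))
        else:
            url = uos.getenv("PIP_FIND_LINKS") or "http://localhost/packages/"
            print("\n".join(get_packages(url)))

    main()
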
And so let's compare the performance. We know that just importing pip for autocompletion can take up to a thousand milliseconds, so how does this fare? We see that when it's simply spitting out the subcommands, it does it in 3.9 milliseconds, which is way under 100, and
when it goes to fetch the package names, it opens the HTTP connection, gets the listing, parses the HTML and prints it out, all in 11 milliseconds, which is fantastic performance, way more than what we need. So let's do a live demo.
Okay, so the first thing that I will do is just set up a clean environment, so this shouldn't have anything installed on it,
and we can start pipping. So let's just look at pip comp for a second. If I pass it one, it should tell me the commands, which it does; if I pass it two, it should connect to my web server, and we can see the nginx web server here. It just
has this bunch of wheel files, and it's just opening this URL, fetching it, and then spitting out the package names, the equals signs, and the rest of it. Now let's just check how many packages we have in the demo: 409 wheels. And let's start doing some autocompletion. So if we do
pip, well, let's do pip tab. So this is working now, I can complete the commands. I can say pip plus a letter or two and it gives me install. Now I can just do tab and it shows me 409 packages. So quite a few. I can go and say, okay, show me all the packages beginning with c, and you'll see
that it's showing click and colorama, you know. And now I can go and say, okay, I want to install click, let's say version 1.0, and do tab, and you see the equals signs get escaped correctly. And let's install colorama too.
We can do pip freeze to see what's installed, and why not upgrade click to 6.0, and if we do pip freeze we can see it's been
upgraded to 6.0. So that's it, I'll just, whoops, yes, that's the whole presentation, thank you.
First of all, thank you very much for your talk, it's actually a really interesting approach to use it for completions. But I'm wondering to which degree this MicroPython is compatible or incompatible with regular
Python, and is there a realistic chance to port larger existing programs from Python into MicroPython? Thanks for the question, very good question. So it is actually challenging: if you just took a big code base like pip, for example, or flake8,
and you're trying to import it, a lot of stuff is just not going to work. But you know what would be really great, and Armin talked about this in the keynote a few days ago: let's say you have the specification of Python.
It would be really nice if you could then have a formalized subset of Python, called the MicroPython specification, and then people could come in and say, hey, I'm going to write flake8 so that it supports both the full-fledged Python and MicroPython. And that's happened in the past with Jython, for example:
Django for a long time supported Jython, and when you ran Django it would make sure that the code would work equally well in Python as well as Jython, even though the Jython runtime environment is so different. So I think right now, at this stage, there will be
issues, but I think there's just a lot of opportunity and potential. MicroPython is really stable and it's used, so I think there's a lot of potential and opportunity for it. But yeah.
Thank you, that was pretty informative. I just wanted to add that apparently Linux has a long tradition of solving these problems, like the one you had with Conky: you
just spawn a long-running process, because it generally boils down to the startup time of the Python interpreter. It's a little misleading to call this the performance of Python, because mainly this is bootstrapping time: if you
put nothing instead of the print, then you would get approximately the same time, because this is all about loading libraries, etc. But yeah, like in the case of the awesome window manager, when you're writing scripts that need to run
fast, then mainly you just spawn a long-running process and then use some type of buffering. Like with Conky, you could just dump your data into a file, then cat it in Conky, and then you would get
some kind of real time, because that's what you want. 18 milliseconds is not so much, really. Yeah, you know, what is 18 milliseconds, right? When I first saw it, I thought it's nothing. But I think it's always good to look around at what's there,
so when you can see an implementation of a subset of Python running with 30 times fewer CPU cycles, maybe there are opportunities that can arise from it. So think of the presentation more as an eye opener to possibilities. But what you said is very important and it's true. I mean, one could argue that
the architecture of Conky is messy: you're just going to spawn a process every time, right? Even if it's fast, that can feel very sloppy. Maybe I should have a long-running process and then do some inter-process communication, like D-Bus or something like that. But one of the reasons why
calling other processes and just capturing their output is such a popular method of hooking in is because it's so easy to implement and it's very widespread: Conky uses it, Vim uses it, Emacs uses it. Like in Vim I'm using Syntastic, and when I press save, each time it's calling flake8,
even though maybe someone could have made a flake8 server and then it could call it more efficiently. The same thing is true of Jenkins, as an example: when it wants to hook into Python, a lot of the time it's spawning these processes. So it's a very popular way of hooking
in, executing programs and just capturing their output. But as you said, it's not necessarily a fair assessment of Python, and I wouldn't put it as a criticism of Python, because I've never had this sort of issue, at least in the systems I create or in production environments.
If I have a startup issue, I create a long-running process and then do some inter-process communication and whatnot, like you said. But yeah, thank you. Hi, thanks for the presentation. I don't know anything about MicroPython apart from what you just showed us, but I was wondering
if there's any split between MicroPython 2 and 3, or is it only 2, or only 3? Yeah, good question. So MicroPython isn't that old, so when they created it they said, we're not going to support Python 2, so the whole thing is Python 3 from the start. And that's a very important thing, actually, because
you don't want to go back at this stage, right? So it's all completely cleanly implemented in Python 3, and it's the same exact syntax as Python 3. There are some differences, some edge cases, but you have all the same data structures, functions, classes and the rest of it. The more I worked with it, the more I didn't encounter
any surprises. Thank you for your talk, quick question: do you think MicroPython
will have an impact on Python 3 itself, I mean CPython? Sorry, say the last bit again, it just cut off. Do you think the good things in MicroPython will have an impact on the CPython implementation? Hmm, well, I hope so, because
you know, before January 17th, if someone came to me and said, hey, Python should start in 1 millisecond, I'd be like, buzz off, 18 milliseconds is great, it's such a powerful language, right? In a way you could argue Bash, or some of the other languages, are very limited;
in a way they're not as powerful as Python. But I think now that MicroPython has been implemented, and there's such a growth in Internet of Things and so much contribution to MicroPython, and stability, I think it's going to make us ask that question again. You know, Armin again raised the question of whether
PyPy has to follow the CPython thing; maybe we can have a Python specification and then a subset. Or, like you said, maybe some of these tricks that MicroPython has, CPython could use. But I'm not sure about that, because keep in mind MicroPython has
drawbacks: there's no multithreading, no multiprocessing, and that's how it was able to get away with this kind of simplification. But it's amazing that you can get that much functionality with that few CPU cycles, so I'm surprised by it.
Hi, thank you for the great talk. Regarding the problems with the incompatibility of code: is this at the syntax level, or is it the lack of the standard library? What kind of incompatibilities do you encounter?
So, good question. For me, when I implemented this, one good example is requests. Requests is such a popular library, right, that everyone's like, do I have to directly talk to a socket? Because they give you socket, but they don't give you
the Python 3 libraries that you have for the HTTP client, right? So imagine writing low-level socket code to connect to an HTTP web server; it's a pain. So the MicroPython guys made urequests, and they made ujson,
regular expressions and the rest of it. So I think a lot of the popular libraries and popular functionality are there. It's not that the Python code that you write is different, that the list is different or the string is different or the function is different, because the code that you saw looks exactly like regular Python, which it is.
It's just that if you take any of these packages and you try to import, I don't know, collections, or even re for regular expressions, you don't have the full re. They've made ure, which has a lot of the functionality, but it's limited. So
you'd have to change a lot of your import paths and you'd have to be thoughtful. I think if the code isn't so crazy and complicated, then it's definitely worth it, because like you saw, the pip autocompletion isn't 10,000 lines of code; I think it was just
15 or 20 lines of code. So if you have this kind of small thing, or if you're making something new, I think it's definitely worth trying out. All right, thank you very much, Marwan, for this talk, and see you later.