
Python in the Hardware Industry


Formal Metadata

Title
Python in the Hardware Industry
Series Title
Number of Parts
9
Author
License
CC Attribution 4.0 International:
You may use, adapt, and copy, distribute and transmit the work or content in adapted or unchanged form for any legal purpose, as long as the name of the author/rights holder is mentioned in the manner specified by them.
Identifiers
Publisher
Publication Year
Language

Content Metadata

Subject Area
Genre
Abstract
This talk is about the usage of Python inside Sensirion, a hardware company producing sensors. We will see where and how we rely on Python, and how its usage evolved from collections of small Python scripts in each department to a stack of Gerrit, Jenkins and devpi used to develop, test and deploy Python packages to 100+ non-software engineers in the lab.
Transcript: English (auto-generated)
So, yeah, welcome to my talk. My name is Raphael. I actually started here at HSR in electronic engineering. I have been working at Sensirion for four years now and have been using Python extensively for about two years. First, I will show you a few examples really fast,
because I have a lot of examples and not so much time. Then I'll show you some pains we had when Python usage was exploding at Sensirion, and then some of our solutions that hopefully work. So first, how do we use Python? For that, I just need to give you a really, really short introduction to what
Sensirion actually does. So basically, we design custom ASICs, these electronics, and have them produced externally in a standard process, like all electronics are manufactured today. These get delivered to us as wafers, these shiny silicon things. And then we apply a lot of magic. This is actually where the real thing happens.
We are testing the stuff, cutting the wafer into the individual sensors, adding our sensor elements to it, doing lots of magic stuff, and calibrating the thing. So as the end product, you get a tiny little sensor, which is fully calibrated, integrated, and you can talk to it via I2C or SPI,
depending on what product you order. So yeah, of course, we make them tinier and faster and stuff, or make them more robust, like the thing you see on the right. It's for automotive, so a little bit more stable. Yeah, so I guess you figure it,
we produce hardware, not software. But we use a lot of software to manage the production of the hardware. The most production-critical software is actually written in C#, but for R&D purposes, we use almost only Python. And this is mostly written by non-software engineers. So yeah, I will show you what that means.
This is kind of like a really short and horribly simplified lifecycle of when we develop a new sensor. I mean, there's some early experimentation. Down there, you have some prototype. You order the first silicon. You get the first wafer, need to qualify, test it.
Maybe you arrive at the first production-ready thing, and then you have the final product. It's horribly simplified, and it's overlapping and stuff. But yeah, you get the picture. So during steps one to four, we actually do nothing in C#, but everything in the lab with mostly Python.
So I'll show you a few examples. One example is data analysis. You can do data analysis with Excel or with MATLAB or whatever, but we're trying to standardize on pandas as the data analysis library for Python. And if you use it for data processing,
we use Jupyter notebooks to work interactively, and PyQt or PySide, which are Python bindings for Qt, to create GUIs if you have recurring analyses that you need to do. And yeah, this mostly works on two types of data: data which is for the whole wafer, and data
which is from experiments with individual sensors. I will focus on the wafer data for this example. So if you analyze a wafer, you get data from many, many inputs. You have the supplier of the wafer giving you data in different formats, like Excel files,
comma separated values, or some JSON data, or whatever. And we also have our internal data, which is mostly stored in SQL database. Those formats change over time. So even if you have the same supplier, one day he sends you this file, the next day he sends you another file, or you requested some specific measurement.
So the first thing we do is we store it in standardized CSV formats and store it at a specific place. So this is a lot of dirty, quickly evolving Python scripts because you want to get access to the new data as fast as possible. But this enables us to separate the ugly code from the nice code.
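That ingest step could be sketched roughly as follows. The actual Sensirion scripts are not shown in the talk, so the file layouts, column names, and aliases here are all invented for illustration:

```python
import csv
import json
from pathlib import Path

# Hypothetical canonical column names for the standardized CSV.
CANONICAL_COLUMNS = ["wafer_id", "die_x", "die_y", "value"]

# Map each supplier's (made-up) column names onto the canonical ones.
COLUMN_ALIASES = {
    "Wafer": "wafer_id", "wafer": "wafer_id",
    "X": "die_x", "x_pos": "die_x",
    "Y": "die_y", "y_pos": "die_y",
    "Measurement": "value", "val": "value",
}

def load_records(path):
    """Read one supplier file (CSV or JSON) into a list of dicts."""
    path = Path(path)
    if path.suffix == ".json":
        rows = json.loads(path.read_text())
    else:  # assume some CSV dialect with a header row
        with path.open(newline="") as f:
            rows = list(csv.DictReader(f))
    # Rename supplier-specific columns to the canonical names.
    return [{COLUMN_ALIASES.get(k, k): v for k, v in row.items()}
            for row in rows]

def write_standardized(records, out_path):
    """Write records to the standardized CSV format, fixed column order."""
    with open(out_path, "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=CANONICAL_COLUMNS,
                                extrasaction="ignore")
        writer.writeheader()
        writer.writerows(records)
```

The point of the two functions is exactly the separation the talk describes: `load_records` is the "dirty, quickly evolving" part that changes whenever a supplier changes their format, while everything downstream only ever sees the standardized CSV.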
And now I show you the nice code. This is actually a user interface where you can select which wafer you want to analyze, some parameters. I don't really know what this stuff means because I don't work there, but you can create nice looking plots. So you have the plot in the middle. It's interesting as you see your wafer.
And you see it seems to have some property which is specific to the position on the wafer. And in the plot on the right, you see two measurements. And how they correlate to each other. So here's another picture where you
see the measurement in the wafer in a lot more detail. And often, when you have a problem with a machine, we test always 32 devices at the same time on the wafer. And then you get patterns in your measurement results where you have the errors always in the same spot.
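A toy illustration of such a wafer map with pandas (the column names are the hypothetical standardized ones, not Sensirion's real schema): pivoting per-die measurements into a 2D grid makes position-dependent effects visible at a glance.

```python
import pandas as pd

# Hypothetical standardized wafer data: one row per die position.
df = pd.DataFrame({
    "die_x": [0, 1, 2, 0, 1, 2],
    "die_y": [0, 0, 0, 1, 1, 1],
    "value": [1.0, 1.1, 1.3, 1.0, 1.2, 1.4],
})

# Pivot into a 2D wafer map: rows are y positions, columns are x positions.
wafer_map = df.pivot(index="die_y", columns="die_x", values="value")
print(wafer_map)

# A quick check for position-dependent behavior: mean per x column.
# In this toy data the value clearly rises with die_x.
print(wafer_map.mean(axis=0))
```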
So yeah, the conclusion: pandas and matplotlib are very powerful tools for doing data analysis. And it's important that you standardize your input data, so you can reuse much of your plotting and analysis code to get access to your data.
And if you standardize how you present the data, everybody can just look, or almost everybody can just look at the plots and understand them because they know how they were created. So another example is where we had a problem on certain hardware. We had a problem with noise, electric noise. And on the left, you see yellow is the signal.
And the pink one is like the FFT of the signal. The left one is a good example, and the right one is a bad one. So we needed to find where this came from. And this was on a production machine, so we had to do it quickly so we could continue the production. So we recorded the noise more or less automatically
and then wrote some ugly code, like you see here, I'll just go through it. It's just kind of throwaway code, because what we really wanted is this one. So here you see on the x-axis which channel of the 32
we connect simultaneously. And you see the noise is very specific to channels 11 and 13; there's some pattern in there. And actually, we only took the machine offline to measure this, and then could analyze the data outside of the production.
And this was really awesome. Then we could find the issue in the layout. Some sensor, some other connected device was giving us a lot of noise. On the bottom half, you see the noise without this sensor connected. So it really fixed the problem.
Yeah, we could fix the problem by changing the hardware. And this measure and analyze offline approach saves a lot of time. So we could just measure everything and analyze it offline where we had time and could figure out where the problem really, really was. So another example is automated testing.
A lot of the time, you have a prototype run of 10 pieces, and you need to test them all. But this is not the final product, so you have an ad-hoc setup that maybe looks like this. You just have a power supply on the left, your main electronics in the middle, and some measurement instruments.
And you really just want to measure a few samples. And now it's tempting to do this manually. I mean, it's only five boards. It doesn't scale if I automate it. I just do it by hand. But as a software engineer, and you were probably a lot of software engineers,
you know tests should be automated. And I actually have the same slide as the speaker before: automate all the tests, yeah. And the cool thing is, lots of this lab equipment can be automated. If it's a little bit old, it has an RS-232 connection,
or a UART, or USB. An interesting standard that is upcoming is LXI, which works over Ethernet. And if you're lucky, it supports the IVI API, which is a standardization across different suppliers of lab equipment. And if you're really, really lucky,
then it's supported by the Python IVI package. And if it's not supported, just look for a device with a similar name and try it, it may work. I mean, you may need to tweak some timeouts and settings, but mostly it works. So yeah, just automate the whole test and put it in a Jupyter notebook.
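Most of this equipment speaks SCPI-style text commands regardless of the transport. A rough sketch of what driving such an instrument can look like — the transport here is a stand-in class for illustration, and the commands (`*IDN?`, `MEAS:VOLT:DC?`) are generic SCPI queries that many, but not all, instruments support:

```python
class ScpiInstrument:
    """Tiny wrapper that sends SCPI text commands over a transport.

    `transport` only needs write(str) and readline() -> str; in real use
    it would wrap a serial port (e.g. pyserial) or a TCP socket for an
    LXI/Ethernet device.
    """

    def __init__(self, transport):
        self.transport = transport

    def write(self, command):
        self.transport.write(command + "\n")

    def query(self, command):
        self.write(command)
        return self.transport.readline().strip()


class FakeTransport:
    """Stand-in 'instrument' for illustration: answers two common queries."""

    def __init__(self):
        self._reply = ""

    def write(self, data):
        cmd = data.strip()
        if cmd == "*IDN?":
            self._reply = "ACME Instruments,Model 42,1234,1.0\n"
        elif cmd == "MEAS:VOLT:DC?":
            self._reply = "3.300000E+00\n"

    def readline(self):
        return self._reply


inst = ScpiInstrument(FakeTransport())
print(inst.query("*IDN?"))                    # ask the instrument to identify itself
print(float(inst.query("MEAS:VOLT:DC?")))     # read back a DC voltage
```

Swapping `FakeTransport` for a real serial or socket transport is then the only change needed to run the same test script against actual lab equipment.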
This will then maybe look like that. So on the left, you see some code to get input from your lab measurement tools. These plots you gather directly from the tools via Ethernet. And then you can do some processing and further analysis and plotting
and actually end up with a very nice looking report which satisfies your manager's needs or whatever. This is from a coworker of mine. He took it over the top and automated it completely: you just have to press one button, the whole test runs through, and you get this nice looking PDF saying whether the device worked or not.
And this leads to reproducible measurements and it scales also for the next 10 prototypes you need to test, because almost always you find some error, you need to fix it and order 10 more of these devices and test again. And also you can store the test description, the instruction together with the code
and you can also give it to a non-trained engineer. Just use this notebook, press this button, follow the instructions and you can basically let an apprentice or some unskilled person do the test. And you don't have to fiddle with the settings of your measurement instruments which can be a pain.
Yes, another use case is we don't just produce our sensors, we also produce little modules. This is the smart gadget development kit, you can get one over there at the booth. Yeah, mostly it consists of a low power microcontroller,
some of our sensors hopefully and some other peripheral. So we use it to do further calculations with the sensor values and compensation for self-heating or whatever and to support additional protocols like Bluetooth in this case or just to have something you can show off to a customer because just a little tiny sensor you see here,
it's a bit boring just for itself. So yeah, mostly you have some reference signal processing implemented in Python or whatever which was developed by some R&D engineer in the lab and now you need to port it to the embedded system
using C or C++. You don't have a floating point unit, you have very little RAM. You could probably use MicroPython, but most of the time you just work in C because it should be low power. Now we need to verify that it still works the same. So, last year there was the great talk
from Armin Rigo about CFFI. This really helps us, because we now use it. And there is a little hack I want to show for getting started with CFFI with low overhead: you make an all-includes header file which contains the whole public API
of your embedded code that you need to test, squash it together with GCC by preprocessing everything, and output this into a text file. Then you use CFFI to just parse this text file. This works mostly well. Then you need to compile the library, of course, and then you can just call the C code from CFFI.
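The preprocessing trick can be sketched like this. GCC's `-E` output contains linemarker lines (`# 1 "file.h"`) that `cffi.FFI.cdef()` cannot parse, so they have to be filtered out first. The header name and the exact GCC invocation are illustrative, not taken from the talk:

```python
import subprocess

def strip_linemarkers(preprocessed):
    """Drop '# <lineno> "<file>" ...' linemarker lines from `gcc -E` output."""
    kept = [line for line in preprocessed.splitlines()
            if not line.startswith("#")]
    return "\n".join(kept)

def preprocess_header(header, cc="gcc"):
    """Preprocess an all-includes header and return the flattened
    declarations, ready to be fed to cffi's FFI.cdef()."""
    # -E: run the preprocessor only. (-P would already omit linemarkers;
    # we filter manually here to show what is being removed.)
    out = subprocess.run([cc, "-E", header], check=True,
                         capture_output=True, text=True).stdout
    return strip_linemarkers(out)

# Typical use (sketch):
#   ffi = cffi.FFI()
#   ffi.cdef(preprocess_header("all_includes.h"))
#   lib = ffi.dlopen("./libembedded.so")   # the compiled embedded library
```

Note that real-world headers often need a bit more massaging (compiler-specific attributes, macros cffi doesn't understand), so "works mostly well" is the honest description.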
Then you can do all your analysis in Python. This is some random data plotted with Python but it could be your embedded algorithm that you're testing and verifying if it still works correctly. So, whoops.
So let's go a step further. Yeah, Python usage was growing at Sensirion, because in the beginning it was very easy. It was around 2008, and they decided (I wasn't part of the company at that time)
to just use Python(x,y), which is a Python distribution for Windows, in version 2.6, and install it on every machine that wants to use Python. It comes pre-installed with libraries, so every script runs on every machine, everybody's happy, and you don't need to care about dependencies because everything is there in the distribution.
So yeah, it ran really well, until it didn't. The problem is, this distribution ships with a whole lot of libraries, sometimes several for the same purpose. Some of these libraries are of rather low quality, and if you have different groups or people using different libraries, the exchange of code gets difficult.
So, and also Python 2.6 started to get outdated. I mean this, like jump forward four years, there was like Python 3.5 and Python 2.7, yeah. So individuals also required newer versions of libraries, like the newer Pandas version which fixed a lot of bugs and some special package only provided wheels
for Python 2.7 and not 2.6. So part of Sensirion upgraded to Python 2.7, the others didn't. So all of a sudden, all the benefits of having the same base installation were gone.
Your code was only running inside your group, or on your computer, and it was a pretty big mess. It also led to custom Python installation scripts for every group. So you have these funny setup instructions: check out this SVN folder at this magic place,
copy the folder from there, set the Python path to this place, copy some .NET DLLs to this path, edit this config, add these values. This was different depending on the department where you worked. It was just a collection of paths and hacks to get your base installation up to date.
And yeah, it was rather painful. And since we have this base installation, no dependency management, some people started to use SVN as a packaging and distribution system. So you ended up with things like that that you had to check out the whole SVN repository with all tags and the trunk.
And then you did a funny thing at import time: you imported a specific tag, a specific version of this thing. It worked surprisingly well. When somebody first showed me this, I thought this couldn't work at all, but it worked pretty well for us, actually. Yeah, I was shocked. And it's of course a bit of a maintenance hell,
because you have to check out the whole repository on every machine. You have to take care that in tags you only import from other tags, while from trunk you're allowed to import whatever you like. Yeah, then some other pain point was python.net. It's a really awesome tool.
It allows you to call into any existing .NET code. We have a lot of C# .NET code which is used in production and is of high quality, and you can use all this power from Python. Except it's not so awesome if those .NET libraries have interdependencies. You have this Python module which uses some .NET libraries and ships with them,
you check it out, your SVN in it, check out all the DLLs. And then you have this other module which uses another version of the same DLL and you have the classic diamond dependency hell. This leads to random behavior at runtime.
So yeah, the import order is suddenly important which library gets loaded first, that you have the newer version and stuff like that. It was really a lot of pain to debug this.
So yeah, another thing was that this heavy reuse of production code led to some massive, massive over-engineering. This is an example of an in-house developed test platform called Pilatus; the one before was called Rigi. They're just two mountains in Switzerland. And they're used both in production and in the R&D labs.
And we have to talk to this thing, which is an embedded system. We use an RPC framework, which just means you can call functions over the network. So it's connected via TCP/IP over Ethernet, and you can just call functions over the network. And we use a middleware called ZeroC Ice, where you can define interfaces
and then it generates code for C sharp or C++ or whatever you like. And of course there was lots of C sharp code around which was used in production where this machine was used. And then yeah, let's use all this awesome code. I mean, it's existing, we should use it in the lab. So just put some Python on it.
Then we have this awesome C sharp framework and then we put an awesome Python wrapper which was very Pythonic on it. And then we can write awesome Python applications on it. So until you need to change something in the firmware and it needs to propagate all the way to stack up. So you add a new feature in the firmware on the embedded device. Then you need to update the C sharp framework.
You need to update the Python wrapper, and then you can update your application, and a week is gone and three people were busy. And actually, in the lab you need low-level access, because you want to experiment at a low level. Ten minutes left, okay. So I call this lasagna code. This is what lasagna looks like.
It's too many layers. I'm Italian, so it's okay. Yeah, the solution: we already have this network middleware which can generate bindings for different languages. So we just generate the Python bindings for it and use them directly. So we have no interference with .NET, we use libraries which were written in Python,
and we have immediate access to every functionality which gets implemented in the firmware, and it's as low-level as you want. So yeah, the lessons learned: don't use a big Python distribution which is piles and piles of libraries; standardize your base install and keep it up to date, so the users will actually use it and not just update by themselves.
And if something is simple to implement in pure Python, do it. And if you want to have reusable codes, build a proper Python package. So now how do we try to achieve this? This is actually an ongoing process.
First, we set up a Python user group called PUG, with experienced Python users from every group at Sensirion. And since it's called a PUG, we have a nice little mascot. Yeah, it's basically used to collect use cases, to collect knowledge, and everything else.
Also, we provide infrastructure for the other users, we coordinate the updates of the base installation, and we collect the requirements and try to implement reusable packages for every group to use.
So for that, we need some packaging infrastructure. We use devpi, which is a self-hosted PyPI server and a packaging, testing and release tool. What's super amazing about it is that it can mirror the whole of pypi.org. Every time you fetch a package which was never fetched before, it just downloads it from pypi.org, and if you request it again, it gets served from your local network,
which is way faster. And then we set up some staging and stable index per group where you just can upload your own packages. And there are some packages which are really difficult to compile, especially on Windows. There's NumPy, SciPy, and you can compile them with optimizations for your CPUs and everything to gain performance.
We provide our own wheels for these hard-to-compile packages. The index structure looks a bit like this. The main thing is, we have the Sensirion stable index, which just pulls packages from every group's stable repository and from the pypi.org mirror.
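Consuming such an index from a client machine then comes down to a one-time pip configuration. A sketch of what that might look like — the host name and index names are placeholders, not Sensirion's actual setup:

```ini
; pip.ini (Windows) / pip.conf (Linux): point pip at the company devpi index.
; The stable index aggregates each group's stable packages and transparently
; falls back to the pypi.org mirror for everything else.
[global]
index-url = http://devpi.example.local/root/stable/+simple/
```

Uploading to a group's staging index would then typically go through devpi's client tooling (`devpi use`, `devpi login`, `devpi upload`) rather than pip.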
And every group has their staging area where they can update and test their own packages and test it and push them to stable when they're ready. So this also needs some CI. So we are currently setting up GitLab. It's still in the evaluation phase, but the Python users are already using it for everything.
So where you can, every change in the code gets built, gets packaged for Windows and Linux and gets deployed to staging directly. And if you like it manually, you can then deploy it to stable. It looks pretty awesome for us. So the whole other thing is standardization.
Lots of engineers use comma-separated value formats for data storage. So we created our own internal standard for this kind of data. And it's basically just CSV with metadata, standardized metadata. And the awesome thing about EDF files
is you can just double-click it and open it in Excel and it shows up. This was a really important feature. And yeah, you can see some small examples. You have just the version, which is a must-have, the date, and it's strongly typed, which is important when parsing the CSV files. So you know what column has which type.
This improves performance and stability, since pandas doesn't have to guess what type a column should be. So yeah, we store these EDF files in a standardized storage place and index them with Solr. Solr is a kind of search engine; it's a Google for your home, or something.
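The exact EDF layout isn't shown in the talk, but a reader for this kind of typed CSV with a metadata header could look roughly like this. The `#`-prefixed metadata lines, the `name:type` column syntax, and all file contents are invented for illustration:

```python
import csv
import io

# Invented example of an EDF-style file: metadata lines, then a typed
# header row, then plain CSV data that Excel could still open.
SAMPLE = """\
# Version: 1.0
# Date: 2016-02-17
sensor_id:str,temperature:float,count:int
SHT31-001,23.5,3
SHT31-002,24.1,7
"""

CASTS = {"str": str, "float": float, "int": int}

def read_edf(text):
    """Parse metadata, typed column names, and rows from an EDF-like file."""
    meta, data_lines = {}, []
    for line in text.splitlines():
        if line.startswith("# "):
            key, _, value = line[2:].partition(": ")
            meta[key] = value
        elif line:
            data_lines.append(line)
    reader = csv.reader(io.StringIO("\n".join(data_lines)))
    header = next(reader)
    names, types = zip(*(col.split(":") for col in header))
    rows = [{n: CASTS[t](v) for n, t, v in zip(names, types, row)}
            for row in reader]
    return meta, rows

meta, rows = read_edf(SAMPLE)
print(meta["Version"])   # mandatory version field from the metadata header
print(rows[0])           # values already carry their declared types
```

Because the types are declared in the file, every consumer (including a pandas `read_csv` with an explicit `dtype` mapping) parses the columns identically instead of guessing.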
So we provide modules to save data in this backend, called lab data, and have an EDF crawler for Solr, which searches for the files and indexes their metadata. Then if you want to retrieve data, you can just search through this index
and retrieve the data files. We'll quickly show you how this works. You have this data access module and you can search for keywords like dummy file type and training, and then you receive the files which match this metadata. And then you can, if you search for specific measurement,
you can search for a sensor ID. And in this example, it's training dummy. So you want every data starting from this date and then just pick the three newest items and yeah, you get the data directly. This really eases the retrieval of data you've stored.
So I'm basically finished. Yeah, Python is really awesome for automated testing in the lab, data analysis and creating wonderfully beautiful plots. And it's important to establish a common base of packages around your company, but you should keep it up to date.
And yeah, use proper Python packages for reusable code, because if you have quality issues and you need to redo an old measurement, you want to be able to actually restore, in a virtualenv or something, an old state of your libraries with which you can do the same measurement. And it's also important to standardize your data format, so that you can easily exchange your data.
So that's it. Thank you very much. All right, thanks a lot. Do we have any questions over there?
How did you convince the management and IT team to support this idea? I mean, you need a server, you need all the infrastructure to support it.
A lot of it was basically just doing it. We also had to do quite some lobbying so that we could create this Python user group, and so that we get paid time to actually run the group and get everybody together once in a while.
And yeah, it took some convincing, but I mean, I measure success in like support requests per week. And I try to keep this number low and it gets lower since we started the Python user group. So I get less complaints about not working Python code. I mean, this really pays off. And these benefits, you can not really measure them,
but you can communicate them. And it's convincing, yeah. Another question? I just had a very technical question, which is you mentioned that you were building your own wheels for some of the packages. Yes.
And I have the feeling that things have improved vastly over the last few years in this direction. It used to be really nasty. So can you comment on what specific challenges you're facing that pushed you to build your own wheels? Mostly, if you just do pip install on Windows, it sometimes tries to compile some packages.
Mostly there are provided wheels on pypi.org, but we also enable lots of optimizations for modern CPUs, and the packages on pypi.org do not enable these optimizations. And if you have one package with the optimizations enabled and another one without, then they don't play well together.
So we need to provide them for SciPy, NumPy, and... Are you sure that this is still true? I mean, that was true four years ago, but... Maybe we need to check again, I'm not really sure. I'd be interested. We just had this problem and solved it, and stuck to the solution. Maybe we should re-evaluate; it would ease a lot of stuff.
Yeah, see you later. Do we have more questions over there?
There's a question regarding the C++ Python interfacing. Could you comment on using Ice? What was the question again? I have seen on one of your slides
that you use for Python C++ interfacing Ice. It's not actually the interface to C++, but it's the firmware, which is an embedded system. It just runs a server, which provides an RPC interface. So lots of the testing code runs in the embedded system.
And yeah, so you just call it via the Ice RPC framework from outside, over the network. Is this answer enough? Oops. Any other questions over there?
I was wondering, you talked about how you standardized the packages that the people are using. Have you also done anything for the interpreter itself
so that this gets deployed automatically or something that you use a standard installation? Yes, we have a standard installation which is based on WinPython, which is just like some portable version of Python.
And we have this software deployment system for Microsoft Windows, which we use to distribute this WinPython installation. So we could theoretically push updates to everyone, but we don't do that; you have to update manually, actually.
Any other questions? Yeah. Not a Python-related question, more a hardware-related one. You mentioned that you're printing chips. Is it done by yourselves or outsourced? What? The printing on the wafers, is it done inside the company,
or is it outsourced? The production of the wafer itself? The production of the wafer itself is done somewhere in Taiwan, I think. No, I mean not the wafer, the printing on it, like lithography. We do a lot of wafer processing in Switzerland, in Stäfa. I don't know the exact details.
By this company? By the company? By Sensirion, yes. And what lithography machine do you use? I have no idea. I think they actually give them cute names, and one is called Bercha. That's all I know. Thank you. I guess we have time for one more question now.
All right, let's thank Rafael. Thank you.