
HPy: The Future of Python Native Extensions


Formal Metadata

Title
HPy: The Future of Python Native Extensions
Title of Series
Number of Parts
141
Author
License
CC Attribution - NonCommercial - ShareAlike 4.0 International:
You are free to use, adapt and copy, distribute and transmit the work or content in adapted or unchanged form for any legal and non-commercial purpose, as long as the work is attributed to the author in the manner specified by the author or licensor, and the work or content is shared, also in adapted form, only under the conditions of this license.
Identifiers
Publisher
Release Date
Language

Content Metadata

Subject Area
Genre
Abstract
Updating Python versions often forces us to update native extensions at the same time. But what if you need to update Python because of a security issue, but cannot (yet) move to a newer version of a dependency? Or you are running a proprietary binary extension that cannot easily be recompiled? The HPy project provides a better C extension API for Python. It compiles to binaries that work across all versions of CPython, PyPy, and GraalPy. HPy makes porting from the existing C API easy, and its design ensures that the binaries we produce today stay binary compatible with future Python versions. NumPy is the single largest direct user of the CPython C API we know of. After over 2 years of work and more than 30k lines of code ported, we can demonstrate NumPy running its tests and benchmarks with HPy. We will show the same NumPy binary run on multiple CPython versions and GraalPy. And we will discuss performance characteristics of this port across CPython, GraalPy, and PyPy.
Transcript: English (auto-generated)
Yeah, so good afternoon, everyone. As we have already been introduced: I'm Florian, and this is my colleague, Stefan. We work at Oracle Labs as software engineers in the Graal team. In particular, we work on GraalPy, which is a Java-based implementation of Python. And we are also HPy core developers. We weren't in the group of the HPy founders, but we joined very early, so we have been there almost since the beginning. This talk is about HPy; it tries to motivate HPy and introduces it.
And we also try to show you its benefits, and in the end, we want to convince you to use it. So what do we expect from the audience? It's not strictly necessary, but we think it's beneficial if you are an experienced Python programmer, maybe one who even wrote C extensions. You should know C a little bit, the memory model and so on, and have some basic understanding of the CPython internals and what the CPython headers look like. But I think the previous talks made a good impression of that, so it should be fine.
So let me quickly try to motivate HPy. CPython, as we already heard today, is the reference implementation of Python, a bytecode interpreter written in C. There are several alternative runtimes, for example GraalPy and PyPy, and you may know some of these. Most of them try to improve Python execution speed through optimizations such as a JIT compiler, different data structures, or a moving GC. And as it turns out, some of those projects were pretty successful in doing so. For example, here is a chart where we run the pyperformance benchmarks on GraalPy, and we are on average about 4x faster than CPython; note that this is compared to CPython 3.10, so the newest optimizations from Mark are not included yet.
But I need to note that the pyperformance benchmark suite only contains pure Python code, as far as I know. So, the Python C API. Since Python became very popular and numerical computing started to use it, it quickly turned out that pure Python performance is maybe not sufficient. So the first C extensions appeared, because since CPython is also written in C, that's a perfect fit, right? CPython kind of just started to allow C extensions, and there was not really a design phase to
do so. So what happened is that C extensions used existing internal APIs, and there are some problematic points with the C API as it exists. For example, it exposes a lot of implementation details, as we heard already today: it exposes data structures and the fields of data structures; it exposes reference counting, so the lifetime of objects is managed by reference counting; and it exposes that objects are referred to by C pointers, which means that you know the memory location of objects and can make further assumptions from that. All of this happens inside C extensions, and that makes it very hard for alternative runtimes to implement and support C extensions. At this point, I want to refer to Victor Stinner's PEP 620, which really nicely summarizes a lot of the problems.
So let's just pick one example of a problem: reference counting. GraalPy is written in Java, and Java has some of the most advanced and mature GC implementations, so it's really bad that we can't let our GC do its proper work on C extensions. Reference counting basically prevents the Java GC from being used, since reference counting means that, to a certain degree, you manually manage the life cycle of objects, and the GC then cannot collect the garbage, because you define when it's collectible. And a GC is not only about collecting garbage; it's basically also a memory manager. State-of-the-art garbage collectors can do super fast allocations and deallocations, have minimal to no pauses of the application, and can make use
of multi-threading. So the question now is why you as a user should care about this, and for that I have one number for you. There is gcbench.py, which is a kind of benchmark for the GC, and on this one GraalPy is 17 times faster than CPython; most notably, PyPy is also about 25x faster. So it does make sense to have alternative implementations that can, at some point, do this kind of thing a little bit better.
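To make concrete what a gcbench-style workload stresses, here is a minimal sketch in the same spirit; this is an illustrative toy, not the actual gcbench.py, and the sizes are made up. It allocates lots of short-lived binary trees, so nearly all the time goes into allocation and garbage collection, which is exactly where a fast, moving GC shines.

```python
class Node:
    """A tiny tree node; __slots__ keeps allocations small."""
    __slots__ = ("left", "right")

    def __init__(self, left=None, right=None):
        self.left = left
        self.right = right


def make_tree(depth):
    """Build a full binary tree of the given depth (2**(depth+1) - 1 nodes)."""
    if depth <= 0:
        return Node()
    return Node(make_tree(depth - 1), make_tree(depth - 1))


def count(node):
    """Count the nodes in a tree."""
    if node is None:
        return 0
    return 1 + count(node.left) + count(node.right)


def stress(iterations=100, depth=8):
    """Repeatedly build and drop trees; each tree becomes garbage
    as soon as the next iteration rebinds the local variable."""
    total = 0
    for _ in range(iterations):
        tree = make_tree(depth)
        total += count(tree)
    return total
```

On a runtime with a generational or moving GC, the short-lived trees die young and are reclaimed cheaply; with pure reference counting, every node pays explicit count updates on creation and destruction.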
Okay, so let's now switch to HPy. HPy is a novel C API for writing C extensions: instead of including Python.h, you include hpy.h. The project is funded, directly or indirectly, via Open Collective. HPy tries to be a more abstract API: it tries to hide the implementation details, it aims to be faster on alternative runtimes and easier to implement on alternative runtimes, and a dedicated goal is to be GC friendly. We also decided that HPy should have zero overhead on CPython, because we know that if there is a performance penalty just from switching to HPy, you won't use it, so we really try hard to fulfill this goal and to reduce the burden of switching to HPy. Also, there needs to be an incremental migration path, because porting one big C extension to HPy all at once is almost impossible and very error prone, so you would probably just give up on it. HPy tries to be faster on alternative runtimes, as mentioned, and HPy wants to provide a better debugging experience. A very important goal is that HPy provides the universal API, which allows you to build one binary that can run on multiple interpreters. And the other way around, we also provide backwards compatibility, which means you
can run different HPy versions in the same interpreter. Okay, so how does HPy look? It's very simple; I hope you can read the example. You start by just including hpy.h instead of Python.h, as I mentioned, and then we write a very simple C function that just creates a Python unicode string out of a C string and returns it. In HPy, we use some, let's say, sugar to define the methods: here it's a macro, HPyDef_METH, which declares that there should be a method whose Python attribute is called say_hello, that it takes no arguments, and that its implementation is, by convention, the function say_hello_impl. Then we just need to register it in the module's list of methods.
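Put together, the example described here looks roughly like the following sketch. It needs the HPy headers to build, and the exact macro signatures have varied between HPy releases, so take the details as an approximation of the style shown on the slide rather than copy-paste-ready code:

```c
#include <hpy.h>

/* Declares a method named "say_hello" taking no arguments; by convention
   the macro expects the implementation to be called say_hello_impl. */
HPyDef_METH(say_hello, "say_hello", HPyFunc_NOARGS)
static HPy say_hello_impl(HPyContext *ctx, HPy self)
{
    /* Create a Python unicode string from a C string and return it. */
    return HPyUnicode_FromString(ctx, "hello from HPy");
}

/* Register the method in the module's list of defines. */
static HPyDef *module_defines[] = {
    &say_hello,
    NULL,
};

static HPyModuleDef moduledef = {
    .doc = "minimal HPy example",
    .size = 0,
    .defines = module_defines,
};

/* Another macro generates the module init boilerplate. */
HPy_MODINIT(hello, moduledef)
```

In the matching setup.py, the extension would be listed under hpy_ext_modules instead of ext_modules, with hpy in setup_requires.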
In the end, you use another macro to generate the module init, and that's it. So it's very similar to the C API, and that's by intention, of course. On the other side, you see a little setup.py showing how to build with HPy. The difference is basically that instead of ext_modules, you register your extension under hpy_ext_modules, and unfortunately you need to depend on the hpy package. At some point, if CPython maybe takes over HPy, then you can perhaps just drop that, but let's see. Then, in the end, you can just run this setup.py, and there is an additional option available where you can choose the API mode you want to compile for. Okay, so how do we reach zero overhead on CPython? There are multiple compilation modes. The first, and most important one, is the universal API, which means, as I already mentioned, you can build one binary for
multiple interpreters. This works by having an HPyContext, which is kind of a function table, and you do all the calls indirectly through this context. So this would be the path here: you have your extension, you compile it for the universal API, the binary gets our own ABI tag, and then you can run it on the different interpreters.
HPy also provides custom APIs. A custom API means we map HPy API functions to interpreter-specific API functions. For example, we already did that for the CPython API, which means we map HPy API calls to C API functions of CPython; for instance, HPyUnicode_FromString becomes PyUnicode_FromString. So there is no runtime overhead involved, and in the end it's just a compile-time dependency. Just to show you the path: you compile your extension in this mode, you get a CPython-specific shared library, and usually you can run it only on that one interpreter. In theory, though it's not implemented yet, you could have custom APIs for other Pythons as well, like RustPython.
There has been some work on that, but it's not finished yet. Last but not least, there is the hybrid API, which is the mode you use when you do an incremental migration. It means that you already use HPy but still have some C API function calls, since you are migrating incrementally.
Okay, so how does the incremental migration work? This just sketches the progression you usually follow. You start by converting your module definition to an HPy module definition, and it's very similar: instead of PyModuleDef you use HPyModuleDef, you fill in your fields, and you keep your existing functions, your legacy functions, as legacy methods in the definition. That's basically the first step: you already create an HPy module, but still using the legacy API. Then you can continue with migrating the types to HPy types, which is again very similar: you translate your PyType_Spec to an HPyType_Spec and, again, keep your functions as legacy methods, slots, and members. While doing so, you can always interoperate with the legacy code by using the conversion functions HPy_AsPyObject and HPy_FromPyObject. After each step, you just build your extension in the hybrid API mode, and you can test your whole application after each step.
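As a sketch of what such a bridge looks like during the hybrid phase, the following hypothetical function wraps a not-yet-migrated legacy helper; legacy_helper is an invented name, and the sketch assumes both the HPy and CPython headers are available, as is the case in hybrid mode:

```c
#include <hpy.h>
#include <Python.h>

/* Hypothetical legacy function still written against the CPython C API. */
extern PyObject *legacy_helper(PyObject *arg);

static HPy call_legacy(HPyContext *ctx, HPy h_arg)
{
    /* Cross from the HPy world into the legacy world... */
    PyObject *arg = HPy_AsPyObject(ctx, h_arg);  /* returns a new reference */
    PyObject *res = legacy_helper(arg);
    Py_DECREF(arg);
    if (res == NULL)
        return HPy_NULL;  /* propagate the error set by the legacy code */
    /* ...and back: wrap the PyObject* in a fresh HPy handle. */
    HPy h_res = HPy_FromPyObject(ctx, res);
    Py_DECREF(res);
    return h_res;
}
```

Once legacy_helper itself is ported, the conversions disappear, and the extension can eventually drop the Python.h include, leaving it ready for the universal API.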
Okay, so a few performance numbers. We already ported kiwisolver and ran its only benchmark, I think, suggest_value. The blue line is the C API version, the original kiwisolver, and you see it's at about 0.135. In the CPython API mode we are almost there; it's even a bit faster, but I wouldn't read too much into that, because there is some measurement error going on, of course. So basically the difference is very low. Then, expectedly, the universal API is a bit slower, but that could also just be error. Still, this already convinced us, or gives us a good feeling, that the CPython API mode is really the best,
gives you the best performance on CPython. We also started to migrate NumPy, and did some measurements there. It's not done yet, so there are still lots of benchmarks that use just the C API, but some of them already use HPy. This is an in-between step, but we keep observing it: while we migrate NumPy, we look at the benchmarks. Here you see that in the CPython API mode, the median over all benchmarks is almost at one, which means it's basically as fast as the C API version. In the hybrid API mode, where the calls go through another indirection, we are a bit slower, like 2%, and the jitter is also a bit higher, but that is already a good outcome for us, so we will continue to work on NumPy. Okay, so the debug mode is the thing where we try to provide
a better debugging experience; it could also be called a strict mode. It's an optional runtime mode that you enable at load time of the module, so you don't need to recompile anything. It strictly enforces the HPy contract and does additional bookkeeping of resources. The goal here is to prevent unintentional misuse of the API that happens to work on some interpreter because of some optimization. Right now, our debug mode is able to detect problems like leaked handles, usage after close, and lifetime issues of data pointers: for example, if you get the data pointer of a bytes object, that data is read-only, so it also checks whether you write into it. You may also not store the HPyContext somewhere and reuse it, and we check that too. So that's the mode where you test
whether your extension is ready for all interpreters. The most useful feature of the debug mode, I think, is the leak detector. I wrote a simple example here where we just create a Python integer from a C long, and then we simply forget the close. You can use the leak detector by importing LeakDetector from hpy.debug, running it as a context manager, and invoking your C extension inside it. In the end you get an HPyLeakError, because it detects: okay, you created a handle, but you never closed it, so that could be a problem. And if you raise the stack trace limit, it will also show you the stack trace where this handle was created. Okay, so for the universal API, I want to quickly give a demo. I hope this works nicely.
So here, I built NumPy with Python 3.11 in the hybrid mode; sorry, I don't have it on my screen. Ah, that's already GraalPy, sorry; I wanted to start with, again, sorry. Okay, so here I have 3.9, and I just run the 3.11 hybrid binary with an example where we know we don't trigger any problematic code path for now, since, you know, it's an intermediate step. We can just run this binary on 3.9, and you can also see the hpy0 tag in the file name, which is basically the HPy ABI version, and cp311, since we built it with 3.11. We can do the same for Python 3.10: it uses the same path and the same binary, and it works. Then we can do it for 3.11, which is the one I built it for, of course; that's expected to work, right? And then, most notably, there is GraalPy, which is our tool, and we can also run it here, hopefully. Yes, okay. So that's already a bit impressive because, ah, sorry,
we can now build one binary and use it on very different interpreters. I mean, GraalPy is a completely different implementation. So, some words about NumPy on HPy. This one is a very hard one to migrate, of course, and we chose NumPy because we think that if we can do NumPy, we can migrate basically any package to HPy. We have already invested almost one year of a full-time-equivalent employee, and NumPy is just huge: it has 180,000 lines of C code, 80,000 of which use the C API. We have changed 40,000 lines of code already, and 15,000 lines of code are fully migrated to HPy, so there is still a lot of work to do. NumPy also has its own C API, so you can use NumPy from other C extensions; there are 261 functions or entries, and we have already migrated 118, so roughly half. Still work to do, but we think the hardest part has already been done. We migrated most of the types, or at least the hardest ones; for example, we needed metaclass support for heap types, and we were in the group that initiated that for CPython, which is, by the way, now merged. If you want to see our progress, please just check it out; it's all public in the HPy project. So what's the current status? HPy is currently at version 0.9.
We have partially migrated several different packages, like ujson, Matplotlib, psutil, kiwisolver, Pillow, piconumpy (which is an external contribution), and NumPy, as I mentioned. We also plan to have a Cython backend: we already created a proof of concept, and we will have a person working on that in the next half year, so I hope there will be some real progress. So now I switch over to my colleague. So, I will say a bit about future plans and about the HPy community.
Right now we want to concentrate on the NumPy-on-HPy work, and we want to do this step by step, working with the NumPy developers. After that, once we think we have figured everything out, we will hopefully publish our first stable release. Now something about the community. Florian, can you help me here? I'm not a native Mac user.
Yeah, so I was going to say: HPy lives on GitHub. HPy itself is basically a CPython extension; we first develop the functionality for CPython, so essentially we are doing this translation from the HPy API to the CPython API. The work that people do in this repository is the design of the HPy API and then the implementation for CPython. So if you know how to develop CPython extensions, you can already find your way around this repository, I would hope, and you can already contribute.
There are issues there, and some of them can be quite suitable as starter tasks. We also have documentation and a landing page, but we are VM developers and compiler developers and so on, so we're not really good at web design and things like that. That's also an area where we could use some contributions. Also, it would be great to extend the set of people who contribute to the design of HPy with people from other Python implementations and from binding tools like pybind11, et cetera. Yeah, right. Okay, how do I switch back to the slides? Okay, so that's basically it from our side. Thank you for your attention, and we are happy to take questions, or hand over.
Thank you very much. We still have some minutes for questions, so please queue up. Maybe the first one in the back. Hello, I actually have two questions, sorry about that. First I wanted to say this was really impressive, so thank you for that; that was not a question. And now the questions. When you have a single extension module that you can load in various interpreters, is there a runtime dependency on a separate package, or is everything compiled in? For example, with SIP, which is used in PyQt and such, you would have different extension modules for SIP itself, and then other extension modules would be universal and just use those. Or is it a single, dependency-free thing that includes all of the stuff? So, the C extension itself is usually self-contained. I mean, if the C extension links to some libraries, some compression library, for example, then of course there is a dependency, but there is no runtime dependency on HPy itself, except of course on CPython, where you need to have the hpy package installed, because on CPython, HPy itself is another C extension. So you just need to install hpy on CPython, that's it.
For GraalPy, for example, we have intrinsic support for HPy, so there is no dependency. I hope that answers the question. I think it does, thank you. And the second question: you say you ported a large part of NumPy, Pillow, and other projects; has this been merged into the projects themselves, or is it just a fork for now, for most of them? For now it's a fork. We plan to upstream it, but of course these package authors want to have confidence that HPy is something that will stay, and we are in the process of convincing them and getting users. So yeah, right now it's in forks; it's public, but in forks. And for NumPy we have really good support: Matti, one of the NumPy core developers, is basically on our side, and Sebastian Berg, another core developer, is very interested. But there is still a lot of work to do, right? Getting all the benchmarks to stay at a good level, and getting the tests to pass; there's still a lot of work to do. Thank you. Thank you.
Now in the front, please. To what extent would it be possible to build tools to port CPython extensions to HPy, assuming the CPython extensions are reasonably well behaved? That's a good question. We think that, to a large extent, you could automate the very simple boilerplate tasks. We had already hired an intern who was supposed to build such a tool, but they cancelled at the last minute; we still plan to do it. So, a simple tool that would convert your module specification, convert your types, and everything that can be done mechanically; then something is left over, of course, and you need to fix that up manually. That would be the plan, and I think we can do a lot. Maybe I would add that the very first steps of the migration are things like migrating to heap types, or migrating to multi-phase module initialization. Some of these steps you have to take anyway if you want your extension to be based on modern CPython APIs. Those steps are a bit harder to automate, but you would want to do them anyway. Thank you. We have a question also in the back. So I wanted to ask: how safe is it for the maintainer of a package with C extensions to migrate to HPy right now?
Should I wait until there is at least a 0.9 final? Probably, yes. We are going for a stable release, of course, and I would wait for that. But I mean, you can start now and test. As for the safety, it's hard to tell. If no one ever uses HPy, it's hard to keep the project alive, of course, but we have been working on it for four years now, and it's still there, so I am confident that it will stay. And the risk is limited, because you can also switch back again: from HPy you can generate CPython C API calls again, so you could just step back. Maybe one thing I would add about the risks: there are multiple projects involved in HPy, so I think that should lower the risk factor. It's open source, it's on GitHub, so we are trying to do as much as we can to lower the risk. And we are aiming for a stable release; at that point I would say it should be safe to use, and at that point it would also be very useful to get feedback.
Thank you. Thank you, now we have a question in the front. Thank you for your talk. Maybe a very pragmatic question: imagine you migrate NumPy to HPy, right? And I have a library that depends on NumPy, let's say something like Pandas, that is not yet migrated. Can I already switch out the dependency and get benefits from your work? Ooh, yeah, hard to answer. I mean, if your module is still on the C API, then you are bound to those restrictions, right? You can use NumPy-on-HPy, because you can just have a little bit of glue code that does the conversion as I showed: you can use HPy_AsPyObject and HPy_FromPyObject, which convert between the two. But if Pandas is not on HPy, it's hard to make use of all the HPy benefits. It's still possible, though: you can use the debug mode for your dependency that is on HPy, so in this case you can use the debug mode for NumPy and find some leaked handles there, but you cannot use it for Pandas, of course. So I would say you can make use of the benefits partially. Okay, thank you.
We have the last two questions. First, I will read one that we got remotely: how is PEP 703 affecting HPy? So, the no-GIL PEP? We discussed it a lot. HPy always tries to be prepared for that; since it's not yet implemented, it's hard for us to do anything, but yes, HPy should be, and we try to make it, ready for the no-GIL PEP. This is being recorded, you know, so... Maybe I would just quickly add to that: because of the design of HPy, which is GC friendly and abstracts things away, if CPython were using only HPy right now and all the extensions were on HPy, this PEP would be in a much, much better place, because they could do much better things and even implement a GC instead of reference counting, which would work better with free threading.
And for multiple interpreters as well. So we have now a question in the back. Hi, thanks for the interesting talk. I would like to ask a maybe simple question: what does the H in the HPy name mean? Well, it's on the website, so I assume you didn't visit it so far. It stands for handle, so it's handle-Py, basically. The core idea is that instead of PyObject* pointers to the actual memory where the objects live, you work with handles, and those are abstractions: you shouldn't be able to see through them into the implementation details. So quickly, the last question.
Okay, thanks. I think you've answered this before, probably, but just to clarify: with the GraalPy implementation, are you able to leverage the Java GC mechanisms, or are you still locked into reference counting? For HPy, no, we are not locked into reference counting. I didn't go into too much detail here, but in HPy we have another kind of handle, the HPyField, which you use if you store, in your C structure, some other object that is tied to the lifetime of the owning object; that is something the GC knows about and can use. So yes, absolutely, we make use of it.
Okay, thank you. Cool, so we are running out of time. I'm really sorry, but maybe you can catch them afterwards. I guess you will be available during the conference, so if you have any other questions, maybe you can ask them. So let's thank them again. Thank you.