Long-term verifiable multi-party computation

Formal Metadata

Title: Long-term verifiable multi-party computation
Number of Parts: 18
License: CC Attribution 3.0 Germany. You are free to use, adapt and copy, distribute and transmit the work or content in adapted or unchanged form for any legal purpose as long as the work is attributed to the author in the manner specified by the author or licensor.
Transcript: English (auto-generated)
Thank you very much for the introduction. So a longstanding concern for cryptographers is to find interesting balances between correctness or verifiability of computation and privacy. So if we want to illustrate this with a simple example, we can take the example of auctions.
So we could have an auction in this room where everyone in the room just submitted bids loud and clear, everybody can listen, everybody can take notes. And then at the end of the auction, well, we know who made the highest bid and we can give this beautiful painting to the winner. So then we have something that is 100% verifiable.
Everybody could listen to everything, but we have no privacy at all. If you decide to organize a sealed-bid auction instead, you would have all the parties submitting their bids in a sealed envelope. I would take all these envelopes
and then I will, well, allegedly open the envelopes and see who made the highest bid, or maybe I will just destroy everything, but you will not know. And then I will declare, well, this guy won and he will have the beautiful painting. So in that process, we have 100% privacy. Nobody knows what happened,
especially if I destroy everything without opening the envelopes. We have a result, but obviously the result has very little chance of being correct. So we have 0% verifiability. And cryptography is really trying to reconcile those two goals and to have something in between those two settings.
The most traditional solution that cryptographers propose to this problem is probably to use secure multi-party computation, something proposed in the early 80s. Here you will have all parties, they will have their own secret inputs. They would like to know something, a function of those secret inputs.
And they would like to learn just one thing, the output of the function of those secret inputs. So those parties will get together and run cryptographic protocols. And in the most traditional way, if you want to have information theoretic, long-term privacy,
what you will do is start with some secret sharing protocol. So all the parties will share the secret with the others in an information theoretically secure way. So everybody will know a share of each secret, but that doesn't give any information about the secret itself. And once we have secret shared versions
of all the party secrets, we will represent the function that we want to evaluate on top of the secrets as a circuit. And then we will run protocols between all the parties to evaluate a secret-shared version of each of the wires of the circuit. So we'll go one wire after the other,
until we reach the wire that contains the output of the function that we would like to evaluate. And there we'll just gather the shares that have been computed, open them, and recover the output of the function. So in terms of correctness and privacy, that's really interesting. We have trust that is completely distributed
between the parties. And typically, as long as the majority of those people are honest, we are guaranteed to have correctness, to have privacy. So that's really nice. And in terms of efficiency, we saw really tremendous advances during the last 10 years, with people really improving those protocols that are executed between the parties
up to the possibility of evaluating circuits that contain millions of Boolean gates fairly efficiently. This still has some important limitations. One important one is that, well, you can improve as much as you want.
The cost for each of those parties for running the protocol will remain proportional to the size of the circuit. So they will need to run the protocol repeatedly, a number of instances of the protocol that is proportional to the size of the circuit, and also to the number of parties that we have here. So if you want to have something that is run like an auction between hundreds of people,
that can become extremely expensive in the end. Another difficulty is that those protocols are really made for evaluating circuits. So you need to decide which circuit you evaluate in advance. So this means that it's very difficult to make data-dependent branching.
If you want to say, depending on the value of x1, I would like to perform this computation or that computation, well, if those computations do not have the same circuit representation, that will become extremely visible. So basically, the only thing you can do if you want to hide that value x1, on which you decide which circuit to evaluate, well, you basically need to evaluate both circuits.
So that may be feasible for one branching. If you have dozens of branchings, you will have an exponential explosion that will not be feasible. So as soon as you don't have a function that can be easily represented with a circuit, if you have something that would be much more efficiently solved with dynamic data structures like trees, for instance,
well, this can become extremely complicated. And even for very simple problems like sorting, well, if you try to use most of the standard efficient sorting algorithms, they will include a lot of data-dependent branching and, well, they will not work.
A last important limitation: the previous one is really a fundamental, theoretical limitation, while this one is more practical, a difficulty that people who really try to run and use this in practice faced. Well, you will have a bunch of parties and if you want to have something like really distributed trust,
well, you need all those parties to have an independent verification of the code that they are using to have their own server. So that basically means that each of those parties need to hire their own IT staff, have their own hardware and well, that becomes extremely expensive. So in practice, most of the time, people will just, well, have a distributed computation,
but they will use software and hardware that is provided by a single party. So that kind of defeats the point of having something completely distributed, at least to some extent. Given those difficulties, you can go to the other extreme, the area of verifiable computation,
which attracted a lot of attention during the last 10 years also. Here you have a different setting. You always have multiple parties, but now their inputs become public values and they will have a worker and each of the party will provide its input to the worker and the worker will perform the computation
and will provide the output of the computation together with a proof that the computation has been performed correctly. Many recent protocols follow this approach. The good thing is that you can have very short and very efficiently verifiable proofs that the computation has been performed correctly.
And in particular, the verification is typically much faster than the computation, contrary to what happened here where all the parties need to compute as much as the computation that needs to be made. But the obvious limitation of this approach is that, well, you essentially gave up on privacy.
And another limitation, one that we can hope to see improving with time, is that for the worker, the overhead of building the proof is usually fairly large. So we can only do that for fairly limited amounts of computation.
So that brings us to a third possible setting which we'll try to explore in this work which is privately verifiable computation. So we keep the general idea of verifiable computation in the sense that we still have parties, a single worker.
But now we would like to have these parties keeping their inputs secret. So each party has a secret but now this worker will be trusted at least to some extent. So we'll agree that we will trust this guy for privacy but not for integrity. So we assume that this guy is a trusted party
to some extent. We assume that he will not reveal our secrets but we want that even if it happens that we misplaced our trust, even if this guy fails to provide the trust that we want him to provide, well, we still have the correct results. So we still would like to have a proof
that the result of the computation is correct. So either this worker will be a trusted party or it could be some hardware solution. You could decide to use Intel SGX technology and have something like an enclave in an Intel processor. And well, if Intel does these processors correctly
and if SGX has been designed correctly, well, you know that you will have correctness and privacy but well, maybe there has been some problem in the design of those processors and eventually, well, even if that happens, you can still be sure that you have the correct results but maybe you will have problems with privacy.
So what kind of benefits do we get from that approach? So one thing is that we can manage to have the proofs that the computation has been performed correctly to be perfectly hiding. So the idea is that, well, you give all those data
to this trustworthy worker and you now have a decision to make. So either you say, okay, I trust this worker completely and he will provide the output and no proof at all because well, that may be a problem. The proof may disclose something or you say, well, I really would like to have correctness evidence so I would like to have proofs
but maybe I'm afraid that the proofs will provide some risk of security breaches. So the kind of thing that we do is that we can use verification techniques where we have information theoretic privacy, long-term privacy. So this is a very practical concern. Well, I guess if you are here,
then you share the view that we need to have long-term privacy. A practical example could be the IACR. So the IACR is using Helios for its internal elections and in the standard design of Helios, the idea is that voters will encrypt their votes and then you will have a public bulletin board
where we have the names of the voters and next to each name, we'll see an encrypted version of the vote. So the idea is that this provides a verification mechanism that you can use to verify that your vote is there, that the other votes have been submitted by legitimate voters and that the result is correct. But the IACR was concerned with long-term security
and saying, okay, we have the name, we have a ciphertext encrypting the vote and maybe in 20 years, we will break the encryption scheme and we'll know how everybody voted. So the IACR decided, well, we'll loosen a bit our constraints on verifiability and we'll just have anonymous aliases next to the ciphertexts so that even if in 20 years,
well, the cryptography is broken, people will just see a bulletin board containing anonymous aliases and, well, encrypted data that they can't decrypt; breaking it would just reveal the votes, not the identities of the voters. So, but the cost is that you decrease
the verifiability of the system. Nobody can anymore, except for the election organizer, check who voted. So this would provide a solution. Second thing is that, well, in that setting, we can still have a verification process that is much faster or at least faster than the computation. That will actually depend on the setting
on the application that we have in mind but we will see that in many natural applications, it will be the case. Another good thing is that we can have in terms of cryptography that is used, something that is fairly simple, at least for the parties submitting their inputs. Basically, it's not much more than encryption.
We have something that is constant rounds, so not much to do, no need for the parties here to run servers. They can just submit data to a web server and check all the data on that web server after the end of the computation. And so in terms of deployment,
that can be a much easier setting. So how do we build something like this? So if you consider that the worker is perfectly honest, well, things become extremely easy. We can just say that each party encrypts its secret inputs and sends them to the worker,
the worker decrypts, performs the computation, and since, well, everybody assumes that the worker is honest, we can just trust the output, so that's okay. So that's kind of the trivial case and where things become a bit more interesting is when, well, we want to have this safeguard in case the worker is corrupted.
So there, the way we proceed is by saying that all the parties will use a public bulletin board. So I will use the color blue for what is public and they will all start by publishing commitments to their secret inputs on this board. So those commitments can be information theoretically hiding,
perfectly hiding, and when all those inputs are available, well, the parties can send encryptions of openings of these commitments to the worker. So from this encrypted opening, the worker can decrypt all those things, obtain the openings and obtain all the Xs.
So now the worker can compute in clear very efficiently, maybe using a private algorithm, the output of the function and it can also compute a proof that this output of the function is consistent with the commitments that are displayed there.
And so those two blue things would be posted back on the public bulletin board. And now the parties or any third party auditor can check those blue things and see, okay, this output of the function is consistent with those commitments that were displayed before.
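As a toy end-to-end illustration of this flow, here is a minimal Python sketch for the special case where the function is a sum, using Pedersen-style commitments over a small mod-p subgroup. All parameters are illustrative toys, the encryption of the openings is omitted, and the actual protocol works in pairing groups with sigma proofs for general functions; for an additive function the homomorphic property of the commitments already makes the proof trivial.

```python
import random

# Toy subgroup parameters: p = 2q+1, both prime; g, h generate the
# order-q subgroup. The real protocol uses pairing groups (G1, G2, GT)
# at a proper security level; this is only a sketch.
p, q = 2039, 1019
g, h = 4, 9  # assume nobody knows log_g(h), as Pedersen requires

def commit(m, r):
    """Perfectly hiding Pedersen commitment g^r * h^m mod p."""
    return (pow(g, r, p) * pow(h, m, p)) % p

# 1. Each party posts a commitment to its input on the bulletin board.
inputs = [11, 7, 30]
rands = [random.randrange(q) for _ in inputs]
board = [commit(m, r) for m, r in zip(inputs, rands)]

# 2. Parties send openings to the worker (encryption omitted here);
#    the worker computes the function in the clear -- say, the sum.
y = sum(inputs)

# 3. For this additive function the "proof" is just the combined
#    randomness: the product of the commitments opens to the sum.
R = sum(rands) % q

# 4. Any auditor checks output + proof against the public commitments.
lhs = 1
for c in board:
    lhs = (lhs * c) % p
assert lhs == commit(y, R)
print("sum verified:", y)
```

For anything beyond additions, the worker needs the sigma-protocol proofs described below instead of just revealing combined randomness.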
So this is the high level view of the protocol. So we implemented this. We had to choose specific cryptographic primitives. So we decided to work in a traditional pairing-based cryptography setting, three groups, G1, G2, and GT. And we assume that we have a pairing E that takes two inputs from G1 and G2 and goes into GT.
That is a bilinear map. Then we need to select how we compute the commitments. And we basically compute Pedersen commitments. So we take two generators, two independent generators of G2. And when we have the secret message or secret input M,
well, we just compute these perfectly hiding commitments on that input by selecting a completely random R and doing this multiplication. So G to the R perfectly hides this value here. It's really secret forever. So this really is a Pedersen commitment.
The thing is, the traditional way of opening Pedersen commitments is by providing the values of R and M. For M, well, maybe if M is small, it's not completely necessary, but R, well, R needs to be large. And so we need to provide it if we want to open the Pedersen commitment.
Having this R is actually not very convenient if you want to make proofs of correctness. So we decide to do something different. And now the opening for this commitment is instead G1 to the R. So we use the R, but we take a generator of the other group G1 and the opening will be that value. So now if I want to claim that I can open this
using G1 to the R and M, well, from the M, I can remove that part of the commitment. And I just need to check that this pairing equation is satisfied. So since the pairing is a bilinear operation, both sides are equal to E of G1, G2 to the power R.
So a different way of opening Pedersen commitments. And now we have something where openings are group elements, which is much more convenient. So we can use sigma proofs to make sure that this is non-malleable and that people cannot build a commitment as a function of the commitments of other people.
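To see why the pairing check balances, here is a toy model where every group element g^a is represented by its discrete log a mod q, so the bilinear map e(g1^a, g2^b) = e(g1, g2)^(a*b) becomes plain multiplication of exponents. This only sanity-checks that the equation e(opening, G2) = e(G1, C / H^M) holds; it is not cryptography, and all names and parameters are illustrative (a real implementation needs a pairing-friendly curve library).

```python
import random

q = 1019  # toy prime group order

# Public setup: h2 = g2^s for some s nobody is supposed to know the
# discrete log of. In this exponent model a public element *is* its
# exponent, so "multiplying by the inverse of h2^m" is subtracting s*m.
s = random.randrange(1, q)
r = random.randrange(q)  # commitment randomness
m = 42                   # the committed secret input

# Commitment c = g2^r * h2^m  (in the exponent: r + s*m mod q)
c = (r + s * m) % q

# Group-element opening d = g1^r  (in the exponent: r)
d = r

# Check e(d, g2) == e(g1, c / h2^m); both sides are e(g1, g2)^r.
lhs = (d * 1) % q            # e(g1^r, g2)
rhs = (1 * ((c - s * m) % q)) % q  # e(g1, g2^r) after stripping h2^m
assert lhs == rhs
```

The point of the group-element opening is exactly that this check needs only public group operations and one pairing, never the raw scalar R.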
Next, so this is what we published on the bulletin board, this blue thing. Next, we need to provide an encrypted version of the opening of the commitment to the worker. And now since the opening is just a group element, we can use simple ElGamal encryption to encrypt this G1 to the R, using an H1 that would be an ElGamal public key in G1.
So with simple ElGamal encryption, the worker can decrypt that and, well, recover G1 to the R, that's the opening, and get the message M. Now we need to see how the worker computes the proofs that he performed the computation correctly.
And then again, since all the interesting values are in the exponent with public bases, we can use traditional proof techniques, sigma protocols, which are perfectly hiding. And we implemented it; well, we have something that is additive here, so we do not need to implement anything for addition.
We implemented multiplication, we implemented comparison and range proofs, and well, ways of composing all those proofs into things that can be more complex. So we tried to see how to prove the security of something like that. Well, if we decide that we are in the case
where the worker is honest, well, we assume that the worker is honest, so correctness, well, it basically follows from the assumption that the worker is honest. And privacy, well, we care about all those blue things. We want to make sure that they do not raise
any difficulty in terms of privacy. And now it follows from the fact that we have commitments that are perfectly hiding, and that those sigma proofs that we use to prove the correctness of the results are perfect zero-knowledge also. So they both are perfectly hiding components, we can compose them, we keep something perfectly hiding, so everything is fine.
So now if we move to the corrupted worker case, so now the worker can behave arbitrarily. So in terms of privacy, well, we don't know anything anymore, since this corrupted worker can decrypt all the openings, he will see the inputs of all the parties,
he can do whatever he wants, we cannot control that. So in terms of privacy, well, we don't know anything anymore. But in terms of correctness, well, we still have commitments that are binding, computationally binding, and we have zero-knowledge proofs that are sound. And so from those two properties,
we can obtain guarantees that even if the corrupted worker behaves arbitrarily, if we have something that the parties accept because they see that the proofs are correct, they can trust that the result is correct. So long-term privacy, and in terms of correctness, well, we have commitments that are computationally binding,
we have zero-knowledge proofs that have computational soundness. So maybe in 20, 30, 40 years, people will be able to prove that maybe the result of the computation could have been something different. Basically, we don't care. Well, we know that after 40 or 50 years,
this cryptography will be broken. So, well, nobody will believe those other proofs. And well, in a typical application, having a different result after 40 years will not matter, because the decision depending on the right result will already have been taken and it will be impossible to change anything even if somebody wanted to contest it.
So this is for the security analysis. We implemented this and we tried to make some typical use cases. The first one is related to auctions. So we tried to sort data. So we had a group of clients. Here, I just aggregated all of them into one,
but you need to imagine that these are, well, n different parties. So they will start by submitting commitments on their inputs to the public bulletin board. They will send openings to the worker. So the worker will know all the values X1 to Xn.
And now the worker can do the sorting of all those values and obtain them in sorted order. And if you take the best algorithms that are known today, you will have a complexity of this operation that will be in O of n log n. So we can do that really fast because we are in the plaintext domain, but still we have a complexity like this.
And then the second step for the worker is to prove that this is really the sorted version of all of those commitments. And so, well, here we can take all the matching commitments from the list and he will need to provide proofs that he can open all these commitments in a way that guarantees the ordering.
But now we just need to compute n minus one cryptographic proofs. So the complexity of the work for computing the cryptographic proofs will be O of n. So of course those operations will be more expensive ones, but still, asymptotically, we win something.
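One way to see where the n minus one proofs come from: with Pedersen commitments, the verifier can derive a commitment to each adjacent difference homomorphically from the published list, so the worker only has to range-prove that each difference is non-negative. A sketch with toy parameters; the range proofs themselves are abstracted away, and the asserts just confirm the homomorphic difference property.

```python
import random

# Toy Pedersen setup over the order-q subgroup of Z_p*, p = 2q+1.
p, q = 2039, 1019
g, h = 4, 9  # log_g(h) must be unknown in a real setup

def commit(m, r):
    return (pow(g, r, p) * pow(h, m % q, p)) % p

xs = [30, 7, 11, 25]
rs = [random.randrange(q) for _ in xs]
board = [commit(x, r) for x, r in zip(xs, rs)]

# Worker sorts in the clear (cheap, O(n log n)) and publishes the order.
perm = sorted(range(len(xs)), key=lambda i: xs[i])

# For each adjacent pair the verifier derives the difference commitment
# from public data alone; the worker attaches a range proof to each.
for a, b in zip(perm, perm[1:]):
    diff_c = (board[b] * pow(board[a], p - 2, p)) % p  # c_b / c_a
    # It commits to x_b - x_a under randomness r_b - r_a:
    assert diff_c == commit(xs[b] - xs[a], (rs[b] - rs[a]) % q)
```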
We moved from n log n operations if we needed to do this with multi-party computation to something that is linear. So one very simple case where we can hope for some gain at least when we have a very large number of inputs. Second application, solving linear equation systems.
So this is a component of many optimization problems. You can imagine parties sharing components of a network; each of them will know the capacity of some edges of the network, and they do not want to share this capacity
with the rest of the parties controlling the network. So if you want to solve this kind of equation system, you would have all the As that will be secret values from the parties. Maybe the Bs would be secret or not; that would be application dependent. And the goal is to find the Zs
that are the solution of this equation system. So again, all the parties will commit on the As and then the worker will solve the equation system. So if you use something that is typical for matrix inversion like Gauss-Jordan elimination, this will have cubic complexity.
If you go for the best known algorithm, you will have that kind of complexity, but typically the overhead will be so large that you will probably stick to something like that for a reasonable size. And then, well, the worker will publish Z, which is the output of the function.
And then, well, thanks to the homomorphic property of the commitment scheme, it's extremely easy for the verifiers to just check that they take all those commitments, they raise them to the powers of Z and check that they have an opening to B and that everything works fine. So now the amount of computation
that is required from the clients is basically going through all the committed values in this matrix and raising them to exponents that are the solution of the system. So the number of operations would be quadratic in the number of unknowns in this equation system.
Again, you can see how we win some complexity there. Well, maybe I will skip that. We made a third application, which is finding the shortest path in a graph. Again, all the edges are shared between parties.
Well, we can do similar things. We made a prototype implementation of this. It's available on GitHub. We implemented all those case studies and the cryptography in there. Everything is in Python, simple to read. We made implementations in Python of the elliptic curve
and the pairing-based cryptography. We implemented the three case studies and so we could get some timings. And well, these are our three case studies. We can go for sorting, for instance; we have 10 values to sort, 100 values to sort, 1000 values to sort.
And here we see that the amount of work from the worker is basically roughly the same as the amount of work for the client. So we do not win a lot. And essentially the reason is that, well, sorting 1000 values in the clear, that's basically something that is instant. So this doesn't really count in those timings.
And so these are really the times that our implementation takes for computing the n proofs and verifying the n proofs. So, well, we have something, timings remain reasonable for a reasonably sized auction, but no big win for the clients compared to the worker.
If we move to the problem of solving linear equation systems, we again took a matrix of size, well, 16 rows and 16 columns, 256 rows and columns, and 4,000 rows and columns. So this is already a fairly large matrix, 16 million elements.
Now, the amount of work for the worker, well, it increases fairly fast, and solving a linear equation system of that size starts taking some time. And this is where we see that the gap for the verification starts to count,
because now we see that the verification time becomes considerably lower than the computation time. For the shortest path, well, this is a case where we basically have the same complexity for computing and verifying. So we did not find any good way of improving the verification compared to the computation.
So, well, that's an interesting open problem, I think. So to sum up, well, we have this setting where we find a kind of different balance between correctness and privacy. We have guarantees that the result would be correct
independently of any trust assumption in anyone. The fact that we want to have correctness, that we want to provide proof has really no impact on privacy. All the data that we provide for verifying the correctness are perfectly hiding in the long term.
So verification comes with no extra risk in terms of privacy of the inputs. In terms of deployment, we have something that is fairly simple. As you saw, the parties just need to submit their encrypted inputs. And well, this could be done just using a website.
If you don't want to perform the audit yourself, that's good enough. We have a setting where the proof really depends on the way you want to verify the computation. And in many cases, it's independent of the way you perform the computation. So if your business is into running
some kind of sophisticated optimization algorithm, and that's really an important part of your intellectual property, you can still use your secret implementation as long as you can prove that the result is correct using another technique that would be public. So that can be convenient in many cases.
Um, yeah. And that's essentially it. Thank you. Thank you.