
Encrypted computing in Python using OpenFHE


Formal Metadata

Title
Encrypted computing in Python using OpenFHE
Number of Parts
131
License
CC Attribution - NonCommercial - ShareAlike 3.0 Unported:
You are free to use, adapt and copy, distribute and transmit the work or content in adapted or unchanged form for any legal and non-commercial purpose as long as the work is attributed to the author in the manner specified by the author or licensor and the work or content is shared also in adapted form only under the conditions of this license.

Content Metadata

Abstract
Fully Homomorphic Encryption (FHE) is a privacy-enhancing technology that enables performing computations over encrypted data. FHE has recently seen a lot of progress, and commercial applications of FHE are now available. One of the main application domains for FHE is privacy-preserving machine learning. We introduce a Python interface for OpenFHE, a popular open-source FHE C++ software library that supports all common FHE schemes. OpenFHE is a NumFOCUS-sponsored open-source project that has been authored by a community of well-known FHE cryptographers and software engineers. The talk provides a high-level introduction to FHE and its applications, and then provides an overview of the Python API. Several examples are presented to both illustrate FHE concepts and show the practicality of the technology. More information about the OpenFHE project: * Main website: openfhe.org/ ; * OpenFHE Discourse forum: openfhe.discourse.group/ ; * Main OpenFHE repository: GitHub: openfheorg/openfhe-development ; * OpenFHE organization: GitHub: openfheorg ; * Main OpenFHE design paper: eprint.iacr.org/2022/915
Transcript: English (auto-generated)
Today, we'll be talking about encrypted computing, but predominantly encrypted ML, using the library OpenFHE. So, just what is the library about? It's a fully homomorphic encryption library developed by a community of cryptography researchers and engineers.
And it provides implementations of the common fully homomorphic encryption schemes, including the CKKS scheme for approximate arithmetic on encrypted data. I just wanted to say that at the end of the slides, there are a few resources and links that will be shared. So, if these are not sufficient at the moment in terms of explanations, feel free to go through those papers or YouTube links as well to gain more understanding. And of course, you can reach out to me and the team as well. This is an open-source library sponsored by NumFOCUS. It is designed for efficiency and ease of use, with a focus on enabling
practical applications, predominantly in the fully homomorphic encryption area and privacy-preserving machine learning. Some of the key features: it supports multiple FHE schemes, including CKKS, BGV, BFV and TFHE, and it is optimized for batch processing and SIMD operations as well.
It was originally built in C++, but it also has a Python and a Rust wrapper as of today. And it has active development and collaborations. Being a part of the team, I can tell you that we are still developing a few more features as well. There will be a lightning talk, and there is a call for contributions as well, in case anyone is interested.
Some of the use cases are privacy-preserving machine learning and data analytics, secure multi-party computation, federated learning, and encrypted databases and cloud computing as well. So now, this particular topic falls under the umbrella of privacy-enhancing technologies.
As software developers, we have been pursuing privacy-enhancing technologies in one form or another. Some of them are access control and differential privacy, which adds noise to the data; some hard privacy technologies would be onion routing and VPNs.
And now we also hear about homomorphic encryption, and as an extension of that, fully homomorphic encryption, and also secure multi-party computation. So there are a few more technologies along these lines under the umbrella of privacy-enhancing technologies, and the UN has documentation on PETs, right?
And a subset of that is privacy-preserving machine learning. Typically, the techniques that are used are differential privacy, secure multi-party computation, homomorphic encryption and federated learning, which is a distributed-computing paradigm; it also enables distributed encrypted computation and ML on various nodes, right?
So, moving on. What is homomorphic encryption, typically? It is a cryptographic method. It is also a protocol with one extra layer of operations, which is evaluation. It allows mathematical operations to be performed on encrypted data without decrypting it.
So this actually enables the operation of private ML, that is, ML on encrypted data. This further means that computation can be carried out on the encrypted data and the results can be obtained in encrypted form as well, without the decryption key.
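To make the homomorphic property concrete before we get to OpenFHE, here is a toy sketch that is not part of the library and nothing like the CKKS scheme used later: textbook RSA happens to be multiplicatively homomorphic, so multiplying two ciphertexts gives an encryption of the product of the two plaintexts.

```python
# Toy illustration only (textbook RSA, tiny insecure parameters), not OpenFHE:
# multiplying ciphertexts yields a ciphertext of the product of the plaintexts.
p, q, e = 61, 53, 17
n = p * q                           # public modulus
d = pow(e, -1, (p - 1) * (q - 1))   # private exponent (Python 3.8+ modular inverse)

def encrypt(m):
    return pow(m, e, n)

def decrypt(c):
    return pow(c, d, n)

c1, c2 = encrypt(7), encrypt(6)
c_prod = (c1 * c2) % n              # computation happens on ciphertexts only
assert decrypt(c_prod) == 7 * 6     # the result, 42, was computed under encryption
```

Fully homomorphic schemes such as CKKS go much further: they support both additions and multiplications, on whole vectors of approximate real numbers at once.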
So homomorphic encryption enables data to be processed and analyzed while maintaining the confidentiality of the data, of the ML model, and of the results on that data, right? So this is a typical block diagram of how the flow works.
First is encryption. The input data, the plaintext data, is encrypted using an encryption algorithm, which results in a ciphertext. The encryption process preserves the algebraic structure of the plaintext, therefore allowing mathematical operations to be performed on the ciphertext.
Then come the homomorphic operations: homomorphic encryption schemes are designed to preserve the relationship between plaintext and ciphertext, enabling mathematical operations to be performed on the encrypted data. The encrypted data then undergoes additions, multiplications, or both.
OpenFHE does not support division, but it does support addition and multiplication. And depending on the type of homomorphic encryption used, these operations are performed directly on the ciphertext itself, without the need for decryption. Then the last stage is decryption. After the desired operations have
been performed on the encrypted data, the ciphertext is decrypted using a decryption algorithm. The decryption process transforms the ciphertext back into plaintext, revealing the result of the computation. So there are different types of homomorphic encryption: partially homomorphic encryption, somewhat homomorphic encryption, and fully homomorphic encryption.
I'll just put one line to each. Partially homomorphic encryption supports an unlimited number of either additions or multiplications, but not both. Somewhat homomorphic encryption supports a limited number of both kinds of operations, and fully homomorphic encryption is capable of supporting an unlimited number of both kinds of operations.
So now, fully homomorphic encryption. This is a form of encryption that allows computation to be performed, again, on encrypted data. It is based on homomorphic encryption and builds on top of it. This means that data can be processed and analyzed while it remains encrypted, ensuring privacy.
And some of the mechanisms here are key generation, encryption, homomorphic operations, evaluation of computations, and then finally decryption. So it follows all the steps of the block diagram we just looked at. Now, some of the encrypted ML approaches with FHE: there is the approximate approach, also known as the
CKKS scheme; the hybrid, a blend of the approximate and the lookup table approaches; and the lookup table approach. Just to put it in a nutshell, the CKKS scheme, or the approximate approach, is the fastest for most machine learning applications, especially for larger problem sizes.
The hybrid one, which is a combination of the approximate and the lookup table approaches, is significantly slower than the approximate approach for most machine learning applications. And the lookup table approach is the slowest of all. These different approaches provide different options for implementing FHE in a machine learning setup.
So just a bit more in depth about the approximate approach, also known as the CKKS scheme.
This is a technique in fully homomorphic encryption for encrypted machine learning. It involves approximating complicated functions using polynomials and evaluating these functions over SIMD (single instruction, multiple data) vectors.
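As a small standalone illustration of the "approximate complicated functions with polynomials" idea, here is a plain-NumPy sketch with no encryption involved: we fit a low-degree Chebyshev polynomial to the sigmoid over a bounded input range, which is the kind of polynomial an approximate FHE pipeline can then evaluate using only additions and multiplications. The range and degree below are illustrative choices, not values from the talk.

```python
# Plain-NumPy illustration of the approximation idea behind the CKKS approach
# (no encryption here): replace a non-linear function by a low-degree polynomial
# over a bounded input range, so it can later be evaluated with adds and mults only.
import numpy as np
from numpy.polynomial import chebyshev as C

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

lo, hi, degree = -6.0, 6.0, 9                 # illustrative range and degree
xs = np.linspace(lo, hi, 1000)
coeffs = C.chebfit(xs, sigmoid(xs), degree)   # least-squares Chebyshev fit

approx = C.chebval(xs, coeffs)
print("max abs error on [-6, 6]:", np.max(np.abs(approx - sigmoid(xs))))
```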
Some of the key features of this approach are approximation of complicated functions, the minimax method, as well as SIMD vector evaluation, and of course it supports bootstrapping, which is an important technique for the approximate approach. Due to time, I will not get into the depths of all of these key features, but if you want me to, you can ask me questions later on and we can come back to this.
Next, the hybrid of the approximate and lookup table approaches. In this hybrid approach, what we do is use the lookup table and the approximate approach together.
Some of the key characteristics are: the approximate method for polynomial or matrix arithmetic, and lookup table evaluation for non-linear functions. The hybrid approach is particularly useful for performing comparisons between
encrypted values, since comparison often involves non-linear functions, which suit the lookup table evaluations. It also helps with piecewise polynomial evaluation: the approach can address the input-range issue of polynomial approximations by dividing the input range
into smaller intervals and using different polynomial approximations or lookup tables for each interval. It also allows scheme switching, depending on the type of computation or function being evaluated, and it also
helps with the lookup table evaluation bottleneck. Overall, this approach could be used if you can compromise a bit on speed and you need more than the approximate approach alone offers. Finally, the lookup table approach. This is another method used in FHE for evaluating deep learning algorithms. In
this approach, non-linear functions in the deep learning algorithm are evaluated using lookup tables, and there are some drawbacks associated with this approach as well, particularly in terms of performance and the lack of support for single instruction, multiple data (SIMD) evaluation.
So some of the key points are: it helps with deep learning algorithm evaluation, there is a performance drawback, as mentioned, and it lacks SIMD LUT evaluation. So now, the OpenFHE software stack, the library in action. This is an open-source library.
At the very bottom layer, we have the hardware backends; on top of that, we have the crypto capabilities; and on top of that, we perform matrix arithmetic for different kinds of machine learning applications. Then on top of that, we have the SDKs: the Python SDK, Node.js, Rust; the Google Transpiler is also a use case
of OpenFHE. And then on top of that, the application support. We'll come to all of these, one after the other. So some of the key classes that are used while coding with OpenFHE,
or while using the library, are the CryptoContext, the Ciphertext, and the Plaintext. The CryptoContext class is a wrapper that encapsulates the scheme, the crypto parameters, the encoding parameters, and the keys. It provides a unified API for all the homomorphic encryption schemes supported by OpenFHE.
The CryptoContext class also allows the user to define the cryptographic context, including the encryption scheme to be used, the parameters for encryption and encoding, and the keys required for encryption and decryption operations. It serves as the central component for managing the cryptographic context and performing various operations on the encrypted data.
The Ciphertext class is used to store the ciphertext polynomials generated during the encryption process. It represents the encrypted form of the plaintext data and contains the encrypted coefficients or elements. The Ciphertext class also provides methods to perform operations on encrypted data, such as homomorphic addition or multiplication.
This class allows the user to perform computation on encrypted data without revealing the underlying information. The Plaintext class is used to store the plaintext data in both its raw and encoded forms. It represents the original data that is encrypted using the homomorphic encryption scheme.
This class also provides methods for encoding and decoding the plaintext data, allowing it to be encrypted and decrypted correctly. It lets the user work with the original data in a secure manner, ensuring privacy and confidentiality. So some of the must-know crypto parameters for the CKKS scheme are
the multiplicative depth, the ring dimension, the scaling mod size, and the batch size. The multiplicative depth basically determines how many computations can be performed before the noise overwhelms the ciphertext.
It is also important to choose an appropriate depth based on the complexity of the computation you need to perform. The ring dimension determines the size of the cipher text. Setting a large ring dimension provides more security. The ring dimension determines the number of slots available in the cipher text where each slot can hold a value.
For example, if the ring dimension is set to n = 8192, the ciphertext can hold n/2 = 4096 values. The scaling mod size determines the precision of computations in the CKKS scheme.
It also relates to the maximum absolute value that can be represented in the ciphertext. A larger scaling mod size allows for more precise computations. It is important to choose an appropriate scaling mod size based on the desired precision of the computation. The batch size specifies the number of ciphertext messages that can be processed in parallel.
This is a little different from the deep learning batch size that we generally use; how many of you use deep learning? Here, we are specifying the number of messages that can be processed in parallel: by encrypting multiple messages into a single ciphertext, batch processing can be performed efficiently.
This size determines the number of slots in the ciphertext that can hold individual messages. The next parameter is the multiplicative depth. Denoted by d, it plays a crucial role in bounding the number of mathematical operations that can be performed on a ciphertext.
It determines the maximum number of sequential multiplications that can be executed before reaching a limit. The multiplicative depth directly impacts the size of the ciphertext and the computation time required. Now, the impact on noise: it is also important to consider the noise growth when performing arithmetic operations in homomorphic encryption.
Each operation, such as addition or multiplication, increases the noise in the ciphertext. However, from a noise perspective, multiplication is much costlier than addition; this is something to keep in mind. And then there is also bootstrapping. To address the issue of noise accumulation and maintain the correctness of computations, the noise in a ciphertext has to be reset.
One way is the interactive method, where the ciphertext is sent back to the holder of the secret key, decrypted and re-encrypted to obtain a fresh ciphertext. Cryptographic bootstrapping achieves the same refresh non-interactively: the noise is reset homomorphically, without ever decrypting the data.
The ring dimension, we spoke about it briefly; again, if you have any questions, feel free to come back to me, I just have ten minutes, sorry. The same goes for the scaling mod size and the batch size. So, the OpenFHE library in action: I just wanted to cover this. This is a kind of skeleton of the program that you'll be using; later on, I have an SVM example as well.
The first thing that is happening is that we import the necessary library. Then, in the next three lines, we set the parameter values, the multiplicative depth, the scaling mod size and the batch size, as discussed previously.
Then we set up the parameters object for the scheme that is being used. This represents the parameters for the cryptographic context. The class is specific to the CKKS scheme, which is used for real-number arithmetic in OpenFHE.
The SetMultiplicativeDepth method is called on the parameters object to set the multiplicative depth. This determines the number of multiplication operations that can be performed on encrypted data before the noise level becomes too high. The SetScalingModSize method is called to set the scaling modulus size.
This parameter affects the precision of the computations performed on encrypted data. The SetBatchSize method is called to set the batch size; this parameter determines the number of plaintext elements that can be processed in parallel. The cc, or crypto context, object is created using the GenCryptoContext function,
which generates a cryptographic context based on the specified parameters. The Enable method is called on the cc object to enable features such as public-key encryption and leveled homomorphic encryption. These features allow different types of computations to be performed on the encrypted data.
The keys object is created by calling the KeyGen method on the cc object. This generates the secret and the public key for the cryptographic context. Finally, the EvalMultKeyGen method is called on the cc object to generate the evaluation key for multiplications, using the secret key.
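Putting those setup steps together, a minimal sketch in the Python wrapper might look like the following. It is based on the openfhe-python examples, so exact enum and method names may differ slightly between wrapper versions, and the parameter values are just illustrative.

```python
# Minimal setup sketch following the steps above (based on the openfhe-python
# examples; method and enum names may vary slightly between versions).
from openfhe import *

mult_depth = 1          # how many sequential ciphertext multiplications we need
scale_mod_size = 50     # CKKS scaling modulus size, controls precision
batch_size = 8          # number of packed slots we intend to use

parameters = CCParamsCKKSRNS()
parameters.SetMultiplicativeDepth(mult_depth)
parameters.SetScalingModSize(scale_mod_size)
parameters.SetBatchSize(batch_size)

cc = GenCryptoContext(parameters)        # the central CryptoContext object
cc.Enable(PKESchemeFeature.PKE)          # public-key encryption
cc.Enable(PKESchemeFeature.KEYSWITCH)    # key switching, needed by EvalMult
cc.Enable(PKESchemeFeature.LEVELEDSHE)   # leveled homomorphic operations

keys = cc.KeyGen()                       # public/secret key pair
cc.EvalMultKeyGen(keys.secretKey)        # evaluation key for multiplications
```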
This key is used for performing multiplication operations on the encrypted data. The second block is packing, encrypting and computing. Step one is data preparation: we take two arrays, x1 and x2, and specify their values. Then, in the next step, we do the plaintext encoding of x1 and x2.
This converts them into packed plaintext objects using the MakeCKKSPackedPlaintext function. This function prepares the plaintext values for encryption. The next step, step three, is encryption: the packed plaintext objects are encrypted under the public key with the Encrypt function.
This results in two ciphertext objects, c1 and c2, which will be used later for the homomorphic operations. In step four, we perform homomorphic operations such as addition, subtraction, scalar multiplication and element-wise vector multiplication.
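Continuing the sketch from above, the packing, encryption and homomorphic operations might look like this, again assuming the openfhe-python API as used in the library's simple real-number example; the vectors are illustrative.

```python
# Continuation of the setup sketch: pack, encrypt, and compute on two small vectors.
x1 = [0.25, 0.5, 0.75, 1.0, 2.0, 3.0, 4.0, 5.0]
x2 = [5.0, 4.0, 3.0, 2.0, 1.0, 0.75, 0.5, 0.25]

pt1 = cc.MakeCKKSPackedPlaintext(x1)     # encode as packed CKKS plaintexts
pt2 = cc.MakeCKKSPackedPlaintext(x2)

c1 = cc.Encrypt(keys.publicKey, pt1)     # encrypt under the public key
c2 = cc.Encrypt(keys.publicKey, pt2)

c_add = cc.EvalAdd(c1, c2)               # slot-wise addition
c_sub = cc.EvalSub(c1, c2)               # slot-wise subtraction
c_scalar = cc.EvalMult(c1, 4.0)          # multiplication by a plaintext scalar
c_mult = cc.EvalMult(c1, c2)             # slot-wise ciphertext multiplication
```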
The last step is decryption. This particular code block uses the OpenFHE library to decrypt the ciphertexts and work with the resulting plaintexts.
In the first four lines of code, the Decrypt function is used to decrypt the ciphertexts c_add, c_sub, c_scalar and c_mult using the secret key.
The decrypted plaintexts are stored in the variables pt_add, pt_sub and so on. In the next four lines of code, the SetLength function is used to set the length of each plaintext to the batch size; this is done to ensure the plaintexts have the same length for further processing and analysis. And in the last four lines of code, the GetRealPackedValue function is used to retrieve the real values of the plaintexts and print them; it returns the real part of each plaintext as packed values.
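A hedged sketch of that decryption step, continuing the same example (assumed openfhe-python API; in some wrapper versions Decrypt may take its arguments in the other order):

```python
# Decryption sketch for the ciphertexts computed above (assumed openfhe-python API;
# the argument order of Decrypt may differ between wrapper versions).
pt_add = cc.Decrypt(c_add, keys.secretKey)
pt_mult = cc.Decrypt(c_mult, keys.secretKey)

pt_add.SetLength(batch_size)             # trim to the slots we actually used
pt_mult.SetLength(batch_size)

print("x1 + x2 =", pt_add.GetRealPackedValue())   # decoded real-valued results
print("x1 * x2 =", pt_mult.GetRealPackedValue())
```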
So, now we have an SVM example. Here, yeah, I have the GitHub repo later on.
What we have done is train the SVM in plain machine learning, save the model, and then reload the model to perform the inference in the homomorphic encryption setup, right? The first block, of course, imports the libraries. Then we load the dataset, the credit-approval test set and the corresponding test scores that we saved previously, using pandas and NumPy.
It also loads the pretrained SVM model parameters, the weights and the intercept, from the files weights.txt and intercept.txt. Then we set up the crypto context, the central object of the OpenFHE library, as we have been discussing.
This particular piece of code sets up the crypto context with the CKKS scheme, specifying the multiplicative depth, scaling modulus size and batch size. The crypto context is then enabled for various features such as PKE, which is public-key encryption, key switching, leveled SHE and advanced SHE.
Next is the key generation. This code generates the public/secret key pair using the KeyGen method of the crypto context. It also generates the evaluation keys for the CKKS scheme, which are required for performing the homomorphic operations.
Then comes the encoding and encryption step. The input data, which is a feature vector x, together with the weights and the bias, is encoded into CKKS plaintext objects using the MakeCKKSPackedPlaintext method. The feature vector x is then encrypted using the public key, resulting in the ciphertext ct_x.
And then finally, we perform the evaluation and decryption. What is happening in the evaluation step? It performs the linear SVM inference on the encrypted data using homomorphic encryption: it computes the inner product between the encrypted feature vector ct_x and the plaintext weights.
A mask is applied to the result to select the first element, and the bias is then added to the result using the EvalAdd method. Then the decryption and output: the final ciphertext ct_res is decrypted using the secret key and the result is printed. The expected test score, y_test, is also printed for comparison.
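Pulling those steps together, a minimal sketch of the encrypted linear-SVM inference could look like the following. This is not the repository's exact code: the weights, bias and feature vector below are made-up placeholder values standing in for what the talk loads from weights.txt, intercept.txt and the saved test set, and the API calls follow the openfhe-python examples, so names and signatures may differ slightly by version.

```python
# Hedged sketch of encrypted linear-SVM inference (assumed openfhe-python API;
# the values below are illustrative placeholders, not the talk's real data).
import numpy as np
from openfhe import *

weights = [0.4, -1.3, 0.7, 2.1]     # pretrained linear SVM weights (placeholder)
bias = -0.25                        # intercept (placeholder)
x = [5.1, 3.5, 1.4, 0.2]            # one test sample (placeholder)

params = CCParamsCKKSRNS()
params.SetMultiplicativeDepth(2)    # one mult for w*x, one for the mask
params.SetScalingModSize(50)
params.SetBatchSize(8)

cc = GenCryptoContext(params)
for feature in (PKESchemeFeature.PKE, PKESchemeFeature.KEYSWITCH,
                PKESchemeFeature.LEVELEDSHE, PKESchemeFeature.ADVANCEDSHE):
    cc.Enable(feature)

keys = cc.KeyGen()
cc.EvalMultKeyGen(keys.secretKey)
cc.EvalSumKeyGen(keys.secretKey)    # rotation keys needed by EvalSum

pt_w = cc.MakeCKKSPackedPlaintext(weights)
pt_b = cc.MakeCKKSPackedPlaintext([bias])      # bias in slot 0
pt_mask = cc.MakeCKKSPackedPlaintext([1.0])    # mask that keeps only slot 0
ct_x = cc.Encrypt(keys.publicKey, cc.MakeCKKSPackedPlaintext(x))

ct_prod = cc.EvalMult(ct_x, pt_w)              # slot-wise w_i * x_i
ct_sum = cc.EvalSum(ct_prod, 8)                # inner product, replicated over slots
ct_res = cc.EvalAdd(cc.EvalMult(ct_sum, pt_mask), pt_b)   # mask slot 0, add bias

pt_res = cc.Decrypt(ct_res, keys.secretKey)
pt_res.SetLength(1)
score = pt_res.GetRealPackedValue()[0]         # SVM decision value; sign gives class
print("encrypted score:", score, "plain check:", float(np.dot(weights, x) + bias))
```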
Some of the successful applications of homomorphic encryption can be found in this link. Okay, so I have about one and a half minutes. One of them is the Google Transpiler. Intel also uses OpenFHE for one of their internal libraries.
And then, of course, it is used for federated learning with homomorphic encryption. We have given a tutorial at AAAI on single-node, single-party federated encryption, but we are also working on multi-party, and we will soon be releasing tutorials on that as well. And then there is encrypted inference on pretrained models with privacy-preserving mechanisms.
Right now, it works on tabular data, but our plan is also to explore different kinds of data further down the line. And, yeah, some selected papers: this is the logistic regression training paper, and there is a genome-wide association study that was also done with OpenFHE. Now, healthcare is a very sensitive field when it comes to sharing personal data.
So these kinds of computations come in very handy and are very much needed. We also have a ResNet-20 deep neural network evaluation. And this is the secure genome-wide association study; there is a paper on that, and also a webinar, a PALISADE webinar, on this.
So, if you go to YouTube and search for secure large-scale genome-wide association studies, you should find this video. This work was also sponsored by DARPA for one of their projects. These are some examples, and some of the successfully implemented algorithms are k-nearest neighbors, anomaly detection,
deep neural networks and CNNs for inference, and decision trees. And here are some of the resources that you can find on the internet. Of course, the openfhe-development repository, which is this one.
The OpenFHE GitHub organization, which also has a lot of other repositories, including the openfhe-python repository, the Python wrapper for the original OpenFHE library. There is a Discourse forum as well; feel free to join and engage in conversations. This is the OpenFHE design paper, this is the website, and we also have Docker support.
So, there are two options: if you go to the openfhe-python repository, you can either build a Docker image from scratch, or there is also a pre-built image on Docker Hub, with the link. So, this concludes my presentation. Also, I would just like to announce that the matrix arithmetic library of OpenFHE that already exists
to perform matrix calculations for ML applications, like linear regression and logistic regression, is something we are looking for contributors to bring to Python. So, if any of you are interested, feel free to reach out to me; my email is smundle at openfhe.org.
So, yeah, one sec, sorry. Yeah, this is my email, if you want to take a screenshot of that. That concludes my presentation. Thank you for listening.
Thank you so much, Sukanya. So, if anyone has any questions, please come near this mic and you can ask. Yeah. So, it seems quite involved to implement these methods. Do you know if the major, let's say, ML frameworks like PyTorch, TensorFlow, et cetera,
are they planning to support this to make it, let's say, more user-friendly to perform ML on encrypted data, or do we have to do it ourselves, basically, and construct everything from these multiplications and additions, et cetera?
Yeah, so, as of now, there is no collaboration, right? So, one way of doing that is you code it in scikit-learn or PyTorch, right? And then you still need to import the OpenFHE library to, you know, set up all the cryptographic context,
because that is solely part of the OpenFHE library. But thanks for bringing this point up; I'll take it back to the team to see if we can collaborate. Because in terms of federated learning, TensorFlow has their own TensorFlow Federated, right? And a lot of other libraries, including open-source libraries, support the examples or the code that are built in TensorFlow and PyTorch
to federate them and put them in a federated learning setup. But when it comes to homomorphic encryption, or FHE, fully homomorphic encryption, there are certain schemes and cryptographic parameters that are not by default a part of the ML algorithms that we generally code, right? So, for that, you need to call the library.
I mean, as of now, this is the way out. Thanks for asking this question. Thank you. Thanks. So, I'm wondering: if data is encrypted in transit and at rest, when is this specifically useful? Is it some governmental or specific business requirement that calls for it? What are the main use cases?
What are the clients that request this? Yeah, so we have a lot of university people using this for work on encrypted data. Another example at the industry level is the Google Transpiler, which was mentioned in one of the slides. And Intel is also using it. You know, this is encrypted computing,
so the data can actually stay encrypted at rest, and then the entire ML process can be done in an encrypted manner; at the end, when it is required to unveil the results, they can be decrypted. Another use case is the genome-wide studies, because our DNA sequences are very sensitive
and very, you know, private. So that's one of the use cases. At the present point in time, Duality Technologies is the company, right? They are also using it for their internal privacy-preserving machine learning use cases, and it is used at the research level as well.
And there are a few, very limited, use cases at the industry level as well, where there are sophisticated privacy requirements, so yeah. Thank you so much. But if you have any use case, feel free to reach out; I mean, that's what we did at AAAI as well. A lot of people reached out and then we hooked them up with the right team, so yeah.
So, thank you, Sukanya. I cannot take more questions because of lack of time, but I'm pretty sure Sukanya will be happy to answer your questions in the coffee break; we have a 30-minute coffee break. Please thank Sukanya once again. Thank you. Thank you for listening.