
GraphBLAS: A linear algebraic approach for high-performance graph algorithms


Formal Metadata

Title
GraphBLAS: A linear algebraic approach for high-performance graph algorithms
Number of Parts
490
License
CC Attribution 2.0 Belgium:
You are free to use, adapt and copy, distribute and transmit the work or content in adapted or unchanged form for any legal purpose as long as the work is attributed to the author in the manner specified by the author or licensor.

Content Metadata

Abstract
There is increasing interest in applying graph analytical techniques to a wide array of problems, many operating on large-scale graphs with billions of edges. While graph algorithms and their complexity are textbook material, efficient implementation of such algorithms is still a major challenge for a number of reasons. First, the irregular and unstructured nature of graphs leads to a massive amount of random data access, which makes it difficult to use typical caching and parallelization techniques. Second, to optimize their code, developers need to be aware of the nuances of the underlying hardware, which at the very least consists of multiple CPU cores but often also incorporates heterogeneous components such as GPUs or even FPGAs. During the last decade, a number of graph programming models (such as Google's Pregel) have been proposed, but most of these focused on defining high-level abstractions for distributed execution environments and introduced significant runtime overhead. A potential approach for defining efficient graph processing algorithms is to exploit the well-known duality of graphs and sparse adjacency matrices, using matrix operations to capture algorithms. Surprisingly, only a few recent research prototypes have used this model, with little consensus on the set of necessary building blocks. The GraphBLAS initiative (launched in 2013) aims to define a standard to capture graph algorithms in the language of linear algebra, following in the footsteps of the BLAS standard which, starting four decades ago, revolutionized scientific computing by defining constructs on dense matrices. In this talk, I give an overview of the GraphBLAS standard and its key components. First, I illustrate how matrix operations on various semirings correspond to the steps in graph algorithms. I then use these operations to present fundamental graph algorithms such as breadth-first search, shortest paths, and the clustering coefficient. Finally, I demonstrate the scalability of the GraphBLAS-based algorithms with the LDBC Graphalytics benchmark. The presented implementations are available open-source as part of LAGraph, a library built on top of GraphBLAS to demonstrate how to design efficient algorithms in linear algebra. Intended audience: Developers interested in implementing high-performance graph algorithms. Expected prior knowledge: Familiarity with linear algebra helps, but we only use basic concepts such as matrix-matrix multiplication.
Transcript: English (auto-generated)
OK, welcome, everyone, to this next session. In this session, Gabor is going to talk to us about high-performance graph algorithms. So please welcome Gabor.
Thanks for having me. My name is Gábor Szárnyas. I'm a researcher based in Budapest, Hungary. And I do all kinds of research on graph processing problems with an emphasis on the performance of graph algorithms. And this is how I came across GraphBLAS. So GraphBLAS is a community effort.
This is not my own work; this is a large informal collaboration that I happened to join about one year ago. So the whole notion of GraphBLAS is that we should try to make graph algorithms high-performance, and we should make this accessible to the masses. So what's the fuss about the performance of graph algorithms?
Well, it turns out that graph processing is very difficult in terms of performance. And often, you take a textbook algorithm, you just naively implement it, run it on a mid-sized data set with a few million nodes, and you find that while the complexity isn't all that high, the actual runtime is very long. And the algorithm that you have implemented
is often very slow. There is a great paper by Microsoft Research, which appeared about three years ago, and this identified three key problems with graph processing algorithms. The first one is the curse of connectedness. This refers to the inherently richly connected nature of graphs. By now, I think everyone knows that social networks tend
to be quite densely connected. So if you start from a person in the social network, and then you're looking for three-hop neighbors, four-hop neighbors, or five-hop neighbors, you can basically reach an entire continent with such a short traversal. So you get billions of intermediate results. That's obviously a problem.
The other problem is that the modern computer architectures that you can buy were designed to process linear data structures like lists, stacks, or trees at best. But you cannot just go to a shop and buy a processor for graph computations. These are today on the drawing board. There are people working on this, but this is not something that's accessible.
So that's obviously a problem. And the third problem that they identified is that these computer architectures are particularly bad at supporting caching and parallelization. So all the typical latency hiding techniques that you can employ when doing dense linear algebra or relational operations just do not work here.
So it's very difficult to extract parallelization from graph algorithms. And obviously, we cannot do much with the first aspect. That's just a given of how graphs are structured. That's their inherent value. But we might be able to do something about the other two. And one approach is to try to formulate graph algorithms
in a way that they are more digestible by machines. And by this, I mean, let's try to formulate them in linear algebra. There is the widely known data format called the adjacency matrix. This is quite a simple representation. You take a graph of n nodes and you represent it with a matrix which is a square matrix of n rows and n columns.
And the semantics of the matrix is that you put a one value where there is an edge. For example, here, there is an edge from node seven to node four. And this is denoted by a one value in row seven and column four. And you fill the matrix with these one values and the rest is just zeros.
And it's kind of immediately obvious that this is a very sparse matrix. So most nodes in a graph are not directly connected to each other. So we can easily compress it into a more concise data structure. So we can just remove a bit of clutter and keep the one values. There are many techniques to achieve this.
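To make the sparsity point concrete, here is a minimal sketch in Python using SciPy's CSR format; the only edge encoded is the 7 → 4 example from above, everything else is left empty.

```python
import numpy as np
from scipy.sparse import csr_matrix

# Dense adjacency matrix of a 7-node graph: a 1 in row 7, column 4
# encodes the edge 7 -> 4 from the example (0-indexed below).
A_dense = np.zeros((7, 7), dtype=np.int64)
A_dense[6, 3] = 1

# Compressed Sparse Row keeps only the stored entries.
A = csr_matrix(A_dense)
print(A.nnz)  # 1 stored value instead of 49 cells
```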
And once we have this data structure, we can use vector and matrix operations to express graph algorithms. For example, if we would like to do a graph traversal, let's say we're interested in traversals originating from node seven, we can start with this vector where the seventh position of the vector
contains a one value and the rest is just zeros. And if we compute v times A, the adjacency matrix, we will get one values in positions three, four, and five, indicating that we can reach these nodes in the graph. And if we do a two-hop traversal, which is computed with v times A squared,
we get values in one, three, and six. And six actually has a value of two, which means that we could find two ways to get to node six from node seven. So this has kind of tangible semantics. And at this point, you might say that this is all textbook stuff. And indeed it is. There has been like half a century of literature,
textbooks, and research papers on how to do linear algebra for graph processing. But unfortunately, there are very few practical implementations. There are some research prototypes, but there aren't that many implementations that you, as a regular user, can just get your hands on and implement some graph algorithms.
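To make the traversal example above concrete, here is a small NumPy sketch you can run yourself. The talk does not spell out the full edge list, so the edges below are a hypothetical reconstruction chosen to reproduce the numbers mentioned (ones at nodes 3, 4, 5 after one hop, and a value of 2 at node 6 after two hops).

```python
import numpy as np

n = 7
A = np.zeros((n, n), dtype=np.int64)
# Hypothetical edge list, consistent with the numbers in the talk:
for src, dst in [(7, 3), (7, 4), (7, 5), (3, 6), (4, 6), (4, 1), (5, 3)]:
    A[src - 1, dst - 1] = 1   # 1-indexed nodes -> 0-indexed cells

v = np.zeros(n, dtype=np.int64)
v[7 - 1] = 1                  # start the traversal from node 7

print(v @ A)       # one hop:  ones at nodes 3, 4, 5
print(v @ A @ A)   # two hops: node 6 has value 2 (two distinct paths)
```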
So the GraphBLAS standard set out to fix this problem, and to understand the etymology of this name, GraphBLAS, we have to travel back to the late seventies. The late seventies saw a large increase in scientific computations, but the hardware back then was very heterogeneous.
And this was a problem because if you're a physicist or a technical engineer and you would like to do some finite element modeling, simulation, or anything else that requires linear algebra, you don't want to spend your time on coming up with a matrix multiplication operation that is efficient on the hardware that you currently have. And if you buy new hardware,
you would like to extract the performance of that hardware without rewriting all your underlying matrix multiplication operations. So BLAS, which is an abbreviation of Basic Linear Algebra Subprograms, aimed to fix this. And in multiple iterations, it released a standard that captures the essence of dense linear algebraic computations.
And this is great because now the concerns of the scientists and engineers who do the numerical applications are separated from those who do the matrix operations and those who design the underlying hardware architecture. So this is a very nice decoupling and this has essentially revolutionized how scientific computing over dense linear algebra is done.
So this is great, but this is not something that we can directly use for graphs. And this has been realized by BLAS developers as well and there was an extension to the BLAS standard called SparkBLAS in 2001. But it didn't really take up, maybe the timing was wrong.
It wasn't really a successful venture. But 12 years later, something called the GraphBLAS was released, which is an effort to build building blocks for graph algorithms in linear algebra. And it has the very same idea. Let's pick up all the concerns of the graph analytical application developers from those who design the underlying hardware
and the matrix multiplication operations that serve as the basis of graph computations. And the timing is perfectly right now because we have heterogeneous hardware, we have GPUs, we have FPGAs and many other ways to do efficient graph computations. One of the cornerstones of these graph computations
is the semiring-based operation on matrices. So just to recap, matrix multiplication is an operation that can be performed on two matrices provided that the middle dimensions align. So A here is a four by two matrix
and B is a two by three matrix. And if we are to compute something like the second row and the third column of the resulting matrix C, we essentially compute a dot product from the second row of A and the third column of B. So here we do two times five, three times four, and then we use the addition operator
to sum these together, hence getting 22 in that cell. So this is all pretty standard. And the idea in GraphBLAS, and in semiring-based graph computation in general, is that there is nothing to stop us from taking out this plus operator and the multiplication operator and replacing them with something else, provided
that the operation that we are doing still makes sense in some way. So let's try to generalize this matrix multiplication. When I said that it still has to make sense, that basically means that it still has to satisfy some basic mathematical properties,
and this is captured by the notion of a semiring. So a semiring is formally this quadruple with a domain, an addition operator, a multiplication operator, and an identity element denoted with zero. And we have pretty relaxed requirements. The requirement is that the addition has to form a commutative monoid,
which is actually pretty simple. It has to be commutative, associative, and zero has to act as an identity. And multiplication has to conform to an even more relaxed set of requirements. We only need that it's a closed binary operator. So unlike the mathematical semiring, we don't actually require distributivity,
and we don't require that the multiplication is a monoid as well. So it's pretty easy to satisfy. And everyone in the room already knows many common semirings. For example, the integer arithmetic that we have performed the matrix multiplication on
is such a semiring. The real arithmetic is a semiring as well. And if we take more interesting cases, we can have something like a logical semiring, where the identity element is false, the addition operator is the logical or, and the multiplication operator is the logical and. And we have fields, power sets, et cetera.
So it's kind of a rich family of algebraic structures. And the notation is that we put the addition operator first, then the dot, and then the multiplication operator. So we can say A "or.and" B, and that's the matrix multiplication on that semiring.
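To show mechanically what swapping the operators means, here is a minimal plain-Python sketch of vector-matrix multiplication over an arbitrary semiring. This is only an illustration of the concept, not the GraphBLAS API; the three semiring definitions match the ones discussed in this talk.

```python
def semiring_vxm(add, mul, zero, v, A):
    """Vector-matrix product over the semiring <add, mul, zero>:
    out[j] is the add-reduction over i of mul(v[i], A[i][j])."""
    n_cols = len(A[0])
    out = []
    for j in range(n_cols):
        acc = zero
        for i, vi in enumerate(v):
            acc = add(acc, mul(vi, A[i][j]))
        out.append(acc)
    return out

# The semirings from the talk, as (add, mul, zero) triples:
plus_times = (lambda x, y: x + y, lambda x, y: x * y, 0)        # path counting
or_and = (lambda x, y: x or y, lambda x, y: x and y, False)     # reachability
min_plus = (min, lambda x, y: x + y, float("inf"))              # shortest distances

# Example: reachability step, with v a list of booleans and
# A a 0/1 list-of-lists adjacency matrix:
#   reached = semiring_vxm(*or_and, v, A)
```

Note that for min-plus, missing edges must be encoded as plus infinity rather than zero, since plus infinity is the identity element of the min "addition".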
So how does this work in practice? Well, to stay within the comfortable realm of integer arithmetic, we can compute the number of paths in a graph as I have shown. So in this example, we start the traversals from node four and node six.
And if we do V times A on the traditional integer semiring and we keep an eye on column three, we will see that one times one is one, one times one is one, and then this is aggregated by the addition operator of the semiring, showing that we can traverse from these two nodes to node three on two paths.
So this is nothing new. But how about the case when we're not interested in the number of paths, we would just like to compute reachability? In this case, we can use the Boolean semiring, and this is important because we can save a lot of space by only storing true or false values.
Well, only the true values actually, because we can hide the identity element, false, in the elements that are not stored. So if here we compute V times A on this semiring, we learn that true and true is true, true and true is again true, and if we aggregate them
with a logical OR operator, then we get true here. So this is a more concise representation; we lost the information about the number of paths, but it requires a bit less memory. So let's see something more interesting. There is a semiring called the min-plus semiring, which is kind of tricky because we take the addition operator
and replace it with a minimum operator, and we take the multiplication operator and replace it with the plus, the addition operator. And now the identity element becomes plus infinity. So I've updated the example a bit. Now we have some weights; for example, we know that we have reached node four
with a cost of 0.5 and node six with a cost of 0.6. And if we compute the V min plus A matrix multiplication, we can learn that from node four, we can go to node three with a cost of 0.5 plus 0.4, which is in total 0.9.
And from node six, we can go to node three with a cost of 0.6 plus 0.5, which is 1.1 in total. And now these are aggregated by the minimum operator, which selects the cheapest or shortest path from these nodes. So we actually learned that we can go to node three
with the minimum cost of 0.9. Let's make some good use of this semiring. One of the most essential textbook algorithms is the single-source shortest path algorithm. It can be stated as: given a start node, find the shortest path to every other node
in the graph that is reachable from this node. And there is a textbook algorithm, the Bellman-Ford algorithm, which does repeated relaxation of the edges in each step, and it can be proven to find the shortest paths using at most n minus one steps for an n-node graph. And the observation is that this relaxation step can be very conveniently captured
with this vector-matrix multiplication operation that I have just shown. So how does this work? Well, we have to update the definition of the adjacency matrix a bit. So we store, obviously, the weights between the nodes in the matrix, and we also add these imaginary loop edges with zero cost.
So we just fill the diagonal with zeros, because each node is reachable from itself with a cost of zero. And now we do the multiplications. Let's suppose we start in node one, so we put a zero value there, an explicit zero cost showing that we are in this node,
and each multiplication step will extend this traversal and relax the edges one by one. So if I remove the clutter, we only see the stored values, and the first step will lead us to nodes one, two, and four, and the next one will lead us to, additionally,
nodes three, five, and seven. Then we will go to node six, and it's worth keeping an eye on node three because that has just been updated from 1.2 to 1.1, and again from 1.1 to one, because the algorithm will find the actual shortest path, which goes through nodes one, two, five, six, and three.
So it found the shortest path from node one. And now it has reached a fixed point because now it's actually finished the traversal. So this can be summarized with a single slide. So we have this adjacency matrix, and we have a four-line algorithm.
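Here is a minimal NumPy sketch of the four-line algorithm just described, under the assumption that A is a dense weight matrix with plus infinity for missing edges and zeros on the diagonal (the zero-cost loop edges from above); a real GraphBLAS implementation works on sparse data, but the fixed-point iteration is the same.

```python
import numpy as np

def sssp(A, source):
    """Bellman-Ford via repeated min-plus vector-matrix products."""
    n = A.shape[0]
    d = np.full(n, np.inf)
    d[source] = 0.0                   # explicit zero cost: we start here
    for _ in range(n - 1):            # at most n-1 relaxation rounds
        # min-plus product: d_new[j] = min over i of (d[i] + A[i, j]);
        # the zero diagonal guarantees distances never increase
        d_new = np.min(d[:, None] + A, axis=0)
        if np.array_equal(d_new, d):  # fixed point: traversal finished
            break
        d = d_new
    return d
```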
There are some other optimizations, like switching between the transposed version of the matrix and the non-transposed version, but that's about it. Okay, let's see another algorithm. This is node-wise triangle count, which is a very important building block in many algorithms, such as computing clustering coefficients
or finding communities. The definition of the triangle is actually pretty simple. It's a set of three mutually adjacent nodes in the graph. In other words, it's a three-length cycle or a three-length closed path in the graph. And here we can see that node one is participating in two triangles: the one-two-four
and the one-four-two triangles, due to the symmetry. So this is actually a very difficult problem to compute. Let's try to solve it in the naive way so that you see why it's actually difficult. So here the idea is that we should just enumerate all the three-length paths, and check whether one of them closes into itself.
So we compute A times A times A, and then check the elements in the diagonal, because being in the diagonal means that a path started from node i and ended in node i, so it's a closed three-length path. But the problem is that this matrix
is no longer sparse. It has actually become very dense, and it's very expensive to compute. So that's not really a feasible solution for real-sized problems. One of the optimizations that we can employ is to replace the second matrix multiplication with an element-wise multiplication that closes these two-length paths into triangles.
And then we cannot just conveniently take the diagonal, but we can do a row-wise summation to get the number of triangles for each node in the graph. So this looks a bit better, because now we could just compute A times A and then do the row-wise summation,
but it's still not really ideal because A times A, as you can see, is very dense. So here we can employ the technique called masking, which has been introduced in GraphBLAS. This is an old idea in computer science, and masking is designed to limit the scope of given operations.
So here, instead of computing the full A times A and then doing the element-wise multiplication, we only have to compute the values in the locations that are set in the mask, colored here in gray. So the triangle count algorithm is actually pretty simple.
It's just two lines of code, and this is actually a high-performance algorithm. You can do a bit of extra work to remove the duplicates, but it's actually pretty easy to implement and you get high performance. So at this point, you might be asking, how about other algorithms? Well, actually, it turns out that during the last decade,
many good folks have been working on this, and they reformulated lots of the classic textbook algorithms in terms of linear algebra. In all kinds of problem categories that you can think of, there is always at least one algorithm that can be captured in linear algebra efficiently.
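Going back to the triangle count for a moment, here is how the unmasked version might look as a plain SciPy sketch. Note that this materializes the full A times A product before the element-wise step, which is exactly the cost that GraphBLAS's fused masking avoids.

```python
import numpy as np
import scipy.sparse as sp

def node_triangle_counts(A):
    """A: symmetric 0/1 integer adjacency matrix (CSR) of an undirected graph."""
    # (A @ A)[i, j] counts 2-length paths from i to j; multiplying
    # element-wise by A keeps only those that close into a triangle.
    closed = A.multiply(A @ A)
    # Row-wise summation; each triangle at node i is counted twice
    # (once per orientation), hence the division by two.
    return np.asarray(closed.sum(axis=1)).ravel() // 2
```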
All right, so do I have one minute or five minutes? Because I thought it's 25. It's 25 in total. All right. So if you use the full 25, then we won't have time for questions. Okay. So, basically, now we can speed up. Here is the GraphBLAS C API and the implementations.
There is a very elaborate C API, and SuiteSparse:GraphBLAS is the main reference implementation, which is actually pretty high-performance on CPUs. And for those of you who don't like C programming or just want to do productive prototyping,
there are Python wrappers on offer; there are now two Python wrappers which can be used to concisely capture the algorithms. The ones I've shown are on the right, so it's actually pretty easy to program this. We did a number of benchmarks. We will publish these later this year. There is one with LDBC Graphalytics, which has shown that it can scale
up to billions of nodes, and there is another one with the GAP benchmark suite, which is an ongoing effort in collaboration with other graph analytical tools. That one is also an ongoing write-up, and the paper will be submitted in the next quarter. So this slideshow just goes on and on.
So if you're interested, there are 200-plus slides in total. I will put them online, and I tried to formulate all the graph algorithms as concisely as possible. And to sum up, I think this is a really sweet spot in terms of graph computations, because linear algebra is a very powerful abstraction
with good expressive power, but also with conciseness. It's not trivial to learn, but it's a fairly concrete algebra, so it's actually very tangible. And well, the problem is there are only a few GraphBLAS implementations yet, but this seems to be changing. We had many interesting talks
relating to GraphBLAS in the graph devroom yesterday. So overall, I think it's a very promising programming model, and it's a very timely contribution, because we are in an era of heterogeneous hardware again. Thank you.
We do have time for questions. Which slide? One more, please. One more, please, sure. So this is it. It's a collection of GraphBLAS-related resources. How easy is it to apply linear algebra
in relation to temporal graphs? So the question was, how easy is it to apply linear algebra in relation to temporal graphs. I don't think it has been attempted yet,
but probably applying the formulas isn't the problem. The problem is that you have to find some way of versioning this, or some way of representing time, and this is probably subject to research. Yes? Do you know if there is any limitation
about any algorithm to check graph homomorphism? So graph homomorphism was the question. There are interesting works, like encoding techniques to check graph isomorphism, and checking homomorphism could be done,
but I don't think that's the sweet spot for GraphBLAS, though that's also something that's being actively worked on. Yes? First, thank you for the presentation, it was very good. And second, in your examples, you have shown undirected graphs,
so the matrices in those examples are symmetric; so maybe it's better for performance, and maybe what you have built exploits that? And the second question was, how do you apply that to directed graphs? So the question was about directed and undirected graphs.
This is actually a directed graph, in a sense, so it works pretty well. There isn't all that much of a difference; provided that you can transpose the graph, it works just as well with directed ones.
So the question was whether you need to change your data structures. The way the whole GraphBLAS API is thought out is that you have opaque data structures, so you trust GraphBLAS to represent your data in the best way that it can,
which, of course, means handing over the responsibility. So it's kind of difficult to take existing data structures; you have to use the opaque data structures and run GraphBLAS on top of those, and you usually need an expensive loading step. That said, the developers intend to allow GraphBLAS computations on existing, similar data structures;
for example, lots of people have data in NumPy or SciPy, and it would be great if they could just use those data structures, so this is something that's going to be on the agenda of the GraphBLAS developers.