Growing Pinecones for P2P Matrix
Formal Metadata

Title | Growing Pinecones for P2P Matrix
Title of Series | FOSDEM 2022 (talk 118 of 287)
Number of Parts | 287
License | CC Attribution 2.0 Belgium: You are free to use, adapt and copy, distribute and transmit the work or content in adapted or unchanged form for any legal purpose as long as the work is attributed to the author in the manner specified by the author or licensor.
Identifiers | 10.5446/56895 (DOI)
Language | English
Transcript: English (auto-generated)
00:15
Hello, well, thanks for attending this talk today. My name is Neil and I work on the peer-to-peer matrix project.
00:23
Over the last year, I've spent quite a lot of time working on Pinecone, a new overlay network that will hopefully become the foundation of peer-to-peer matrix. Pinecone is an overlay network that's designed to provide end-to-end reachability between devices in a mesh fashion.
00:40
Regardless of whether you're operating over the internet, over a local area network, or using proximity-based networking like Bluetooth, it should be possible for any two nodes participating in the same mesh to communicate with each other. One of our core goals is to minimize the amount of information that a Pinecone node needs in order to operate. So, they typically don't store much state.
01:02
They only need information about the spanning tree, their direct peers, and network paths through the current node in order to work. And this typically means that Pinecone nodes only have a few routing table entries and therefore they don't need much memory to run. So, why are we doing this? Ultimately, we want peer-to-peer matrix to be able to operate over a variety of transports
01:24
and we want those transports to be basically invisible to the application layer. It shouldn't matter how we connect to other nodes. The goal is ultimately the same. We just want to be able to address a node by its public key and to exchange traffic with it. This also means that we can bridge different transports together into a single network.
01:43
Nodes can have peers with different link types, and the application should be able to operate without any real concern for how those links were set up. And finally, there is a very sad trend at the moment of more and more internet users being placed behind carrier-grade NATs and restrictive firewalls, who don't stand a chance of being able to accept incoming connections
02:03
without the assistance of either some kind of relay technology or an overlay network. So, this time last year we introduced Pinecone, and it's a re-implementation of many of the same ideas that were in development on the Yggdrasol network at the time, including tree routing, path finding, source routing,
02:22
and a new experimental routing scheme which we have now formed into SNEK. Our initial pass at doing this was quite rough, to be honest, and in the end we also ended up just performing a complete rewrite of the Pinecone router code, which not only made it much easier to reason about and to document,
02:41
but it also meant that we solved a number of performance issues and race conditions that we were seeing at the time. In addition to that, we've also been maintaining builds of the peer-to-peer matrix demos for iOS and for Android. Now, these have gone through various iterations over the last couple of years using various different routing protocols, and the latest builds are now using the latest Pinecone protocol and code.
03:05
And these are available today to play with. They're linked from the topic of the peer-to-peer matrix room, which is linked in the slide. Although, to be honest, the experience of using the demo still isn't fantastic yet, there are still a number of Dendrite and matrix-specific issues that we need to work on, like how we improve the federation experience and cope with devices coming and going,
03:25
which are problems fairly agnostic to the routing algorithm that's used. The main design highlights of Pinecone are that it gives everybody a public key. Their public key is their address on the network, and their address only changes if their keys change.
03:41
It gets very close to the concept of mobile IP, in that you can move around the network and roam, and yet your address stays the same. All nodes on the network are considered to be pretty much equal players. That is, that everybody is forwarding traffic on behalf of nodes around them, and that allows us to, you know, reasonably arrive at the promise that all nodes can contact other nodes.
04:04
Fast convergence times are also really important, especially in a network where we're dealing with devices that move around a lot, like mobile phones, tablets, and laptops. So the network must be able to update quickly when changes happen, and preferably with the smallest amount of disruption to the traffic that's already in transit on the network at that time.
04:24
And we do this using proactive routing updates, which actually get us very close to that goal. As I mentioned before, we also have very small routing tables, so it doesn't really take a lot of resources to run or to embed a Pinecone node, and we're able to operate over pretty much any transport
04:41
that guarantees both reliability and ordering. So we've successfully proven Pinecone working over TCP, over WebSockets, and over Bluetooth Low Energy in the P2P matrix demos, and we should reasonably be able to support any other link type that provides those same guarantees. Each Pinecone node is participating in a global spanning tree.
05:04
The root node of the spanning tree is chosen by an iterative process, where the node with the highest public key wins. If that node disappears, the node with the next highest public key will be chosen to take over the role, and so on. And this means that the node that's chosen for the task will feel pretty much arbitrary a lot of the time.
05:24
Now, the root node has exactly one special job. They must send out root announcement packets to all of their directly connected peers with their public key and a sequence number. And other than that, the root node is just like any other node. Now, the sequence number that we send out must be increasing.
05:43
If the root node sends an update that looks strange in any way, for example, the sequence number wasn't increasing, it was actually lower than the last update, or if the root node just gives up sending root announcements altogether, then the next strongest key will take over the root role after a timeout period.
06:00
All nodes in the network will receive root announcements from their peers, and they must select a single parent before repeating the announcement from their chosen parent to all of their peers. And this means that the root public key and the sequence number are effectively flooded to all nodes in the network.
06:20
Given that each node needs to sign the root announcement from their parent before passing it on, this means that every node on the network learns a verified path up to the root for free with the spanning tree. So, as the root node is always the node with the highest public key, this means that we always have at least one worst-case ascending path up through keyspace.
06:42
Using the information in the root updates, this allows nodes to determine their own coordinates on the spanning tree relative to the root node. Now, the tree routing algorithm is simple. We just take our coordinates and the destination coordinates, we subtract the number of hops to the common ancestor, and then we add the lengths together. And the tree routing generally finds paths that are quite direct.
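That distance metric is small enough to write out. Below is an illustrative sketch, assuming coordinates are simply the list of peering ports from the root down to a node; the real types differ:

    package main

    import "fmt"

    // Coordinates describe a node's position on the spanning tree as
    // the list of peering ports taken from the root down to that node.
    type Coordinates []uint64

    // treeDistance counts the hops from a up to the common ancestor,
    // plus the hops back down to b: both lengths added together, with
    // the shared prefix subtracted from each side.
    func treeDistance(a, b Coordinates) int {
        shared := 0
        for shared < len(a) && shared < len(b) && a[shared] == b[shared] {
            shared++
        }
        return (len(a) - shared) + (len(b) - shared)
    }

    func main() {
        a := Coordinates{1, 2, 3}       // root -> 1 -> 2 -> 3
        b := Coordinates{1, 2, 7, 4}    // shares the ancestor at {1, 2}
        fmt.Println(treeDistance(a, b)) // prints 3: one hop up, two hops down
    }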
07:05
So you might be wondering: if nodes can assign themselves a locator from the spanning tree (their coordinates), and the spanning tree forwarding algorithm provides fairly direct paths, isn't that sufficient? Sadly, the downside to this system is that
07:20
if the coordinates are always relative to your parent node on the spanning tree, that means if that parent disappears, or a new better parent comes along, or worse, the root node changes and causes a global shift on the network, you get a new set of coordinates, and therefore you're no longer reachable at the previous coordinates that you had. And there's also the question of how you associate
07:42
a public key of a node with a set of coordinates. Now, earlier versions of the Yggdrasil network worked in exactly this way, and they mapped keys to coordinates using a distributed hash table, but it was difficult to avoid the temporary disruption and packet loss that would happen if the tree shifted.
08:01
That's where SNEK comes in. Now, the tree routing gives us a suitable topology to exchange setup messages with other nodes, which we only need to do periodically, so it doesn't really matter if the tree shakes too much otherwise. The idea of the SNEK topology is to arrange nodes on the network into a virtual line,
08:21
where each node has a relationship with its keyspace neighbors, that is, nodes with the next key down from ours and nodes with the next key up. Here we show the same network in two different views. On the right-hand side, you'll see a network of nine nodes and how they are physically connected to each other.
08:41
So node 2 is connected to nodes 1, 4, and 8. Node 4 is connected to nodes 2, 3, and 7, and so on. On the left-hand side, we show the SNEC topology, that is, all of the nodes on the network arranged into a line, ordered by their public keys. In this instance, the highest keys are to the left,
09:00
and the lowest keys are to the right, so 883E is a higher key than 701E, and 9B67 is a higher key than 883E, and so on. To illustrate how SNEK works, we're going to take a pair of nodes, 5 and 7, which are highlighted on the diagram to the left.
09:21
These are keyspace neighbors. There are no other nodes on the network with a public key that falls in the middle of those two nodes' keys, but you'll notice from the right-hand view that they aren't physically connected to each other. There are multiple nodes between those two nodes. So what happens is that the node with the lower of the two keys,
09:41
in this case node 7, sends a bootstrap message into the network. Bootstrap messages are forwarded to the node with the key that's closest to our own, so in this case, that's node 5. Now it's possible that node 5 already has a neighbor relationship with another node, but that's okay. If our new bootstrap message is from a node
10:02
that's closer to node 5's key than its existing neighbor, we'll just replace that relationship with this new one. The result is that node 5's descending node entry, that is, the node with the next lower public key, becomes node 7. Likewise, node 7's ascending node entry,
10:21
that is, the node with the next higher public key, becomes node 5. So both nodes now reference each other. To set up this relationship, they had to exchange path setup messages, and the nodes that are in the middle, nodes 1, 2, and 4, had to forward those messages. The intermediate nodes populate their routing table
10:43
based on those setup messages, so those three nodes will add routing table entries accordingly based on that setup. As a result, they've learned about a new path between node 5's public key and node 7's public key.
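A rough sketch of that replacement rule is below; the helper and its types are hypothetical, not Pinecone's real API:

    package main

    import (
        "bytes"
        "fmt"
    )

    type PublicKey [32]byte

    // isBetterDescending reports whether a bootstrap from candidate
    // should replace our current descending keyspace neighbour (the
    // next key down from ours). current is nil if we have none yet.
    func isBetterDescending(us PublicKey, current *PublicKey, candidate PublicKey) bool {
        if bytes.Compare(candidate[:], us[:]) >= 0 {
            return false // not below us in keyspace at all
        }
        if current == nil {
            return true // any valid candidate beats having none
        }
        // "Closer to our key from below" means strictly higher than
        // the key of the neighbour we already have.
        return bytes.Compare(candidate[:], current[:]) == 1
    }

    func main() {
        us := PublicKey{0x90}
        old := PublicKey{0x70}       // existing descending neighbour
        candidate := PublicKey{0x88} // falls between old and us
        fmt.Println(isBetterDescending(us, &old, candidate)) // true: replace
    }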
11:00
Now if you imagine that every pair of nodes on the network is doing the exact same thing, as nodes set up paths to their keyspace neighbors, the routing tables of other nodes on the network are populating with more and more information about which directions they can travel in order to reach certain keys. So the routing logic after that is quite straightforward. When we want to forward a packet, we just pick the routing table entry
11:21
that has the key that's closest to the one that we want to talk to, and we forward it there. In this regard, it's not really dissimilar to how a distributed hash table relays messages in order to get closer to a target content ID. In order to keep the routing information up to date across the network, we do need to react quickly when a path breaks.
11:43
So in the previous example, if one of the nodes in between nodes 5 and 7 disappeared, we need to notify everybody else that cares about that path that it's now a dead path. And we do this by sending tear-down messages along the path, resulting in the routing table entries for that path being deleted.
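As a rough illustration, assuming a routing table keyed by a per-path identifier (an assumption of this sketch, not a description of the real data structures), teardown handling might look like this:

    package main

    import "fmt"

    // PathID identifies a single path in this sketch; the real
    // protocol identifies paths differently.
    type PathID uint64

    // Entry records the two neighbouring hops along a path.
    type Entry struct {
        PrevPort, NextPort int
    }

    type Router struct {
        paths map[PathID]Entry
    }

    // handleTeardown deletes the local entry for a dead path and
    // returns the port the teardown should continue on, or -1 at the
    // end of the path, so every node along it cleans up its own state.
    func (r *Router) handleTeardown(id PathID, arrivedOn int) int {
        e, ok := r.paths[id]
        if !ok {
            return -1 // already torn down, nothing left to forward
        }
        delete(r.paths, id)
        if arrivedOn == e.PrevPort {
            return e.NextPort
        }
        return e.PrevPort
    }

    func main() {
        r := &Router{paths: map[PathID]Entry{42: {PrevPort: 1, NextPort: 3}}}
        fmt.Println(r.handleTeardown(42, 1)) // prints 3: forward onwards
    }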
12:01
The keyspace neighbors then bootstrap again using a different path if possible. And this means that the network can heal very quickly and quite often without disruption to traffic that's already in transit. If a more direct path disappears, there are likely still backup paths via less direct routes available, so we'll just forward using those instead.
12:21
And because the node's public keys didn't change as a result of the new topology, we don't need to scan the network in order to find the new location of that node. We just trust that we can keep forwarding until we eventually reach the node in its new location. For this to work, and to not introduce routing loops while we do it,
12:40
the core rule is that the next hop for any routing decision must strictly take the packet closer to the destination public key. We will not backtrack, nor will we forward anything to a node whose key is further away from the destination key than our own key is. To make a routing decision, we use information from both the spanning tree
13:01
and the SNEK routing table. In addition to that, we'll also look at the public keys of our direct peers and their ancestors in the spanning tree. And this gives us a pretty good spread of public keys to select from to make a routing decision. So hopefully we can find one that's closer to the destination. We'll also try our hardest to take shortcuts if we can.
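Here is a minimal sketch of that greedy, strict-progress selection. Treating keys as big-endian integers for the distance comparison is an illustrative simplification, not necessarily Pinecone's exact metric:

    package main

    import (
        "fmt"
        "math/big"
    )

    type PublicKey [32]byte

    // Candidate is a possible next hop: a known key (from the SNEK
    // table, a direct peer, or a peer's tree ancestor) and the port
    // that reaches it.
    type Candidate struct {
        Key  PublicKey
        Port int
    }

    // keyDistance returns |a - b| with keys read as big integers.
    func keyDistance(a, b PublicKey) *big.Int {
        x := new(big.Int).SetBytes(a[:])
        y := new(big.Int).SetBytes(b[:])
        return new(big.Int).Abs(x.Sub(x, y))
    }

    // nextHop picks the candidate strictly closer to dest than we are,
    // or returns -1 if none makes progress (no backtracking allowed).
    func nextHop(us, dest PublicKey, candidates []Candidate) int {
        best, bestDist := -1, keyDistance(us, dest)
        for _, c := range candidates {
            if d := keyDistance(c.Key, dest); d.Cmp(bestDist) < 0 {
                best, bestDist = c.Port, d
            }
        }
        return best
    }

    func main() {
        us, dest := PublicKey{0x10}, PublicKey{0x80}
        peers := []Candidate{{Key: PublicKey{0x60}, Port: 2}, {Key: PublicKey{0x05}, Port: 7}}
        fmt.Println(nextHop(us, dest, peers)) // prints 2: the only strict progress
    }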
13:21
As an example of such a shortcut, if we notice that the destination key is actually one of our peers' ancestors, or it's very close to it, we will shortcut via that peer instead. And in some cases, this helps us to shorten the path lengths that are taken in order to reach the destination. To examine how SNEK performs at finding paths,
13:41
we measure the stretch. Now stretch is how many times longer the path is compared to the real-world Dijkstra shortest path through the network. A stretch of two means that the path is twice as long as the shortest real path, so lower numbers are better. To do this, each node effectively runs a traceroute
14:01
to every other node on the graph, and we measure how many hops it took. The numbers displayed on the slides are mean averages, and the tree paths are used to set up SNEK neighbour relationships, but really we're much more interested in the length of the SNEK paths, since those are the paths that actual normal traffic will take,
14:22
as opposed to protocol traffic. On the left, we have a network of nodes that is generated completely at random. There is no logic at all to how the nodes are connected to each other, and in this particular network simulation, we see that SNEK typically found paths that are on average 1.7 times the length
14:40
of the real shortest path. In the middle, we have a network that's designed to be completely uniform. It's a perfect grid, and here, SNEK achieved paths on average about 1.57 times the length of the real shortest path. Finally, on the right, we have a real-world network,
15:00
which is based on the Freifunk network in Leipzig, using data from the MeshnetLab graphs, and this is a sample of a real-world network topology, and here we achieved a path length that is on average only 1.18 times the length of the real shortest path. So real-world networks are actually yielding better results, which is very encouraging.
15:23
Now, stretch varies a lot with network topology and with key distribution, but the data tells us that even in networks that aren't ideal at all, we still typically manage to find paths that are no more than double the length of the real shortest path, and moreover, we do this with nodes typically only having a handful of routing table entries.
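For clarity, the stretch figures above can be reproduced from raw hop counts like this; the Measurement type is made up for illustration:

    package main

    import "fmt"

    // Measurement records, for one source/destination pair, the hops a
    // packet actually took under SNEK forwarding versus the hops on
    // the true (Dijkstra) shortest path through the graph.
    type Measurement struct {
        ActualHops   int
        ShortestHops int
    }

    // meanStretch averages ActualHops/ShortestHops over all pairs; a
    // result of 1.0 would mean every path taken was already optimal.
    func meanStretch(ms []Measurement) float64 {
        if len(ms) == 0 {
            return 0
        }
        total := 0.0
        for _, m := range ms {
            total += float64(m.ActualHops) / float64(m.ShortestHops)
        }
        return total / float64(len(ms))
    }

    func main() {
        ms := []Measurement{{6, 4}, {3, 3}, {5, 4}}
        fmt.Printf("mean stretch: %.2f\n", meanStretch(ms)) // 1.25
    }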
15:44
To get a sense of how the routing scheme handles mobility scenarios, we use a number of tests within the MeshnetLab suite. Now, these spin up a bunch of nodes in virtual network namespaces, wire them together appropriately, and then run scenarios on them. One of the more aggressive tests in the suite
16:01
for seeing how a protocol handles nodes moving around is the mobility2 test, which we've displayed here. It starts by randomly placing all of the nodes in a one-kilometer-by-one-kilometer square grid, and then it starts moving them around. Between each set of movements, it runs pings from every node to every other node
16:20
to determine if they're all still reachable to each other after the movements have occurred. And this should be a reasonably good indicator of how well the network converges after links come and go. The test starts by making smaller movements. So to start with, nodes only move by 50 meters, but during the test the step distance increases until we're moving as much as 400 meters,
16:44
and that would significantly disrupt most wireless networks. Now, as you can see from the graph, Pinecone in green performed significantly better than Batman Advanced in orange and Babel in red. Batman, while holding up quite well, still regularly loses up to 30% of the pings through the test.
17:04
Babel performs significantly worse than that once the step distance increases over 50 meters. The MeshnetLab tests only cover a rather limited number of scenarios, and due to the fact that each Pinecone node has to run in its own process there,
17:21
that can increase the overheads and require quite a lot of compute resources just to run fairly small simulations. Our answer to this is an integrated Pinecone simulator, in which we can run large numbers of Pinecone nodes within a single multi-threaded process and examine the internal state of those nodes and the network as a whole.
17:41
Devin, who joined our team recently, has been spending quite a lot of time working on improving the simulator and adding new features to it. The web interface allows us to see how the network is forming in slow motion. We can load any shape network into the simulator with any reasonable number of nodes and view the various different topologies,
18:00
and this one is showing in slow motion how the network is arranging itself into a snake. And then we'll add an additional node to show how that also merges in. In reality, this is happening in under a second for a network of this size, but the fact that the network converges on a single line means that all of the nodes on the network
18:22
found their keyspace neighbors successfully. We can also simulate much bigger networks, and this is a totally random graph of 250 nodes with a whole bunch of peerings that are randomly set up between them, shown in the spanning tree view instead of the snake view. The spanning tree forms almost instantly,
18:41
which gives enough information to the nodes to start bootstrapping the snake and communicate with other nodes to do so. By the time we switch to the snake view, which at this point fairly well resembles the Windows 95 screensaver, we can see that most of the convergence has already happened, but a few more keyspace neighbor relationships
19:02
are still being built. Being able to visualize what the network is doing in this way is actually really valuable. It gives us a lot of insight into how the network converges and responds to certain conditions. There's still quite a lot for us to do with the simulator here, but our goal is to be able to use it
19:20
to research and design mitigations against classic Eclipse and Sybil style attacks, as we'll be able to simulate how the network responds to those kinds of malicious actors. We also plan to add mechanisms for detecting packet loss, particularly of protocol level traffic, in the hope that we'll be able to route around nodes that deliberately cause disruption and to make the bootstrap process more robust overall.
19:46
Our source code for Pinecone and the simulator is available on GitHub in the Pinecone repository linked here, and there is also a wiki on the same repository which contains quite a lot of information about how the protocol itself works. It does fall short of being a complete implementation guide at the moment,
20:02
but it does give a pretty good overview and rationale of how the spanning tree and the snake are set up and how they function. I've also spent quite a bit of time cleaning up and commenting the router code as well, so hopefully that should be reasonably readable too. I've also referred to the MeshnetLab multiple times, and that's linked here too.
20:20
It's a really useful tool for evaluating how a routing protocol converges and scales, and for plotting those results against other existing routing schemes. And finally, there is the peer-to-peer matrix room, where you can come and talk to us, and also to find the links to the latest peer-to-peer demos. That's all from me today.
20:41
Thank you so much for listening.
21:01
All right. I don't know if people can hear us already. It looks like there is a wobbly 20, 25 seconds that we don't know if we are live or not. So in the meantime, I'm not going to ask a significant question, but you can see me dancing, so you are not bored. And it looks like we are live, at least.
21:21
So the first question, of course, Neil. When is P2P going to be live? We want P2P matrix right now, please. So I need to stress the point that P2P matrix is still very early in development at this point. Working on Pinecone is one of the ways that we're solving connectivity issues in disjoint networks,
21:42
ones that aren't necessarily reliable, or ones that aren't necessarily internet connected. But there's still a lot of things that we need to solve before we'll be able to really put the stamp on P2P matrix and say, yeah, this is done, this is usable. All right, so we don't have P2P matrix ready yet. And when we say P2P matrix, most of the time people have in mind the cool demos
22:03
that you have already shown on the iPhones, for example, or on Android, so mobile P2P. But we do have an interesting question. Is it possible to get P2P matrix for the home servers, for existing home servers? And does it make sense? Yes, so this is something that's possible.
22:22
When we talk about the iOS and the Android demos, what they're actually running is an embedded Dendrite home server built into them. And effectively, all that happens there is that your client is just talking to a home server that's running on the device. Now, there is no reason that we can't do this with static home servers that are running in data centers. And indeed, we do actually have a Dendrite profile
22:41
that can run in that configuration. So you can just run a Dendrite process, but run it in P2P mode with Pinecone. We believe that probably that will be the start of how we transition over to P2P matrix in the future, is that we start to provide an embedded Pinecone node or the necessary logic for P2P matrix
23:00
into static home servers. All right, and the next thing: what you have been showing us is just for the transport and the connectivity of the different nodes. Do we have challenges left to solve at the matrix protocol spec level, or is it all working already?
23:21
Because in the demo you show, we can see messages being sent using matrix, using a specific Element client, but it's not connected to the rest of the federation. So do we have things to solve? Yeah, I mean, there's quite a few problems that we still need to solve with how matrix federation as a whole works, because at the moment, matrix federation
23:40
is still very much designed around the precedent that servers are online most of the time. And of course, P2P matrix just takes that assumption and completely wrecks it. So for instance, if your server has been offline for a period of time, you need to catch up with missing state events, missing auth events, anything like that. That at the moment would rely on having other servers
24:01
available in the room to ask. So our main sort of challenge here is: how do we change matrix federation so that it's less reliant on other servers being online all of the time? And that's the biggest challenge. You'll notice this is a problem if you use the P2P demos for any period of time and you then chuck the app into the background
24:22
and go off and do something else and then other people are chatting away in a room and then you come back later. The way that it reconverges at the minute is still not great. So that's the biggest matrix problem, definitely. All right, so if we want that to work, we need to abolish channels, we need to have network coverage all the time and we need to have very large batteries
24:41
so the phone don't die. Yeah, and the idea is we need to avoid that. We need to try and do this in a way that has the least impact on people's batteries and cellular connectivity. Definitely. We had a question from Ravel Lebr who asked, what happens if a malicious actors generates a key
25:02
that is highest but refused to properly participate in the network? That's a really interesting question, actually. And the thing is that refusing to participate properly in the network is a vague thing. There's a lot of different ways in which you could choose not to participate. The main one being that you could just take on the root node role by having the highest key
25:22
and then just not send out root updates or send out updates which are just obviously not correct. Like we were saying in the presentation about sequence numbers not being correct or anything like that. What typically happens in that case is that other nodes on the network will just ignore those updates and eventually they will time out on that root node specifically
25:43
and then the network will rearrange around something else. Another big failure mode and one of the ones that we're trying to work on particularly when it comes to adversarial nodes is what happens if nodes stay online but just choose to selectively drop traffic, drop protocol messages, drop whatever.
26:00
And that's something that we're having to spend quite a bit of time and effort on, how we fix that problem basically, because we need nodes to try and identify when a malicious actor is dropping protocol messages or trying to do something obviously bad, and route around it if possible. And that's going to be one of the big areas that we're looking at
26:20
especially now that we have the Pinecone simulator up to a standard that we can actually run these kinds of scenarios without having to go out into the real world with hundreds of thousands of devices. Okay, so it's quite a complex network and there is a question which, so the answer I guess is it depends on many parameters
26:40
but I'm still going to ask it anyway. What do the latency and maximum bandwidth in a snake world look like? That really depends on the connectivity on the path that you're taking. So if you have a bunch of devices that are strung together on very fast connections and you're following the path
27:00
that uses very fast connections, good bandwidth, low latency, you will get very good bandwidth and very low latency. If you're following paths that are definitely slower or much worse quality then obviously that will be what happens to traffic following that path. And that's another area that we need to look at as well as if we can specifically avoid
27:20
taking very slow paths if we can manage it. All right, we had a question from Timo which was very interesting. Yes: the examples for physical topologies look like each node only has a few edges, but on the internet most nodes are connected to each other.
27:41
I'm going to interpret this question as if you were sort of talking about the matrix federation that works today, which is that the matrix federation is pretty much full mesh. And that means if you're federating in a room you are talking to all of those servers at once and you can globally route to all of those servers. In a Pinecone network
28:01
obviously that's not necessarily true. A node can have one peering, it could have 17 peerings, it could have hundreds of peerings, and what we need to be able to do is to route traffic based on just whatever connectivity we have available, so that if you decide you need to talk to some other server with whatever public key
28:22
you just use the paths that are available. So I'm not really sure that there's a straightforward comparison to the internet there. Right, you showed a graph at some point comparing Pinecone to Batman Advanced and Babel.
28:42
Do you know if people who are working on this would be interested to contribute to Pinecone, and if they are interested in using Pinecone? Admittedly I have never spoken to any of the people involved in developing these routing algorithms. I would love to think that the work that we're doing around Pinecone
29:00
and also around the Yggdrasil network, because that's a very big player in this space. I'd like to think that that work will eventually spread outwards. All right, and we have just a few seconds to say that we could not ask the question about why this nickname, but there is a question and answer room which is going to open shortly, so you can ask all your questions to Neil.
29:23
See you in the next talk!