Systems Thinking for Participation and Security
Formal Metadata
Title | Systems Thinking for Participation and Security
Part Number | 117
Number of Parts | 188
License | CC Attribution - ShareAlike 3.0 Germany: You are free to use, adapt and copy, distribute and transmit the work or content in adapted or unchanged form for any legal purpose as long as the work is attributed to the author in the manner specified by the author or licensor and the work or content is shared also in adapted form only under the conditions of this license.
Identifiers | 10.5446/20598 (DOI)
Language | English
Transcript: English (auto-generated)
00:19
So I spend a lot of time hanging out
00:24
with both the security community and the design community. And they don't always get along very well. They both seem to think that they know how the world works. And in part, this is a little bit of a plea for these communities to get to know each other better. But I think that there's actually a lot of really interesting stuff that can come out of the space between them.
00:40
So let's dive into it. A lot of people have this really unfortunate belief that security is about computers. Security actually has very little to do with computers. Most of the time, what we actually care about when we care about security isn't anything to do with what the computer is actually
01:03
doing. It's something that's happening in the world. We don't really care what code is running on our machines. If you did, you'd all be rejecting the closed firmware and all the DRM code that's running on your devices, because you have no idea what it's doing; all of this code that you don't actually trust, you'd be really worried about it, but you're not.
01:22
OK, like the two of you in the back, sit down. Security is a set of activities that reduce the likelihood of a set of adversaries successfully frustrating the goals of some set of users. These are the goals that they have in the world: not what code is running on their laptop, but whether they can actually get their job done.
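As a minimal sketch, assuming a toy Python model rather than anything from the talk, you can make that definition concrete: security is a question asked about users, goals, adversaries, and conditions, not a property of the code alone.

```python
# A toy model of the definition above; all names here are hypothetical.
from dataclasses import dataclass, field

@dataclass
class Adversary:
    name: str
    tactics: list[str] = field(default_factory=list)

@dataclass
class SecurityQuestion:
    """Security is relative: secure for whom, against whom,
    under what conditions."""
    user: str
    goal: str                      # something to get done in the world
    adversaries: list[Adversary]
    conditions: str

    def framed(self) -> str:
        who = ", ".join(a.name for a in self.adversaries)
        return (f"Can {self.user} still {self.goal} "
                f"despite {who}, {self.conditions}?")

q = SecurityQuestion(
    user="a journalist",
    goal="publish a story without exposing a source",
    adversaries=[Adversary("state surveillance", ["traffic analysis"])],
    conditions="while travelling on untrusted networks",
)
print(q.framed())  # the question is about the world, not the machine
```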
01:43
What we're actually talking about when we talk about security is efficacy. We're not talking about is the system secure. We're talking about can the person get the thing done that they thought that they were trying to get done because that's what everyone actually cares about. Now, there's a really interesting and kind
02:03
of similar, in fact very similar, set of misconceptions about participation and what it means to participate in the process of using a system. One of the problems that we have right now is that in many contexts people think that spending a lot of time using a system
02:23
means that there's a lot of participation going on, that it's a good system. How many people here have spent a lot of time filling out your tax forms? Is that a good participatory experience? Yeah. One of the other things that people tend to assume sometimes
02:41
is that when you engage with a lot of people and you interact with a lot of people, that means that you are having a positive experience on a platform. That's great when they're being nice, when you wanted to engage with them, when you had any desire to talk to any of these people ever, and you're not just trying to get away from all of them, oh my God.
03:01
There are no non-participatory systems. This is one of the other things people get wrong when they think, oh, well, it's not a social media platform, so clearly it's not participatory. That just means that all of the participation which is happening, which is to say, all of the places where human beings are interacting with these systems in their lives, and what those interactions are,
03:22
you're just not designing any of those. Those are just happening somewhere off in the world and you have no idea what the participation structure of your system is. One of the other things here, ads, if they're effective, are always poisonous to participation because no one goes to a site with the possible exception of, I don't know,
03:41
like a site that is showing advertising awards. No one goes to a site wanting to participate with an ad. No one ever wants to engage with a brand. So you can tell yourself that that's what your site is doing, but you're just lying to everyone then. And this has really interesting consequences. So one of the things that many people think
04:02
is that they're building systems that people are going to want to use. That's, again, never really true. People are building systems that let them accomplish things in the world that they are very interested in accomplishing. And if those systems let them accomplish things in ways that they can't otherwise accomplish, they may really enjoy using those systems.
04:21
But the goal is never to use the system. The goal is to get a thing done in the world. Even if you're designing game systems, the person's goal is to be entertained, not to sit behind the Xbox, unless they're like a video game reviewer. And again, people don't really care about what they're doing in your system.
04:41
They care about what they're doing in the world. Most of the time, developers talk about the people who use their system as "the users". But that puts the developer in the center of the picture. The developer isn't actually in the center of any picture other than their own. They're not your users.
05:02
You're their toolsmiths, sitting off in the corner building something that they don't really care about that much. If you build tools properly, you make both better users and better communities of people. And the way that you need to do that
05:20
ties in very directly with the things that you need to do to build secure systems in certain ways. So, no one is interested in abstract systems. Again, there's two mathematicians in the room who are very interested in abstract systems. No one else is interested in abstract systems. We're really interested in systems in the context that they exist in the world,
05:42
because that's where all of the human meaning starts coming in. One of the really interesting things about big, complex systems, like most of the ones that we're building, is that none of the properties in those systems can actually be designed in a meaningful way. They all just sort of show up out of whatever set of things that we happen to build, some of which we intended to build,
06:01
some of which we built and then totally forgot about, and you get developers going, wait, they're using that feature for what? But that was a debug thing that we never meant to turn on. Oh, and now it's our biggest feature. Okay. You know, this is way more common than anyone would ever hope. Figuring out what is important in a system is very hard,
06:21
and it's especially hard if you're not looking at the things that that system is doing in the world. It is very useful when you're trying to analyse a system to dig in and be like, okay, what does this one piece of this system do? I'm going to dive really deeply into what this single piece of code does, and then you realise that that doesn't tell you anything
06:43
about what the system as a whole does, and that you always have to jump back out and figure out, oh, well, this thing means this here and that thing means that there, but the whole system means something else. Boundaries are useful, very useful, but if you don't understand what your boundaries are doing
07:01
and the thinking and the work that they're doing for you, you are going to fail to understand the whole system. This is very clear when we look at security systems. I think that we are starting to see the ways that this is clear in participation systems, but it's less obvious. The most critical skill is basically learning how to reason
07:24
across structural boundaries. Now, if you've got a bunch of code over here and a bunch of code over there and you're looking at it from the perspective of code, it's pretty easy to chain across these things, and any programmer, or the equivalent for any designer, ends up learning how to do those kinds of leaps across system boundaries.
07:43
But it's very difficult, it turns out, to go from code to design or from design to legal, and that's where the bugs start getting interesting. That's where the cracks get interesting. So, a lot of people, and these are mostly people in the security community,
08:00
are like, oh, security is so special, security is so different. You know, there's a reason why we have to be these cowboys who, you know, do whatever. Security is just another property of a system, like performance or reliability. Any property that emerges from a whole system has fairly similar characteristics.
08:20
You can only evaluate them in a given context and from a given standpoint. You know, secure for whom? Secure with respect to what? Secure under what conditions? Sufficiently fast for whom? Sufficiently fast under what conditions? And this is all always just a means to an end.
08:43
No one cares about performance. We care about, can I get my work done? No one cares about security. We care about, can I get my work done? One of the other things that we find, and this is true, again, of any whole-system property, is that they're incredibly expensive to retrofit. One of the slides I don't have in here,
09:00
there's this great curve of how much it costs to fix a bug depending on when it gets caught. And depending on whose data you look at, fixing a bug once it's out in production costs somewhere between 30 and 100, or even 1,000, times as much as fixing it immediately when it's introduced. It's very difficult to retrofit whole-system emergent properties
09:24
into a system after the fact. Now, with participation, it's also kind of interesting. You have this really complex double bind, because you also can't easily understand what the participation you're designing into a system will mean
09:41
until it's out in the world. We'll get to that later. So one of the things that people in security often really want to say is, like, well, okay, we've got our system, and then there are these adversaries. You know, the bad guys over there in the glasses and the hoodies, they're not part of the system.
10:01
But of course they're part of the system. They're interacting with the system. They're users of your system. They're not users you want. They're not the users you'd like to have interacting with your system. But they are users who are interacting with your system. And if you're going to understand what you do and don't want them to do, then you need to include them in the system.
10:20
Abuse has historically fit into systems in very odd ways, and into security in very odd ways, because it doesn't really fit anywhere. It's not a traditional security problem. It's just, like, a bunch of people being dicks. But that's not a security problem, even though it is a security problem for the users, but, like, there's no code getting run,
10:40
and people kind of think themselves in circles. But as soon as you start thinking about, well, what are the users trying to do on our system, not what code is being run, but what are they actually trying to do in the world, and how can we figure out, you know, how to improve that experience, you get a very different view of things.
11:00
There is no hard line between participation problems and security problems. They're not in different domains. They're really the same thing. And a note on the politics side, anything that your system does in the world is, you know, potentially a security problem for your users.
11:20
If you are Airbnb and the work that you do in the world is making it impossible to afford an apartment in New York or Amsterdam, then, hey, you've created a giant security problem for your users, one that you totally didn't think was part of your system, but it turns out it is.
11:41
Security is a performance, and this is also true of participation. You can't just set up a system and claim that it's secure, right? You have to continually work to make it secure. For a lot of other kind of emergent systems properties like performance, like reliability, we've automated out a lot of those issues, right?
12:01
No one spends very much time now thinking about, oh, well, what do I have to do to keep this computer running quickly, unless you're a sysadmin professionally. But it used to be that you'd spend a bunch of time cleaning up disks and all this kind of stuff that you had to do
12:20
or you were going to have these problems. We've eventually managed to automate that stuff. This can't be done as easily for security for a bunch of interesting reasons that we'll also get to later. But part of it, and the most fundamental thing of it, is security is always transient, right? You're secure in a given moment. You are secure as long as you continue
12:42
performing the structures that keep you secure because security is part of participation. Specifically, security is the negative space around participation, right? Every system that has humans using it is fundamentally participatory. Everything that you want,
13:00
all of the actions that you're designing to enable, those are the participation actions. All of the actions that you're not designing to enable, those are the security interactions. And you can't really consider one without the other, but because we had this problem where we thought that security was about computers for so long, we sort of forgot to look at this stuff.
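As a minimal sketch of that negative-space idea, with entirely hypothetical action names, you can read it as a default-deny rule: whatever you did not design is security's territory.

```python
# The designed actions are the participation surface; everything else
# falls into the negative space by default. Hypothetical names only.
DESIGNED_ACTIONS = {"post_message", "edit_own_profile", "flag_abuse"}

def handle(actor: str, action: str) -> str:
    if action in DESIGNED_ACTIONS:
        # Participation: an interaction you intended to enable.
        return f"{actor}: '{action}' goes through the designed flow"
    # Security: an interaction you never designed, but which still happens.
    return f"{actor}: '{action}' is undesigned; log it, deny it, review it"

print(handle("alice", "post_message"))
print(handle("crawler-7", "scrape_all_profiles"))  # nobody designed this one
```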
13:22
One of the places this comes out is the notion of engagement, right? So engagement is how we measure participation, obviously, right? If you're Facebook, you're trying to optimise for engagement. How many people here have read the book Seeing Like a State? Anyone? Anyone? Okay, a couple. I highly recommend it.
13:41
It's one of my favourite books for understanding what's wrong with the 21st century. If you don't know why people care about your system and people like your system, it will fail. There are surprisingly few companies that make social systems that really understand why people like their system.
14:01
How many people here use Twitter? How many people here think that Twitter corporate knows why Twitter users like Twitter? Yeah. And it's really obvious. We all get, we all know very well from our interactions with the system that they don't understand why we care about this thing.
14:22
They don't understand what the real utility is for people. And part of that comes from things like this notion of engagement. If you, because of monetary reasons, have to tell yourself a different story about why your users like your system other than, you know, why they actually like it,
14:40
your system will eventually fail, right? Your users are all going to realise that, like, oh, you're designing for the money, you're not designing for us, and as soon as something which is less horrific comes along, they will go and jump over there. One of the really interesting things about emergence and especially about emergence when it relates to these kind of, you know, closed-loop systems,
15:03
is that you end up getting whatever you measure, right? And just the fact that you're measuring a certain thing will shape the system in certain ways, right? You end up doing an experiment and being like, oh, well, this is good, so I'm going to get more of this, but, you know, you don't really know the thing
15:20
that you should have been optimising for. And this is just as true for security as it is for anything else. So let's say that, despite all of this, you would like to try and build systems, even though it's a terrible and complex problem. We need to talk about humans a little bit, and the way humans actually work, before we can understand either security or participation.
15:43
You know, no-one would ever write code for an API that they don't understand, but people seem perfectly happy to build systems for people that they don't understand and people whose lives they don't understand, which is kind of surprising. One of the things that you need to understand
16:00
before you start building a system is what do the people using the system actually care about? Again, this is not always obvious. This is often painfully non-obvious in systems that are trying to design for security because one of the things about security people is that they think that everyone should care about security.
16:20
The same is true of design people: they think everyone should care about design. So you get these systems that optimise for every little pixel placement when, really, people would just prefer that they didn't crash as much, so that they could, I don't know, interact with their bank in a meaningful way. And the same thing is true for security. We end up optimising for things that are not actually useful for people.
16:43
In theory you can just build a bunch of stuff, but it turns out there are always trade-offs, and then nobody uses your system, and then you run out of money. You need to know what people are actually trying to do, and often this is something that you will only figure out
17:01
after you've built your system and people have started using it, because you don't really understand what you're building until then. There are a bunch of things that people are very bad at, and this is especially interesting for these kinds of complex emergent properties. People are really bad at tracking
17:20
how different pieces of a system interact together. This is one of the things that programmers have to spend a bunch of time learning: oh, well, if I set this thing over here and this thing over here and this thing over here, all of a sudden the kitchen is on fire, but none of those settings looked dangerous on its own, yeah.
17:40
So, any time you're building systems that involve these kind of big, complex, open-ended combinatorial things like, say, trust, chances are that your users are going to get it wrong and you need to understand where the limits of your users are and how they think, you know, and how they actually make decisions
18:00
before you're going to stop making those mistakes. Brains have limited bandwidth. This is true of any system. Brains degrade in very weird ways that don't, you know, that don't map to the way kind of hard systems degrade and people default to routines, they default to habits.
18:21
They build habits very quickly so they can stop thinking. Any time that you have something really important that you really want people to think about, they're going to do everything they can to not do that because that's not the thing that they want to think about. There are times that you can use those routines in your favour,
18:41
but you have to be very careful then about designing that process of routine acquisition. This becomes something very important, you know, when does a user first see a thing? How do they learn that process? What does this interaction look like if you want them to not just do the broken thing again and again and again?
19:00
So this is called an OODA loop. One of the things that we care about in security is specifically how do people plan in the presence of adversaries because they're not just planning I'm going to the store. I mean, maybe that's in the presence of adversaries. I don't know what your neighbourhood is like, but they're planning, and there's someone else planning how to possibly harm them.
19:21
So this originally came out of the US Air Force: observe, orient, decide, act, right? Any equivalent way of modelling how users think is going to go through a similar set of steps, with a similar set of names. It doesn't really matter what they are. You know, understand your position in the world,
19:40
orient yourself within that possibility space, decide on a course of action, do it, watch what happens. If you want to add security features to a system or change the participation space of a system, you are adding noise in here, right? You are making it more complicated for your users
20:01
to orient themselves within the system because now they have all of these extra shiny things that they have to touch. You are making it harder for them to decide on what the thing that they should be doing is. You're slowing them down. They're trying to do a bunch of other stuff which has nothing to do with the thing that you're trying to add in there.
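As a minimal sketch, with hypothetical interface items, of how every addition turns into orientation noise inside that loop:

```python
# A toy OODA loop: each extra feature is one more thing to orient around,
# so every addition slows the loop whether or not the user needs it.
def run_ooda(task: str, interface_items: list[str]) -> tuple[str, int]:
    observed = list(interface_items)      # Observe: what is in front of me?
    friction = len(observed)              # Orient: every shiny thing costs a look
    relevant = [i for i in observed if task in i]
    choice = relevant[0] if relevant else "give up, click the default"
    return choice, friction               # Decide, act, then loop again

base = ["send payment", "view balance"]
with_new_security_step = base + ["confirm device", "enter one-time code"]
print(run_ooda("send payment", base))                    # friction 2
print(run_ooda("send payment", with_new_security_step))  # friction 4, slower loop
```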
20:21
They're trying to go on about their daily lives, accomplish whatever task they started using the system for. So you need to think very carefully about, you know, is this complication, is this tool worth it? Stress is terrible for people thinking coherently. I've spent a lot of time working with high-risk users
20:41
in very complicated environments. One of the things that we find in those places is that all of the users are under a lot of stress, a lot of the time, and that makes interfaces that might be easy if they're sitting at home in a very simple environment and not worried about the safety of their children or any of these kinds of things, all of a sudden these interfaces are much more complicated.
21:02
So anything that you can do to make a system less stressful during kind of everyday use is going to pay off when it actually matters. And, again, when you're talking about either participation or security, both of your failure modes are quite stressful. So anything that you can do to reduce that stress is going to help people.
21:23
Group dynamics. This starts getting us into actual participation design, but there are a bunch of things to think about in your given context. How do groups interact? How do people work together? You know, what are the traditions? What are the norms?
21:40
Are you dealing with hierarchical power structures? Are you dealing with non-hierarchical power structures? What are you actually designing for? One of the other things that a lot of time, especially in security, and also even in participation design, people forget is that the people that we don't want to interact with the system
22:00
are also humans. You know, if you want to design against trolling, if you want to design against abuse, one of the ways that you can do that is by understanding, what's the return on investment for a troll? What are they actually getting out of this process? What are they trying to do? What is their decision structure around
22:22
when they're picking victims or picking what their interactions are going to be, and disrupting that? You know, you can design against these things as well as designing for them. So let's say you'd still like to help and you want to build things. One of the places to start is figuring out what you're going to build.
22:42
Again, this turns out to be somewhat complicated, given the number of tools that people build which aren't actually wanted by anyone. There are a bunch of fairly traditional tools which I'm not going to go into in any kind of depth here, looking at things like personas and scenarios, right? Let's look at the average set of users who are going to be using the system.
23:00
Let's get ourselves little sketched introductions of them. You know, let's look at the scenarios under which we think people might be interacting with the system, you know, task breakdowns, service touch points, scenario flows. You kind of have this traditional toolkit for design which, you know, captures most of what you want, but there are a few things for both participation
23:21
and security that it really doesn't capture. So personas are great as far as they go, but one of the things that they do not do is capture interactions. They capture individuals, they capture some of the tendencies of individuals, but not the dynamics of how groups of people interact.
23:41
So, for instance, one of the examples I use with this stuff is you've got a corporate intranet, right? You've got a bunch of different people who are all just normal users of the corporate intranet. There might be one or two personas at most between them. They all have the same roles and the same privileges and permissions on the site, but some of them are going to be the people
24:01
who start new pages, right? Who are just like, oh, that's a great idea. I'm going to go write this up, and some are going to be the people who are never going to start a new page but are going to look at a page that somebody else has started and kind of add a few things, and, you know, some of them are going to be the people who will go through the whole site and kind of just clean up a bunch of stuff but don't really ever add content.
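As a minimal sketch, with hypothetical names, of what those three behaviours look like if you record the interactions themselves rather than persona attributes, which is where this is going:

```python
# All three people share one persona and one permission set; the
# starter/contributor/gardener distinction only exists in the edges.
from collections import Counter

persona = {"role": "employee", "permissions": {"read", "write"}}  # same for all

interactions = [                      # edges: (person, action, page)
    ("maya", "create", "project-kickoff"),
    ("jon",  "append", "project-kickoff"),
    ("sam",  "tidy",   "project-kickoff"),
    ("maya", "create", "retro-notes"),
    ("sam",  "tidy",   "retro-notes"),
]

# The design-relevant signal appears only when you aggregate the relation:
styles = Counter((person, action) for person, action, _ in interactions)
print(styles)  # ('maya', 'create'): 2 -> a page-starter, and so on
```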
24:21
Now, those are all the same persona. Those are all the same role, but they have very different affordances and very different sets of things that you need to design for. Any time you're trying to understand a participatory relationship, you want to keep the data in the relation,
24:42
not in the end points. This is one of the reasons why I think that personas are slightly missing the mark. They encourage us to think about these users individually. They don't encourage us to think about the relationships by the fact that they store this information separately. One of the things that we care about
25:00
when we're looking at creating participation frames, creating structures for interaction, is making sure that people have the right kinds of alibis, the right kinds of things that enable them to do these interactions, and those alibis are always relational. Those alibis always exist between people. Similarly, what are the designable surfaces
25:21
on all of these interactions? Those don't exist, you know, in isolation. They exist when those people start coming into contact. When we're talking about security, now we need to think about, you know, we need to go jump back to OODA loops. We need to talk about what are the ways in which someone might come to harm in this system?
25:42
Where are the negative security outcomes? So this is where having something like a scenario flow where you've got, you know, a set of tasks the user might be performing, the context that they'll be performing the tasks in, and, you know, what the kind of steps are, or even in rough sketch form in a fairly early design phase,
26:01
this gives you something where you can start saying, okay, well, I can see a spot for an adversary there, and I can see a spot for an adversary there. And this is where you have to be working with your users. This all has to be co-designed, because you don't necessarily understand, you know, what those adversaries actually look like, what people are worried about. In some contexts, you're going to have better visibility
26:22
than your users are, but in a lot of contexts, you're going to have worse, or you're going to need to get teams from very different parts of the system to come together. All security at this kind of design level is cross-domain. One of the things that security people like to spend a lot of time thinking about is, oh, well, this is a physical security problem,
26:41
and this is a network security problem, and this is a digital security problem. There's really no such thing at this level, right? All security touches all of the different domains. All of your scenarios probably touch all the different domains, which means you need people from all of those different domains working together. So one of the things which I like to use
27:01
to talk about the tools that you might give to a user is the invariant, right? An invariant is something that you are going to ensure doesn't change in the context of the system that you're building. Confidentiality is an invariant that people talk about a lot, right?
27:22
The system will make sure that no one other than this set of authorized people can get access to this piece of information under any conditions. Now, the user probably has to do certain things to maintain that invariant, which is very useful to notice, because you're now adding tasks to what the user has to do.
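As a minimal sketch, assuming a toy model rather than any real API: the invariant is a predicate over everything the system has done, and the user-side steps are the user's share of keeping it true.

```python
# Invariant: confidentiality, checked over a hypothetical disclosure log.
AUTHORIZED = {"secret-report": {"maya", "jon"}}

def confidentiality_holds(disclosures: list[tuple[str, str]]) -> bool:
    """True only if no document was ever shown outside its authorized set."""
    return all(person in AUTHORIZED.get(doc, set())
               for doc, person in disclosures)

# The user-side steps that maintain the invariant. Every entry here is
# a task you have just added to someone's day.
CEREMONY = [
    "verify the recipient's key fingerprint",
    "encrypt before the document leaves the device",
    "never forward outside the authorized set",
]

log = [("secret-report", "maya"), ("secret-report", "eve")]
print(confidentiality_holds(log))  # False: the invariant has already been broken
```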
27:41
There are a lot of different invariants. Deniability can be an invariant, although it's a complicated one for a lot of reasons. You know, non-repudiability, being able to ensure that any time a user has taken an action, they're always going to have to stand by that action. If you're designing a financial system,
28:01
this is something you probably care about a lot. Invariants are things that you can deploy at will, and there's a lot of complexity there. It's one of the big things that you can enable your users to shape their experience with. Your users are also one of your biggest assets
28:20
when you're trying to build secure systems or systems that have interesting participation structures. Your users are smart. This is something that, again, especially security people are often not great at noticing. Your users are very good at certain kinds of tasks. Earlier, we talked about stuff that users are bad at.
28:42
If, for instance, you want something in the system to understand a pattern and to notice when that pattern changes, like a pattern of behavior, your users are going to be better at that than any chunk of code you can write, for the most part, if it's a pattern of something that actually affects their life and those kinds of daily rhythms.
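As a minimal sketch, with hypothetical login data: the code only notices that a routine changed, and the person, who actually lives that routine, decides whether it matters.

```python
# Detect a deviation from the observed routine, then ask the user
# instead of deciding automatically; they know their own life better
# than any model we could write here.
def deviates(history: list[str], event: str) -> bool:
    return event not in set(history)   # nothing smarter than "this is new"

login_history = ["laptop/berlin", "laptop/berlin", "phone/berlin"]
new_event = "laptop/lisbon"

if deviates(login_history, new_event):
    print(f"We saw a login from {new_event}. Was that you?")
```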
29:02
Ceremonies is a term of art from security: the steps that someone has to take to maintain an invariant. There are ceremonies for participation, too, and a lot of the time they get buried. Sometimes we call parts of them business rules.
29:21
Sometimes we call parts of them UI patterns. It's very rare that you have a team that actually has a catalogue of all of the different ceremonies that their system interacts with and what those ceremonies are intended to do. One of the things that you would like your system to do, if at all possible, is to leave your users smarter
29:43
than when they started using it. You have an opportunity to interact with your users, often fairly deeply. You have an opportunity to teach your users things. Your users are never going to come to any system that you build with a full understanding
30:01
of how to do everything that that system can enable them to do, and if they do, then you're probably designing a really boring system and you should maybe try and build a system that lets them do something interesting. So you're going to have to teach your users, right? And because you're going to have to teach them, you should probably think about how you're going to structure that teaching
30:22
and how it interacts with the security properties and the participation properties. You know, you need to make certain kinds of things very legible to certain kinds of users who might be good fits for certain kinds of activities on the site. You know, for instance, one of the things that happens in some contexts, it's a big problem in games.
30:43
You'll get someone who joins the system, you know, joins the game, is still trying to figure out their way around and what does what, and then they immediately just get piled on by a bunch of players who are like, great, fresh meat, let's take a bunch of resources, et cetera. That is a failure to manage the learning process. Now, in games, people are actually a bit better
31:02
about thinking about this, because they do think about play, they think about performance, they think about those kinds of narratives more explicitly. How many people here use Photoshop, or have used Photoshop? How many of you remember the first time you opened Photoshop and it was basically incomprehensible?
31:21
Or better yet, the GIMP? You know, that is a tool which does not have a very good learning journey. You know, it's designed for experts, and it's expected that, hey, this is kind of the benchmark tool for this industry, you're going to spend the time learning, you're probably going to engage with some structured tutorial content, whatever. There are other ways
31:40
that that learning process is managed, but if you don't have any of that, it's still a terrible experience. When your failures, your kind of floundering and clicking around in a system, have very serious real-world consequences for you and you can't just explore, this is a much bigger problem.
32:00
If you had the same set of problems learning to use your bank's website that you had learning to use Photoshop, and you ended up, you know, crashing it a bunch of times and that kind of thing, that would be significantly more problematic for you financially, I'm guessing. A very good secure system
32:20
teaches its users how to be more secure outside of the context of that system. It actively enables the kind of learning that stays with those users and makes them better at whatever tasks they're trying to do in the world, without regard to just that system. Similarly, a system that's really good
32:41
at creating solid participation dynamics builds better communities. And this is true whether or not you're building a social network. Again, every tool is participatory. Every tool has these structures. If you build it correctly, you end up leaving a bunch of extra value on the table. There's something called Metcalfe's Law,
33:01
which is that the value of a system is proportional to the square of the number of nodes in the system (V ∝ n²). Metcalfe was the guy who invented Ethernet, and this is kind of what he came up with as he was realizing that, hey, this Internet thing, it's gonna be worth some money someday. Now, one of the problems, and this is a problem that we're seeing right now,
33:21
is it's not actually the square of the number of nodes. It's more like the square of the upside value that the system leaves on the table for each user, because if you have a system that tries to capture all of the value, right, if you have a system that says, well, we're gonna build communities, but not communities that you can take anywhere else,
33:40
they're not gonna be worth anything outside of this context. That's not actually a very valuable system, or very useful for anyone. So, once you've understood
34:03
kind of what you want people to be doing, like, what are your goals for interaction, what are your goals for security, right? If this process flow happens and an adversary tries to step in here, this is the set of things that the system should help happen, right? This is the set of interaction roles
34:22
that we think are gonna happen. These are the places where we think that there are gonna be problems. This is how we're dealing with those problems. This is how we're steering people towards interactions that are gonna build better communities. This is the thing that you need, right? In the design practice, other than fundamentally deciding what the system is going to be,
34:43
this is the thing that you actually really care about, because then this is what tells you, you know, all of the detail of how you build the rest of the system. And one of the things that we find again and again is that capturing this detail, capturing the intuition for what are the interactions that we want, what are the interactions that we don't want,
35:01
how do we shape those, and all of the kind of fine details of that shaping is one of the hardest things, especially when you're looking at large design teams, large products. It's much easier to do this stuff iteratively, right? If you have to build a bunch of stuff
35:22
and then throw it out to the world and actually release it for real, you're gonna have a much harder time than if you can basically be playtesting, right? Playtesting is what people do in the games world, and I think it's what people should do in the security world as well, right? If you have, oh, this set of invariants and this interaction with this kind of adversary
35:40
we think will produce this sort of result, you know, make a game of that. You know, make some kind of way of testing it before you've written any code, before you've committed to a bunch of requirements docs. Get that stuff in there early. You can't do this without people who look like your users. If no one on your design team looks like your users,
36:00
you have a problem. Don't be Apple, designing giant phones for a market that's half female and then wondering why people are annoyed. The more distance you have between the set of users and the set of people building the system,
36:21
the more work you have to do to ensure that intent crosses that gap. And specifically, the places where you are likely to run into real problems without intent transfer are in this area: what is the set of adversaries? What is the set of resources that people have to draw on? What are the tactics that are actually going to work
36:41
for different categories of users? There's really no way to figure that kind of thing out without testing. So once you have an understanding of what your security goals and your participation goals are, now you can do the rest of the development. Now, of course, this is not the giant front-loaded pile of work.
37:03
This is actually, you know, do a little bit of design, do a little bit of building, check it, move back and forth. You know, it's the same process as anything else. However, you obviously have to have the community in there. Especially for participation, sketchy is fine. You don't need to have everything nailed down
37:20
because you're going to get it wrong the first five times anyway, and everybody does. One of the things that I see repeatedly causing problems is a failure to capture either the original assumptions or the original community interactions. The more of those artifacts you can have to draw on when somebody in the development team, somebody in the test team, somebody in operations
37:41
is like, why are we doing it this way? Oh, we're doing it this way because of something that chains all the way back up to a story from a user about how a thing works in the world. The easier you can make that process of chaining through that set of artifacts, the easier a time you're going to have. For all of these kinds of things, when we're talking about the structure of security processes,
38:03
the structure of design processes, what you care about is not the process, you care about the criteria, right? You must be at least this tall to enter the requirements phase. You must be at least this tall to enter development. You need to have certain things figured out before you start different processes. If you don't know what your design goals are,
38:22
you have no business writing requirements. So that's what I've got for you today. I'm happy to take questions, and I'm happy to jump back into stuff if people have stuff that they want to hear more details about.
38:44
So are there any questions? Please raise your hand. Oh, yeah.
39:01
Thank you for your lecture. And I'd like to know how you design the end of systems. Sorry, can you repeat the question just a little louder? How do you design the end of systems? The end? System life cycle, just the end. Oh, right, the end of systems.
39:21
So that depends really on what the system does. If you have a system that a lot of people are dependent on, you have an obligation to the people who use that system to work with them to move out of that system. So, I mean, it's actually really very rare that you have an end of a system. You have an end of a technical system,
39:40
but you don't have an end of a set of processes in the world. So there always needs to be some kind of transition plan. Now, I mean, sometimes a given company isn't in a position to manage that transition plan or something like that, but I think it's something, and we're going to see this much more as we start looking at, like, Internet of Things-type technologies,
40:04
as we start looking at more and more infrastructural tools that have software inside them, that managing those ends probably becomes a legal responsibility. How many people here saw the Google Revolv shutdown announcement?
40:22
Revolv, a company that Nest bought after Nest itself got bought by Google, made this kind of home automation hub that would, like, manage your smart furnace talking to something else and this and that, and all of your lights talking to each other, and Google just decided, you know what, it's not worth supporting these. There aren't that many in the world.
40:40
We're just going to push a firmware update that turns them off. So whatever infrastructure you had built on top of that, it's now just broken, and you have no recourse, and you have no way of blocking it from updating or doing anything like that. And probably in a few years we'll see this even in cases like this one, but if that system, say,
41:01
was managing something that was life critical for people, that would be criminal negligence on Google's part. You know, you don't get to just turn stuff off. So some kind of transition plan, if it's not profitable for one company, there probably needs to be some kind of handoff, and really this is a thing that as a society we are going to have to make decisions about.
41:21
We see things, for instance, in the medical world, where it's like, well, yeah, your implant is end of life. You know, and so we don't have any hardware that can talk to this medical device anymore. Or we do have hardware: there are these three machines in the world. And if any of them break, there are no spares. And the company that made them is bankrupt, and good luck.
41:45
So, yeah, it's a challenge that we're not very good at right now. So any more questions? Please raise your hand. Thank you. Thank you for a wonderful talk.
42:01
My mind has been quite blown. What I don't quite understand in the way you describe the model is how do you account for uncertainty? If you are not sure you understand the system deeply enough to know where your blind spots are, how do you approach resilience against things you can't plan for yet?
42:25
I mean, generally what you do is you try something, and then you fail in public, and hopefully no one gets killed, and then you try something else. With all of this stuff, one of the fundamental things, and this got really drummed into me in the high-risk world, the place that you have to start when you are doing especially security work
42:45
and also even in participation work, is that you are going to fail, right? Some people are going to be bored by your system and are going to leave. Some people are going to get compromised or are going to get hurt. This is unavoidable, because we can't build perfect systems on the first try. We probably can't build perfect systems ever, in many cases.
43:02
But you do what you can, right? And I think that this is one of the things where having users involved makes a big difference, because they are going to give you a much better read on, well, yeah, that tactic might work or it might not work. You know, for instance, the deniability that I mentioned earlier is a property
43:22
that some of the designers of secure messaging systems are really fond of, and when they're asked why, they point to times when they've talked to lawyers, and defence lawyers have said, well, yeah, sure, claiming that my client can cryptographically deny a message might be worth trying in court. But what they don't realise is that that's not, oh, yeah, this will work,
43:43
that's, well, yeah, sure, I'll try anything. I mean, whatever, it might work, so I might get lucky at some point. And so capturing that understanding of, is this likely to work, is this unlikely to work, is this reasonable, et cetera, is a core part of that kind of elicitation of the adversarial dynamics.
44:06
Okay, if no more hands are raised, no, I think that's it. So thank you very much.