
Hacking Your Thoughts: Batman Forever meets Black Mirror


Formal Metadata

Title
Hacking Your Thoughts: Batman Forever meets Black Mirror
Number of Parts
335
Author
Katherine Pratt
License
CC Attribution 3.0 Unported:
You are free to use, adapt and copy, distribute and transmit the work or content in adapted or unchanged form for any legal purpose as long as the work is attributed to the author in the manner specified by the author or licensor.
Language
English

Content Metadata

Abstract
Companies are coming for your brains. The electricity in your brains, to be more precise. Valve, Facebook, Elon Musk and more are funding research into technologies that will translate neural signals into controls for devices like computers, smartphones, and VR/AR environments. While this would be super exciting, it raises serious data privacy issues. First: what kind of private information can be elicited from your neural signals? It is possible to use a specific kind of neural response to visual and audio stimuli to deduce information about the user, like where you bank, who you know, your real identity, etc. (Edward Nygma in Batman Forever, anyone?) More broadly, there is also the issue of what happens when you provide your neural signals to a company. If you're worried about what Facebook is doing with your information now, imagine what they can do when they have hours of information straight from your brain. If neural data is treated the same as your DNA, commercial companies become the owners of your thoughts (as electrical signals). Will they readily share them with the FBI without probable cause? These kinds of questions, and many more, are starting to surface with neurally controlled devices and other emerging technologies. This talk will cover all of this and more.

Katherine Pratt / GattaKat

Dr. Katherine Pratt received her B.S. in aerospace engineering from MIT in 2008 and her PhD in Electrical and Computer Engineering (ECE) from the University of Washington (UW) in 2019. During undergrad she completed several internships with the private space venture Blue Origin, working in systems and propulsion engineering. She served four years in the United States Air Force, working primarily as an operational flight test engineer on the F-35 Joint Strike Fighter. Her doctoral dissertation focused on the privacy, ethics, and policy of information derived from elicited neural signals. She was the recipient of a National Science Foundation Graduate Research Fellowship and the 2018-19 UW ECE Irene Peden Endowed Fellowship. During graduate school she interned with the ACLU of Washington through the Speech, Privacy, and Technology Project. She also completed a six-month fellowship as the first Congressional Innovation Scholar through TechCongress, where she crafted technology policy and legislation in the office of a member of the House of Representatives.
Transcript: English (auto-generated)
Hello everyone. Thank you for coming at the crack of 11 o'clock in the morning. Okay, how do I full-screen? Why do I have a thing here? Is that what I want? No. This is terribly embarrassing. There we go. Cool, okay. So, after that little embarrassment: my name is Katherine, and I am here to talk today about Hacking Your Thoughts: Batman Forever meets Black Mirror. The standard disclaimer: the work in this presentation was done while I was at the University of Washington as part of my PhD dissertation. The results and views presented here do not necessarily represent those of my funding sources or my current employer.

There has been a lot of hype in the media recently surrounding brain-computer interfaces, putting things into your brain and getting signals out, so I want to take a little time to separate the hype from the reality and let you know what really is possible. I'm going to talk about some of the results from the experiments I did, and, as part of that, about ethics and policy research. Even though my dissertation was in electrical and computer engineering, I also ran a neuroethics survey and spent some time looking at the policy proposals we can come up with for emerging technologies like brain-computer interfaces.

Real quick, things that are not covered by this presentation: I am really sorry, there are no aliens involved in this at all. I am also really sorry that I know nothing about any chips the government may have implanted in your brain; I use non-invasive equipment, so that is outside the purview of my research.

Before we get started, let's baseline the definition of a brain-computer interface. This is the definition I used in the neuroethics survey that we'll get to in the second half of the presentation. I defined a BCI as something that can record brain activity while an individual is performing different actions, for example blinking their eyes, playing a video game, or texting on a phone. BCIs are often used to give the user control of a computer using their brain activity: anything from playing a video game to controlling a prosthetic arm or a wheelchair. I'm also going to be talking about
targeted elicitation, with emphasis on both targeted and elicitation: we show specific stimuli in order to obtain a particular response. This is not, writ large, taking everything from your mind.

So, right off the bat, what do you think of when you hear "brain hacking"? For some of us of a certain age, you might go immediately to my favorite movie from 1995, Batman Forever. I forget which streaming services it's on, but you should totally go check it out, probably while drinking, if that is your thing. For those who haven't seen the movie, one of the plots is that Edward Nygma, aka The Riddler, wants to find out who Batman really is. So he creates a device that basically sucks up the brain waves of everyone in Gotham and then figures out who has bats on the brain, and that is obviously Batman. At the end of the movie, during the showdown, Batman says, "You've sucked Gotham's brain waves and devised a way to read men's minds." And The Riddler replies, "You betcha. Soon my little box will be on countless TVs around the world, feeding me credit card numbers, bank codes, sexual fantasies and little white lies." So already in 1995 we had this concept of taking information that could be useful either for stealing all of someone's money or for blackmailing them.

For those of you who have not seen this lovely movie, you may be familiar with Black Mirror; this is the episode "Crocodile". Sorry, spoilers, but it's been out for a while, so I hope you've already seen it. Basically, in this futuristic Iceland, it is required by law that you give up your neural information as part of investigations. This woman saw a man get hit by an autonomous pizza delivery truck, and the insurance agent is compelling her by law to give up that information. Unfortunately for the insurance agent, the woman also killed someone in her hotel room, and you can all guess where it goes from there. So this is a future where you have to give up your brain signals, which leads to the question: do we really want to have to do that?

Based on those two examples, you might think we can pull things out immediately and, kind of like a Pensieve in Harry Potter, just look at them. But not all of that is really possible. Two things have happened recently that I want to explain. Some of you may have seen Elon Musk's presentation about Neuralink: they have this great little chip with thousands of little electrodes that they're going to put into your skull using what is essentially a little sewing machine, which, by the way, was developed by DARPA at UCSF. And Elon's goal is to have someone who is not a neurosurgeon just use a laser to drill a hole in your skull; they'll pop this in and you're good to go.
So, my problems with that. First of all, a lot of what they're talking about deals with reanimation of limbs: spinal cord injury, prosthetic arms, that type of thing. It's great if you can get the information out that says "I want to move my arm", but there is nothing coming back saying "this is where the arm is in space, this is what I'm feeling, this is how much weight I'm carrying". Without that feedback loop, it's not actually that helpful. For a lot of the augmentation they're talking about, things like memory, it's unclear how the implantation method they describe is actually going to reach those deep brain structures. Right now, if you're targeting something like the hypothalamus, you're looking at very, very long electrodes that you stick into the brain, and by the way, once you put them in, the brain decides it has a foreign invader and scars around them, which lessens their effectiveness. One of the other things Elon is talking about is using electricity to stimulate the brain, which is great except that we have a very extensive literature from deep brain stimulators showing that putting electricity in can cause side effects like profound behavioral changes, changes in sexual preferences and behavior, and compulsive gambling and spending. So if you're just going to put one of these into your brain and turn the electricity on, you may want to know that it can do a lot of really crazy things. That's an area of research that really should be explored more before you just go down to your local Radio Shack 2.0 to get one of these put in.

The next one is Facebook. This one got a little less press because they released a peer-reviewed article in Nature Communications. Two years ago they said they were going to do "typing by brain", and it was going to be 100 words a minute straight from your brain to Facebook. Right now the gold standard for typing by brain, if you're reading someone's neural signals non-invasively, so just a cap on their head, is about one word a minute; I think Stanford can do eight words a minute. Also, the study they came out with, which you're more than welcome to go online and read, used only three subjects, and it was done very invasively. So I'm going to warn you now: in a couple of slides I'll show a picture of brain surgery. I will tell you when, and feel free to close your eyes, but I'm going to show you what they actually did to get this information out. And of course, everybody knows Facebook. Do you really want Facebook to have direct access to all of your neural signals, particularly when they know what you were looking at while you were using it? Just a little thought for the back of your mind.

Okay, so the way they did the experimentation is that they used something called electrocorticography, or ECoG. This is for patients who have intractable epilepsy: the doctors don't know where the locus of the seizures is, so they bring the patient into the hospital, remove part of the skull, put electrodes on the surface of the brain, and let them sit in the hospital for two weeks having seizures so the doctors can find out where those seizures are coming from. While the patients are sitting in the hospital, they're bored out of their gourd, literally (that wasn't as funny as it was supposed to be), so researchers come in and get to do really cool experiments. This is what the grid looks like; they come in different sizes and different electrode counts depending on where you want to put them, the hemispheres, and the coverage. Okay, the gory picture is coming up next. If you do not like surgery or gore or blood, please close your eyes, and I'll let you know when you can look again. Here we go: this is what they're actually doing. This is electrocorticography, these are the subjects from the Facebook experiment, and this is kind of
what Elon is trying to do, except smaller. So just look at this and think: do you really want someone who doesn't have a medical degree doing this to your brain? A little thought to keep in mind. Okay, the gory picture is gone, you can open your eyes now.

So what is currently feasible, now that I've scared the living crap out of you? What I ended up doing is using electroencephalography, or EEG. No surgery needed, because it turns out I am not a neurosurgeon and can't put electrodes into anyone's head. As you can see, I take goofy pictures; I actually have lots of these of me wearing EEG caps because I think they're cool. This is the setup I used, a BrainVision cap from Brain Products, and what I was looking for were event-related responses to specific stimuli that I was showing. On the right-hand side of the screen you can see a family of brain-wave patterns; these are event-related potentials, and they occur in response to different stimuli. ERN is error-related negativity: if you make a mistake, your brain actually generates it so that you know you made the mistake. There's one for spelling errors, there's one for grammatical errors, and the particular one I'm interested in is called the P300, a positive peak 300 milliseconds after the stimulus is shown to you.

The best way to explain this is to describe the experimental paradigm a lot of people use to test it, called the guilty knowledge test. The P300 is called the oddball response because it occurs in response to things that are different from the things around them. The way this usually goes in the experimental literature is that a subject comes into the room, there are six pictures, usually of jewelry or something, and the subject is asked to "steal" one of them: put it in a drawer, put it in a pocket, put it somewhere else. Then they sit you down with an EEG cap on your head, start showing you pictures of the things you could have taken, and record your neural signals to see if they can figure out which one elicits the "this is the one you took" response. And lo and behold, it was the watch; you have been caught. Obviously it's not quite that drastic, you have to do this over and over again, but the general idea is that you have a set of stimuli, you're hoping that one of that set is the target, and you can elicit this response and then actually use that information.
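To make the oddball idea concrete, here is a toy sketch of the kind of epoch-averaging analysis involved. This is an illustration, not the pipeline from the talk: the sampling rate, the 250-450 ms window, and the simple window-mean score are all assumptions, and the data is simulated noise with an artificial P300-like bump added to one stimulus.

```python
import numpy as np

def p300_score(epochs, fs=250):
    """Average trials per stimulus and score the P300 window.

    epochs: dict mapping stimulus id -> array of shape (n_trials, n_samples),
    each trial time-locked to stimulus onset. fs is the sampling rate in Hz.
    Returns the stimulus whose averaged response has the largest mean
    amplitude roughly 300 ms after onset (a crude P300 proxy).
    """
    # P300: positive deflection roughly 250-450 ms post-stimulus
    lo, hi = int(0.25 * fs), int(0.45 * fs)
    scores = {stim: trials.mean(axis=0)[lo:hi].mean()
              for stim, trials in epochs.items()}
    return max(scores, key=scores.get)

# Toy demo: simulate ten candidate stimuli; stimulus 7 is the "target"
# and carries an extra positive bump in the P300 window.
rng = np.random.default_rng(0)
fs, n_samples = 250, 200
epochs = {d: rng.normal(0.0, 1.0, (30, n_samples)) for d in range(10)}
bump = np.zeros(n_samples)
bump[int(0.25 * fs):int(0.45 * fs)] = 2.0  # simulated oddball response
epochs[7] += bump
print(p300_score(epochs, fs))
```

Real pipelines add bandpass filtering, artifact rejection, and proper classifiers, but the core signal is exactly this: averaging across repeated trials shrinks the noise, and the target stimulus stands out by its larger deflection around 300 ms.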
So what I ended up doing was a single-digit guessing game. I did this because I wanted to go back to basics from the literature. If you look at the prior literature, there are a couple of papers specifically about elicitation of private information, and they tend to use either overt stimuli, where you consciously see what you're looking at, or subliminal stimuli, which are technically unconscious, though most people can actually see what they are, because monitors don't refresh fast enough, et cetera. And they all relied on experimental training data: you come in, the experimenters show you a series of stimuli, they know which one you're going to respond to, and they can use that set to match against the test data, so they know what they're looking at. I was working with completely untrained data: you come into my lab, I put the cap on you, I start showing you stimuli, and then I try to figure out the number.

So this is a picture of me, again, wearing a lovely cap in the lab. Basically, I had subjects pick a number, and then I told them they were going to stare at a dot on the screen while the digits flashed around it; you can see the timeline of the stimuli on the screen. The only thing I told them after they selected the number was that they would have to enter it again at the end of the experiment. I didn't actually tell them they were supposed to think about it, or what they were supposed to do with that number; I just said, pick a number, and at the end you're going to tell me what that number is again.

I ended up with three kinds of results: the overall effectiveness in identifying the subject's chosen digit, the effect of attention on identifying the digit, and determining current versus future digit information.

For the first one: for all but one of my subjects, the computer correctly calculated the digit they were thinking of two to three times out of the ten sessions we ran. This is an example of one of those sessions; the computer got it right three times out of ten. You may be thinking, well, twenty to thirty percent when chance is ten percent, that's not great. However, this is with zero training data and fairly simple signal processing techniques, and if you look at the rest of the literature, it's not much worse than results that used training data sets. The one paper that did use untrained data said it was, quote, "five to ten times harder" to calculate with untrained data versus trained data. So it's okay: you can actually do this, and if you show enough stimuli over and over again, you can increase your confidence; you just have to have more time and more data.
The effect of attention: in my experiments, subjects literally sat and stared at a dot on a screen for five minutes at a time, and it turns out people get really tired and start falling asleep. So in counterbalanced sessions I had subjects press the space bar when they saw their digit, which meant I had some sessions where they were paying attention and others where they were more passive. Percentage-wise, the correct digit was calculated more often in the space-bar rounds, when they were paying attention, than in the non-space-bar rounds, when they were passive. This holds up in the literature: in another experiment, where people counted the number of times a map region where they live appeared, the attentive condition also produced higher guess accuracy.
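The talk reports the attention effect only qualitatively, so the counts below are invented purely to illustrate how one might test whether attentive (space-bar) sessions significantly beat passive ones, using a one-sided two-proportion z-test.

```python
from math import sqrt, erf

def two_prop_z(hits1, n1, hits2, n2):
    """One-sided two-proportion z-test for rate1 > rate2.

    Uses the pooled normal approximation; returns (z, p_value).
    Fine for moderate sample sizes."""
    p1, p2 = hits1 / n1, hits2 / n2
    pooled = (hits1 + hits2) / (n1 + n2)
    se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    p_value = 0.5 * (1 - erf(z / sqrt(2)))  # upper-tail probability
    return z, p_value

# Hypothetical counts: 15 correct out of 50 attentive sessions (30%)
# versus 8 correct out of 50 passive sessions (16%).
z, p = two_prop_z(15, 50, 8, 50)
print(f"z = {z:.2f}, one-sided p = {p:.3f}")
```

With counts like these the difference sits right around the conventional significance threshold, which is why counterbalancing and collecting many sessions matters.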
The third result, which I think is super cool, is determining current versus future intent. As I said before, I didn't actually tell subjects how they were supposed to maintain the digit in their head. And it turns out that for some subjects, the number they were going to pick in the following round was calculated almost as many times as the number for the current round. But this was not consistent. As before, accuracy was 20 to 30 percent across all subjects, but there were subjects for whom you got only about 20 percent right with no future guesses, and subjects for whom you got the future digit correct more often than the current digit. This is super cool. It may have something to do with how they were thinking about the experiment; maybe they were just thinking about the future digits because they wanted to be done and stop staring at a screen. If anyone wants to do a PhD dissertation, I have a great research topic for you and I can tell you which lab to talk to, so let me know.

Okay, so that's all fine and dandy: we can extract information. It may not be 100 percent, and it probably will never get to 100 percent, but we have that technology. So we can then ask the question: what do consumers think about neural privacy? If you lived in Gotham City and Edward Nygma's device came
out, what would you actually think about the fact that information was being put into your head and then taken out? What I'm talking about in this scenario is what is being protected. I am interested in the quantifiable information that is derived from the combination of the electrical signals from the brain and the relevant environmental stimuli: what you were looking at while we looked at your signals. The original raw neural signals without context are much less informative. There are some studies looking at detecting things like Parkinson's and Alzheimer's from raw traces of neural signals, but for the actual information we're discussing, you need to know the context in which it was generated.

So, on definitions of neural privacy. This is not the first time people have thought about privacy. For the law nerds in the audience: if you go back to 1890, there is the very famous "right to privacy" article by Warren and Brandeis. They wrote it in response to this crazy new technology called photography and how it was going to invade people's lives and put things down in permanence in all these papers, and they basically said, we need to declare a right to privacy now. And here in 2019 we're still having that conversation about privacy. What I'm saying is that we also need to extrapolate it to emerging technology.

So let's talk about neural privacy specifically. There were four issues I considered in defining neural privacy. First, is privacy a right or an interest? In legal terms, a right is something where, if you're harmed, you can actually get some sort of compensation or redress. An interest is just "I'd really like this not to happen to me, but if it does, there's nothing I can do about it." So can we come up with a legal structure whereby, if something happens with your neural signals, you can do something about it? Second, do we own our own thoughts? I love this question, because everyone has different ideas about what our thoughts are, how they make us a person, whether our thoughts are the person or are just inside us. But what happens when they are extrapolated? If you are playing a video game with your brain, and they start showing you pictures of different coffee logos and determine that you like Starbucks, do they now own the fact that you like Starbucks? They're obviously going to monetize it and send you targeted ads for Starbucks, but who owns that information? That's a great question to ask. Third, what is the relationship we have with those who elicit information neurally? There are a lot of questions about the relationships we have with data aggregators and social media. Is it a fiduciary relationship? A parasitic one? A symbiotic one? There's a great philosophical discussion I get into in a moment. And fourth, the importance of trust: do you actually trust that when you hand over your neural information, they're going to do what they say they're going to do with it? That's something we're going to talk about
from the neuroethics study. So, I took out a bunch of slides because y'all don't need to sit here and listen to my dissertation, uh, chapter about philosophy. But what it boils down to is we should all have an interest in protecting our neural privacy, but we do need additional legal frameworks to make it a right. So, congresspeople, I heard there were congressional staffers in here, take note. Um, defining and ascribing ownership
is necessary to provide value to what is being elicited. So, this is the case of, yeah, maybe eliciting that you are a fan of Starbucks over Pete's coffee may mean more than you turned right in a video game. So, how can we actually ascribe a value to that information if we do want to come up with some sort of economy where you are actually
allowing these thoughts to be elicited? Users should be able to trust that the information taken from elicited neural signals by a company will be used and interpreted properly, making the relationship between a user and a company an intimate one. So, um, the concept of int- intimacy and privacy is talked about a lot by Julie Ennis. Um, she
actually has a great book about this if you like philosophy or just like reading. Um, and she talks about an intimate relationship where you actually have an understanding. And so that, I like that framework for talking about this. And so, to test out some of these questions, I actually did a neuroethics survey. And so, I put this out online. Some of you may have seen this on Twitter. Um, but I got 77 respondents in about 24 days at
the beginning of the year. In it, I had 4 questions; I'll go over 3 here and the last one in the latter half of the talk. I basically asked: is there a difference in perceived privacy violation between a person intercepting BCI information versus a phone or an app, something that is not a person? What are
the differences in trust and willingness to share neural information with a range of entities? And is neural information more important than other data that's already available about us, things like your Fitbit or your online shopping history? One of the things I did in this survey was ask about mobility status. The reason is that someone who uses a wheelchair or a
cane, or may not be able to move about like someone else in the world, may have a different relationship with privacy and trust, in that they have to have a home aide come in and help them use the bathroom, or they can't reach the top shelf so they're always asking someone for it. I wanted to see if we could suss out that relationship. So, question 1: who or what is taking your information? So, the
scenario is you're sitting on a bus, you're using a BCI to control your phone, and some malicious hacker sitting behind you is able to intercept those signals. Meanwhile, in an alternate timeline, there's an app on your phone. I was ambiguous as to whether you installed it or knew what it was doing at all; I
just said there was an app on the phone doing the same thing. And I asked, in a stair-step set of questions, what your level of privacy violation is in each scenario: if they just have the content; if they're able to take a video of you typing it; if they have access to the current brain activity
while you type it out; if they have access to what you're planning to type out, getting back to that experimental finding about future intent; and then if they have access to the emotional content of that message. Two takeaways from this one. First: personal procurement of neural planning information, the person sitting behind you on the bus,
so the future intent, is a statistically significant privacy violation compared to the app. That was the only comparison I found to be statistically significant. It's kind of strange: you don't want a person to know what you're doing, but it also gets at how we tend to be okay with giving away information on our phones, even though the phones may have far worse implications. Going back to the FaceApp
thing of aging everyone: everyone was totally okay with putting their faces on there until everyone went, wait, are they Russian? Sort of thinking about that. I also found that across all five categories, mobility status did not statistically impact perceptions of neural privacy. I have more analysis, and it gets a little more nuanced when you look at each individual scenario, but
overall it didn't have an impact. So that was interesting. Now the next question: if you're using a BCI and it has the ability to find out what foods you like, what your physical and mental state is, who you're attracted to, or maybe your political views, are you willing to, and do you trust, giving this information to six different entities? I
started with a family member, a physician, a university researcher, a government entity, a non-profit, and a for-profit. Here's a lovely chart from R, for those of you who are familiar with R. You can basically see that with family members, medical professionals, and university researchers, you're okay: trusting and willing, you're willing to go there. But you get to government, non-profit, and for-profit and it's like: no, no, no.
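For anyone who wants to poke at survey numbers like these themselves: comparisons of Likert-style ratings between two groups, like the person-versus-app privacy-violation finding a moment ago, are commonly done with a rank-based test such as Mann-Whitney U, since ordinal ratings shouldn't be assumed normal. The ratings below are invented for illustration (they are not the survey's actual data), and this sketch uses the normal approximation without tie correction:

```python
import math

def mann_whitney_u(a, b):
    """Rank-based comparison of two independent samples (e.g. Likert ratings).
    Returns (U, z) via the normal approximation, with no tie correction."""
    u = sum((x > y) + 0.5 * (x == y) for x in a for y in b)
    n1, n2 = len(a), len(b)
    mean = n1 * n2 / 2.0
    sd = math.sqrt(n1 * n2 * (n1 + n2 + 1) / 12.0)
    return u, (u - mean) / sd

# Invented 1-5 ratings of perceived privacy violation for the same scenario,
# split by who intercepts the neural signal (NOT the survey's real data).
person = [5, 5, 4, 5, 4, 5, 5, 4, 5, 5]  # a human sitting behind you
app    = [3, 4, 3, 2, 4, 3, 3, 2, 4, 3]  # an app on the phone

u, z = mann_whitney_u(person, app)
print(u, round(z, 2))  # |z| > 1.96 suggests significance at the 0.05 level
```

With real survey data you would want a proper implementation such as `scipy.stats.mannwhitneyu`, which handles ties and exact p-values.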
Untrustworthy; not going there. And it's interesting, because the thing I left out of this is the feedback loop: why were you giving it? Maybe if you wanted to donate something to a non-profit, like donating your brain signals to an EEG repository, you'd be a little more
willing to do that. Or if you know that with a medical professional you have HIPAA protecting you, or that a university researcher has to go through an approval process with a review board, then you might increase your trust. So there are a lot of variables that need to be looked at here. But I really like this result, because I can go to a for-profit company and say: look, people really don't trust and aren't willing to give you this information, you
should do something about that. And finally, what's more important? I asked: is your neural information more, equally, or less important than a Fitbit or similar exercise tracker, the record of your personal medical history at your doctor's office, genetic information like 23andMe, your online shopping history, your monthly credit card
statement, and a journal or a diary? These are the results here. What's really interesting is there are definitely people who think their neural information is less important than things like their online shopping history. I would really love to meet these people to find out what they're buying online. But you can kind of see that things that have more bodily salience, the
medical records and information, the journal or diary (directly projecting your thoughts and feelings onto a page), are about equivalent, but you get to the exercise tracker and credit card and it's like, yeah, that's a level of abstraction away from what you're thinking about. So, cool result. Okay. So, what are the
potential policy and regulatory implications? People obviously have thoughts and feelings about this, and we've shown that it is possible. Is there anything we can actually do about it? This gets back to the Batman Forever analogy. If Harvey Dent hadn't turned into Two-Face, could there have been a law in Gotham City that would have allowed them to go off and prosecute him? Or, at the
beginning of the movie, when Edward Nygma was proposing this, Bruce Wayne and Nygma's boss could have said: according to the FDA, or whatever government agency, you can't do this because of XYZ regulations. So, let's talk about that. There are some existing biometric precedents, where they're either protecting you or profiting off you. The 2008 Genetic Information Nondiscrimination Act was
passed by Congress, and it allowed people to seek out genetic sequencing and then not be discriminated against for it. The way they called this out in the bill is they specifically talked about sickle cell anemia: that is a particular affliction that only happens to a certain
part of the population, and they should not be discriminated against for going out and getting treatment for it. So that's all fine and dandy. It also ties into the Affordable Care Act, where you can't technically be discriminated against for that genetic information, but that's because of the ACA's pre-existing condition protections. So if
those go away and you have a pre-existing condition, you can technically be discriminated against. So, go advocate for covering pre-existing conditions. The life insurance one is interesting. Starting this year, I believe it's John Hancock that will only provide life insurance if you do active tracking. So you have to wear a tracker or like a Fitbit or
something, or you have to fill out a survey. They will no longer just let you sign up with a one-time questionnaire; they actually have to be monitoring you at all times to make sure they put you in the right life insurance bucket. In the state of New York, companies are also allowed to follow your social media feeds to figure out how they're going to set your life insurance rates. I'm annoyed because I
can't find the notes in this particular setup, but the Wall Street Journal actually published things that you should and shouldn't do for your insurance rates in New York. One of them was: you should frequent gyms, but leave your phone at home when you go to the bar. Or: do activities like running, but if you go
skydiving, you know, that's a little more risky. So they're literally telling you how you should and shouldn't act, because otherwise your life insurance is gonna change. There you go. Yes, boo. Very boo. If you live in New York, talk to your state legislators. So the final one that's really interesting is a state case called Rosenbach v. Six Flags. This is an Illinois Supreme Court case. The
state of Illinois (if you live in Illinois, good on you) actually has one of the strongest biometric protection laws in the country. Unfortunately, there's not much competition, because the only other states I know of are Washington and Texas. But the statute basically says you are
required to get consent to take any form of biometric information, and that includes fingerprints. You also have to have written documentation of what you're doing with the information and how long you keep it. In this case, a mom signed her son up for a season pass at Six Flags; she said, go fill out the
paperwork when you get there and bring back the pass. And the kid comes back, and she's like, where's the pass? And the kid says: oh, they just took my fingerprint. Because it turns out, at that Six Flags in Illinois, they didn't have passes; it was a biometric to get into the park. So the mother said: not only did I not consent to having my son give his fingerprint, I don't know what they're gonna do with it. They went through the courts, and the state supreme court said that it
was a harm that Six Flags had violated the statute. Even though they hadn't done anything with the information they had taken, there was a harm, and the mother was allowed to seek a right of action, i.e., she could sue them. This is different, because most of the time when someone takes something, you have to prove that they did something with it. So this is a great example of how you can
actually write laws that allow you to get compensation for something even though nothing terrible may have happened with it yet. This gets to the last question that I asked: do people actually have feelings about who should be involved in development, in the sale of these devices, and in reparations for malicious use
or elicitation? I asked this as a grid chart of who should be in charge at which stage. So: for a user, an industry researcher, an independent regulatory organization, a legislator, or a device manufacturer, how should they be involved in BCI development compared to current involvement (a subjective measure of how much you know about what's currently going
on), in development oversight, in actual implementation and use, and then in reparations? The two main takeaways here are that independent regulatory organizations, legislators, and device manufacturers should be more involved going from development to reparations for misuse. So as you go down this development chain, they should be more involved in regulating, in actually saying where you should be
anonymizing, or where things should be happening. I also found that users should be the least involved in reparations for misuse, and device manufacturers should be the most involved. It shouldn't be the onus of the user, when something is taken from them, to go out and figure out how they can get
reparations; the device manufacturer should really be taking charge of that, either by just paying out money, or maybe by protecting the information to begin with, making sure no one is eliciting information without their knowing about it, and that they're not the ones doing it. So, based on all that, there are some policy solutions, and there's a part in here where all y'all can participate, so get ready to take notes. One of the biggest things is increased
involvement by legislators with reparations for elicitation or misuse. It would be great if we could get federal- or state-level rights to neural privacy, or broader genetic and biometric data privacy legislation. Also possibly providing reparations by statute: either monetary, or allowing for a private right of action, i.e., you yourself can
sue someone, you don't have to wait for a class action. And also empowering regulatory agencies like the FTC to actually have more money and more people to look into this, because (thank you, FTC, I know you're doing great things) there's not a lot of you. Another great one is involving independent regulatory organizations, things like IEEE and ACM. I actually consider you, the hacker
DefCon audience, an independent regulatory organization, because now that you know about this, maybe you start looking at source code, maybe you start taking apart the terms of service and actually looking at what's going on, so that you can report, here at DefCon or otherwise, what's actually happening with the
systems that we're using. It would be great to have accountability for device manufacturers, because let's be honest, there's not a lot of that right now. And then overall: how do we actually portray to consumers the risk of using a device? How do you let someone know that there's a 75% risk that information could be elicited from them by using the device? Or how do they actually understand that 99% of
the time they use it, they're gonna be protected from hackers coming and eliciting information? How do we have that conversation? This gets to overall tech literacy, in the United States and beyond. Okay, so, homework. Here are my asks of all of you. For those of you who are familiar with the This Is Fine dog, we are about on
panel 3, but it's not too late, we can start putting out the fire, and there's a couple of different ways we can do that. I actually have a stuffed This Is Fine dog at home; it's very cute, a little plushy. It reminds me that things are terrible but they can get better. So, to the developers in the room: just because you can
doesn't mean you should. And I say this (I see some yeas in the front of the audience here, okay) because, as someone who loves new technologies and loves hacking and things like that, you really have to start thinking about the things that you're doing. If you're trying to create a game controller for someone who's paralyzed, that's great, because maybe the brain signal is the only thing that's left. If you
are literally shooting electricity through your skull with a 9-volt battery or whatever, or you find a friend and a drill bit and you're like, yeah, we're gonna drill a hole in my skull and stick this wire in: no. Please, for the love of god, no. Ask yourself what problem it is you're trying to solve. Are there other modalities you can use to obtain that information, and what
is the least amount of information you need to complete a particular task? Maybe you don't need complete coverage over the entire cortex. Maybe you just need motor cortex. Maybe you just need parieto-occipital electrodes for the P300. Try to gather the least amount of information possible, to make yourself the least liable. And then
finally, do as much processing as possible locally, on the device. The best example I have of this is the BCI anonymizer, from a paper called App Stores for the Brain by Tamara Bonaci, Ryan Calo, and Howard Chizeck. Tamara Bonaci started this research in the lab that I was in, Ryan Calo was on my dissertation committee, and Howard Chizeck was my PhD advisor. They
basically said: look, is there a way that you can still get information out, but you're only releasing the information that's necessary for the device itself? So if you're controlling a helicopter with your mind, it doesn't need the entire raw data stream of your neural information; it just needs the commands: up, down, right,
left, that type of deal. To the privacy-conscious people in the room: I'm really sad there's not a lot of tinfoil hats in here, I was expecting a little bit more. Right off the bat, I can say: for now, don't use these kinds of devices. And it's easy for me to say that, because they don't have market saturation; it's not like you need them to do your job. This is gonna be a lot harder if for some reason these devices start getting
mandated for use. Say you have to pass some sort of lie detector test using a BCI, or you have to wear one for your job because they're monitoring your productivity, or that's just how you use the computer. As we get further and further down this technological path, how can you opt out, and how does it disadvantage you when you do? If you
are worried about people eliciting information without your knowledge, you may feel better with a slower screen refresh rate to prevent that subliminal elicitation: go pull that CRT monitor out of the basement or the garage, and you should be fine. Because then at least you'll know when someone's trying to get information out of you. And here's my ask, okay: contactingcongress.org. Everyone write this down or go to
it right now. You can look up your federal legislators, your congresspeople, your House representatives and your senators. Step one: you can either call or email their DC offices and ask them point blank, what is your position on data privacy? What is your position on regulating emerging technologies? They will probably send you
back a very nice form letter, but having been a fellow and a staffer in Congress, I can tell you they will categorize it, and if they get enough people calling about something, they know that constituents are interested. If any of you were at the hacker talk yesterday with Reps. Langevin and Lieu, they said the same thing: you can
totally be involved in this process. It's the August recess, which means all of your congresspeople are back home in their districts. Go to the town halls, ask them what their data privacy positions are. You can even call and make an appointment, and you'll talk to either a staffer or the representative or senator themselves, depending on their schedule, and have a conversation. Let them know that you
are an expert in security, let them know you're an expert in privacy, let them know what expertise you have, and then maybe when a bill comes up they'll say, oh, we should find out more information about this, and they can use you as a resource. And just generally be involved in the democratic process. The offices of the members actually belong to you, not to them, so you can go into them if you go to DC; you should feel totally free to reach out to them in
your state as well. And this goes for state legislatures and city councils. Anyone from the state of Washington: we just had a big showdown over data privacy in the last legislative session. They're probably gonna bring that back, so get ready to call your state legislators in 2020. And most importantly, and I know
that there are studies saying this is gonna take years off your life, but really read the terms of service to find out what's happening to your information, particularly your biometric information. It's probably too late for most of the social media sites, but if you're gonna be putting that cap on your head, you should really know what's happening to your data. Okay, three-letter agencies in the room: I know
you're here. I know you offered money to my advisor to fund this project, and we turned you down. I don't know if I was supposed to say that. Crap. Never mind, too late now. So, I'm guessing there are a couple of people in this room who are probably saying: why the hell would you do this research? You're enabling the further use of this technology, and you're gonna let the three-letter agencies come and steal all our information. My response to that is: if I'm not the one telling you this is happening now, before it becomes an actual problem, would you rather find out later, when it is a problem and they've already been taking information from people? So, now that you, the hacker community at large, are aware of
this, you can start looking for it. You can start asking those questions, and you can start being skeptical of these kinds of devices coming onto the market. I'm trying not to make you all too paranoid, but let's be honest: if you're thinking about using this kind of technique for interrogation, you have to come to terms with some serious ethical and legal questions. Yes, there is actually a neurolaw group. It's
part of the MacArthur Foundation Research Network on Law and Neuroscience, out of Vanderbilt. You can sign up for their distribution list and they will send you, on a semi-regular basis, papers and conferences related to neurolaw. It's actually quite interesting. You can look at questions of freedom of speech and expression, reasonable expectations of
privacy, and self-incrimination. Are you really gonna find a judge who, for whatever reason, is going to allow that if you don't have some sort of warrant? I don't know. I'd also like to point out that all of these results are from compliant and willing participants. These are mostly graduate students who volunteer to come sit in a room, and you give them money or a gift card afterwards, and they're
perfectly happy to stare at a dot on a screen. I don't think someone in custody is gonna be that compliant, so I don't know if the results are gonna be that good. This technology is also still in its infancy, and you really shouldn't think of technology as the solution to a problem. For those of you who are familiar with the fMRI literature, there was a poster where someone took a dead salmon from
Pike Place Market, put it in an fMRI scanner, and got statistically significant results when they showed it images. So think about that. If you can get statistically significant results from a dead fish, are you really gonna be confident that
the information you're taking from someone who doesn't want to be giving it is really what you want to be getting out of them? So, think about that. In summary: this is one future. Going back to Black Mirror, this is Playtest. This is someone who volunteers to come and play a video game (again, spoilers, I'm sorry), but they figure out that he's afraid of spiders. They figure
out who his childhood bully is, and they really start using this information against him, and (super spoilers) he dies. Sorry. It's a very dystopian future where everything we think now gets used against us. And I'd like to posit that we can create a different future. So, shout-out to my Star Trek fans in the
audience: Geordi La Forge, yay! You know, his VISOR is another BCI: it is taking information in from the outside world and putting it into his brain. And I don't know if there was a plotline that I missed, but I don't remember them being like, oh, we found out that Geordi likes Starbucks because we showed him a bunch of logos. So, you
know, it just seems less dystopian; it's a much happier future. Start thinking about the ways that we can do this for good. And if there's one thing I need you to remember from this talk, it's that there is a difference between telepathy and targeted elicitation of information. Targeted elicitation. We're not gonna come and
use a brain ray and take all of your thoughts; most of the time it's going to be very specific, and the stimuli are going to be very particular. So, on that note: I took out my funding-screen slide, but you can come talk to me about funding. Like I said, I have lots of pictures of me wearing nerdy EEG caps. You can find me on Twitter, and thanks to everyone who made this talk possible, and thank you so much for coming so
early on a Saturday morning in Vegas. I'll be around if you have any questions. So, thank you!
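A closing technical aside on the dead-salmon result mentioned earlier: the salmon "lit up" because an fMRI analysis runs one statistical test per voxel, and thousands of tests at p < 0.05 will produce false positives on pure noise unless you correct for multiple comparisons, which was exactly the point of that poster. A toy simulation (made-up voxel count, pure noise, no real imaging data) shows the effect:

```python
import random

random.seed(42)  # fixed seed so the illustration is repeatable

# Simulate p-values for 10,000 "voxels" of pure noise: the null hypothesis
# is true everywhere, so the p-values are uniform on [0, 1].
n_voxels = 10_000
alpha = 0.05
p_values = [random.random() for _ in range(n_voxels)]

uncorrected = sum(p < alpha for p in p_values)            # naive threshold
bonferroni = sum(p < alpha / n_voxels for p in p_values)  # corrected threshold

print(uncorrected)  # roughly alpha * n_voxels, i.e. about 500 false positives
print(bonferroni)   # almost always 0 after Bonferroni correction
```

Real fMRI pipelines use cluster-based or false-discovery-rate corrections rather than plain Bonferroni, but the lesson is the same: run enough tests on noise and you will always find something "significant."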