
Fundamentals of EEG based Brain-Computer Interfaces

Formal Metadata

Title
Fundamentals of EEG based Brain-Computer Interfaces
Number of Parts
254
License
CC Attribution 4.0 International:
You are free to use, adapt and copy, distribute and transmit the work or content in adapted or unchanged form for any legal purpose as long as the work is attributed to the author in the manner specified by the author or licensor.

Content Metadata

Abstract
The availability of consumer-grade EEG headsets and open EEG hardware platforms makes it easier for everyone to develop a Brain-Computer Interface (BCI). The talk will explain the basics of Electroencephalography (EEG) and how information can be extracted from the Electroencephalogram, which is basically a noise signal. This covers Event Related Potentials (ERP) and their application in common BCI paradigms and biometric schemes. Standard approaches for the experimental setup and for EEG signal processing will be discussed. As an example, an EEG-based authentication system will be presented that uses the P300 component of the ERP and images as a password. This talk will give an overview of the issues of performance, usability, privacy and security in BCIs, and how far the technology is from reading the mind or connecting us to the matrix.
Transcript: English (auto-generated)
Okay, so maybe you have always wondered how you could do Jedi mind tricks with a computer, and that's exactly why we are here now. Gnudi is going to tell you about the fundamentals of EEG-based brain-computer interfaces. He has always been fascinated with the human brain, and he is a researcher in that field. I give the stage to you.

Hello. The reason why I'm giving this talk is that recently there has been development in electroencephalography.
EEG was developed about 100 years ago and has been used in research and in medicine, but we now have consumer-grade EEG headsets as well as some open hardware projects aiming to develop EEG headsets. Here I have a picture of the Emotiv EPOC, which I think was the first consumer-grade EEG headset, and I think the aim of the OpenBCI project is to get cheap research-grade hardware. I'm not going to explain too much about the devices; I want to talk a bit about how we can use EEG readings to build a brain-computer interface.
A brain-computer interface typically consists of a user performing a task. The task can be thinking, for example, to produce some input; if the interface is used to drive an electric wheelchair, it could be the thought of going forward. The EEG signal then has to be acquired; I'm not going to talk about that. I'm focusing on the pre-processing of the data and the feature extraction. Classification can generally be done with all kinds of classifiers; support vector machines are popular. But good feature extraction is essential: we cannot really use machine learning approaches that learn the features, because we typically do not have very much training data. Doing EEG experiments with human subjects takes a lot of time, and the data might contain private information, so often the data sets are not made public. After classification we have an output translation; it can be a virtual keyboard or something similar. Optionally there can be feedback, which allows the user to train the brain-computer interface.
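The pipeline just described, from pre-processing through feature extraction to classification, can be sketched in a few lines. This is a toy illustration, not the setup from the talk: the synthetic epochs and the mean/variance features are placeholder assumptions.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def extract_features(epochs):
    # Placeholder feature extraction: mean and variance per channel.
    # Real BCIs use hand-crafted features such as band power or
    # ERP amplitudes, since there is too little data to learn features.
    return np.hstack([epochs.mean(axis=2), epochs.var(axis=2)])

rng = np.random.default_rng(0)
# 100 synthetic trials, 14 channels (like the Emotiv EPOC), 128 samples.
epochs = rng.normal(size=(100, 14, 128))
labels = rng.integers(0, 2, size=100)
epochs[labels == 1] += 0.5          # give class 1 a detectable offset

X = extract_features(epochs)        # shape (100, 28)
clf = make_pipeline(StandardScaler(), SVC(kernel="linear"))
clf.fit(X[:80], labels[:80])        # small training set, typical for EEG
print("held-out accuracy:", clf.score(X[80:], labels[80:]))
```

The linear SVM stands in for the "all kinds of classifiers" point; any classifier could be dropped into the same pipeline.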
Here I'm showing you the timeline of an EEG signal. This was a resting-state experiment: the subject was just resting, doing nothing, with eyes open. You can't see very much in the timeline; it looks quite random, random oscillations of quite low amplitude. Generally the signals are in the microvolt range. So one of the first steps you can do is a time-frequency analysis. What we have here is 14 seconds of EEG on 14 electrodes, the 14 channels that the Emotiv EPOC headset has. Here we also have 14 seconds, and this is basically computing spectra for different time slots, so you see the development of the spectrum over time. What you see here is one of the things that makes this difficult: most of the signal power is in the range below 5 Hertz.
The different frequency ranges are typically associated with, for example, certain states of mind, such as sleep states. Actually, it's quite easy, even from the timeline, to see which sleep state someone is in. Also important is the alpha band. That is something that should be in the plot I showed before as well, but we couldn't really see it in the timeline. If I had instructed the subject to keep the eyes closed rather than open, we would have seen oscillations in that range on the electrodes above the visual cortex, because it would go to an idle state, as if nothing is seen, and we would have more power there.
If we are doing EEG experiments, we typically look at changes, because we have this huge random noise signal where we basically have no idea what it means. We typically have experiments where we look at at least two different states: we define something as a baseline, and then we look at what changes. For example, we give the subject a resting-state task, and then the task is to think, say, the command to move the wheelchair. What we see here is again from the same EEG recording. What I did was take the two seconds before, which are not plotted here but look about the same, compute the average, and divide everything by it; we call that a baseline correction. Now it's easier to see changes in the other areas. Having a baseline is something that you normally do in EEG experiments. We also have some other problems besides the general background noise.
These are artifacts. Here is a similar timeline; the difference is that the subject was instructed to blink at intervals. You see that there are some huge peaks, especially in the lower and upper lines. According to the 10-20 system, the electrodes are numbered around the head, so at the top and at the bottom of the plot we basically have the electrodes on the forehead. What we see there are eye artifacts. Seeing artifacts in the EEG timeline is quite easy; the problem is getting rid of them when we don't want them. So we typically start by instructing subjects not to blink and not to move. The 50 Hertz power grid can also show up as an artifact in the signal; therefore we typically apply a filter at that frequency. There are different approaches to getting rid of artifacts. The simplest one is cutting the parts with artifacts out of the data, but this basically means we have to repeat the experiment several times. However, we are doing that anyway.
One approach for brain-computer interfaces is event-related potentials. Suppose we have an event, which could be the subject being shown a picture or any other stimulus, and we repeat it. If we then average over all the repetitions of showing this image, all the random noise cancels out, and what we are left with is the EEG component that actually depends on the processing of this image. That is what we call an event-related potential. This is just an example; typically it doesn't look that nice.
We typically count the peaks. Here we have three positive peaks, which we call P1, P2, and P3; the P3 is also called P300 because it occurs about 300 milliseconds after the stimulus. That is something we use for brain-computer interfaces because of the so-called oddball paradigm: the P300 only appears if something is relevant to your task and does not happen very often. Take the P300 speller, which is something like a virtual keyboard; unfortunately I don't have an animation. The columns and rows light up at some speed. If you want to type a letter, you focus on that letter, and when it lights up, that is a rare and relevant event to you, because you want to type it. So exactly in that case you will have the P300 in your ERP, and of course the system providing the stimuli records the timing and then knows which letter was lit up when the P300 appeared. This way you can type things. But you would need to stare at one letter for a while, because the stimulation has to be repeated several times.
I want to present a more specialized P300-based brain-computer interface, an authentication scheme. The idea is that we have 100 normal photos, which can be anything, and we select some of them as our password. This is an example; it is actually a set that we used for the experiments: five very different photos. Now I'm doing a small experiment with you: try to remember those photos, and try to spot them and count them in this video stream.

I'm not asking you to raise your arms, because I can't see you anyway. I guess some of you might have seen all five pictures; some might have counted fewer. The first time, the task is not really easy, but generally, when you counted something in this experiment, you will probably have had this P300 in your brain.
These are the results of the experiment. I hope you can read it; it might be a bit too dark. As we have 95 percent non-target images and only five percent, that is five, target images, we use the F1 score to evaluate the classifier. We did cross-validation on the data; that is where we have the best score. We also trained the classifier, which was a simple linear discriminant analysis, on the experiment done by one person, and then we tried a general classifier trained on other people's data. For that we used the best data sets we had, the ones with the highest score in the cross-validation. It still works. The interesting thing here is that the classification of the EEG data is possible without tuning the classifier to the user, making the system non-biometric.
This is the number of trials that we averaged. We actually showed 50 of those bursts that you have just seen, which takes about 20 minutes, and no one wants to use an authentication system where one login takes 20 minutes. So we looked at how many bursts we actually need, but it seems that up to 50 the score still increases, and there is a huge difference between data sets. The highest line is the top-rated subject, and the others are much lower. So it depends a lot on the subject, maybe also on how well the EEG headset fit and on how well they focused on the task. That was the biggest effect we found.
For an authentication system we want permanence: if we want to log in again after a few months, it should still work. So we had three sessions with some months in between, and we actually got better scores over time. We feared that they might degrade, but it seems there is a training effect, and the signal is permanent enough.

Here is the final score. It is plotted this way because, even though it is no real biometric system, we are measuring biosignals, and that can go wrong. Therefore we have false acceptance rates and false rejection rates, and we arrive at an equal error rate of about 10%, which is of course too bad, especially when you see that one authentication round takes about 20 minutes. The idea was that, since showing the images is very fast, we could have a very short login time, but it didn't really work well enough. There are three minutes left; are there questions?
Ah, so indeed we have a bit of time. There are two microphones and the Internet, and there's a question at that microphone.

Hello there. I was wondering, have you heard of this being used to detect terrorists? There was an experiment where they showed images of things that you're not supposed to know as a regular citizen. They would show all these innocent images, and then there would also be a blasting cap or the magazine of an AK-47, or stuff that you would only know if you went to a terrorist training camp, and they could do the exact same P300 thing. I thought that was interesting. And of course, if you simply read about training camps and terrorists, you would fail the test, which would be interesting.
I haven't heard about the application to terrorism, but there is a very similar application in criminal investigations: a lie detector based on the P300. It's basically the same; you show some pictures that only the guilty person knows. There are also some papers about how, if you are subjected to this P300-based guilty-knowledge test, you can avoid being detected.
Okay, another question from that microphone. Two small questions. One: which EEG headset was used in the image-based authentication, was it the OpenBCI? And also: is the P300 individual? Does it need a lot of calibration, or is it something you can just detect straight away?
The EEG headset used for the experiments was the Emotiv EPOC. And the P300 does seem to be somewhat individual; there are approaches doing biometrics by looking at the P300. But our approach was to have a general P300 detector that would work on anyone. That was the difference between the IC, the individual classifier, and the GC, the general classifier. You can see the score difference here, the middle and the right one; the left one was the cross-validation.
Okay, is there a question from the internet? I'm looking... no question from the internet. Unfortunately the time is running out, so I have to ask you to approach Gnudi directly; he is here at the Congress and can be contacted in some way, I guess. Okay, then I would say thanks again for his talk.