
Can You Trust Autonomous Vehicles: Contactless Attacks against Sensors of Self-Driving Vehicles


Formal Metadata

Title
Can You Trust Autonomous Vehicles: Contactless Attacks against Sensors of Self-Driving Vehicles
Number of Parts
93
License
CC Attribution 3.0 Unported:
You are free to use, adapt and copy, distribute and transmit the work or content in adapted or unchanged form for any legal purpose as long as the work is attributed to the author in the manner specified by the author or licensor.

Content Metadata

Abstract
To improve road safety and driving experiences, autonomous vehicles have emerged recently, and they can sense their surroundings and navigate without human input. Although promising and providing safety features, the trustworthiness of these cars has to be examined before they can be widely adopted on the road. Unlike traditional network security, autonomous vehicles rely heavily on their ability to sense their surroundings to make driving decisions, which opens a new security risk. Thus, in this talk we examine the security of the sensors of autonomous vehicles and investigate the trustworthiness of the 'eyes' of the cars. We investigate sensors whose measurements are used to guide driving, i.e., millimeter-wave radars, ultrasonic sensors, and forward-looking cameras. In particular, we present contactless attacks on these sensors and show our results collected both in the lab and outdoors on a Tesla Model S automobile. We show that using off-the-shelf hardware, we are able to perform jamming and spoofing attacks that caused the Tesla's blindness and malfunction, all of which could potentially lead to crashes and greatly impair the safety of self-driving cars. To alleviate the issues, at the end of the talk we propose software and hardware countermeasures that will improve sensor resilience against these attacks.

Jianhao Liu is the director of ADLAB at Qihoo 360. He specializes in the security of the Internet of Things and the Internet of Vehicles. He has reported a security vulnerability of the Tesla Model S, led security research on the remote control of a BYD car, and participated in the drafting of security standards in the automobile industry. As a security expert employed by various information security organizations and companies, he is well experienced in security services, security evaluation, and penetration testing.

Chen Yan is a PhD student at Zhejiang University in the Ubiquitous System Security Laboratory. His research focuses on the security and privacy of wireless communication and embedded systems, including automobiles, analog sensors, and IoT devices.

Wenyuan Xu is a professor in the College of Electrical Engineering at Zhejiang University and an associate professor in the Department of Computer Science and Engineering at the University of South Carolina. She received her Ph.D. in Electrical and Computer Engineering from Rutgers University in 2007. Her research interests include wireless security, network security, and IoT security. She is among the first to discover vulnerabilities in tire pressure monitoring systems in modern automobiles and automatic meter reading systems. Dr. Xu received the NSF CAREER Award in 2009. She has served on the technical program committees for several IEEE/ACM conferences on wireless networking and security, and she is an associate editor of the EURASIP Journal on Information Security.
Transcript (English, auto-generated)
So up next we're gonna talk about some current stuff, but also think very far forward, because everybody hates driving, right? I'd much rather have my car drive me to work than have to drive myself. We want to make sure that that's nice and secure. So these guys are gonna talk about whether we can trust self-driving vehicles. Let's give them a big round of applause. Have a good time.

Thank you. I'm so excited to stand here. Good afternoon. Today I bring you the latest work on attacking self-driving vehicles. The title is "Can You Trust Autonomous Vehicles?" I would like to talk about our latest work on vehicle security. I'm Jianhao Liu from China, and I work for Qihoo 360 on the SkyGo team, which focuses on vehicle cyber security.

I'm Chen Yan from Zhejiang University, and Dr. Xu is my advisor. She is a professor at Zhejiang University and the University of South Carolina. I believe she is hiding somewhere in the audience because she wants us to do all the work.

Okay, in this talk we first introduce what autonomous vehicles are and the idea of car hacking through sensors, then present our attacks. At last, we discuss possible defenses. With the development of car hacking, ranging from conventional cars with telematics to autonomous cars, the car is increasingly interacting with its environment, which opens up a new attack surface. In this talk we show you our work on autonomous vehicles.

So what are autonomous vehicles? An autonomous vehicle can sense its surroundings and make driving decisions using machine-learning algorithms; basically, a car that can drive itself without a human doing anything. According to the international standard, autonomous driving can be divided into five levels. An example of level 1 is adaptive cruise control, where we must keep our hands on the steering wheel. Level 3 is conditional automation, where hands can be off the steering wheel, yet the driver still needs to take over from time to time. Level 5 is full automation: the car can handle all driving modes and drive itself without a human in it, so basically you can sleep in your car. Typically, Tesla is considered between level 2 and level 3, and eventually the Google car will be level 5.

This is how autonomous vehicles work. First, the car has to have sensors to monitor its surroundings, and more advanced cars will have V2X, where V2X stands for vehicle-to-everything.
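Since everything downstream runs on sensor input, the stakes can be made concrete with a toy sense-plan-act loop. This is a purely illustrative sketch, not any vehicle's real logic; the sensor names and distance thresholds are assumptions chosen for the example. It shows why corrupted sensor readings corrupt the driving decision itself:

```python
# Illustrative sense-plan-act loop (hypothetical sensor names,
# assumed thresholds): the decision is only as good as the readings.

def plan(readings: dict) -> str:
    """Pick a driving action from the closest reported obstacle (metres)."""
    nearest = min(readings.values())
    if nearest < 0.5:
        return "brake"       # obstacle very close: stop
    if nearest < 2.0:
        return "slow_down"   # obstacle nearby: be cautious
    return "cruise"          # nothing reported nearby: keep going

# Honest readings: an obstacle 0.3 m from the bumper.
normal = {"ultrasonic": 0.3, "radar": 25.0}
# Jammed ultrasonic sensor reporting "maximum distance" instead.
jammed = {"ultrasonic": 9.9, "radar": 25.0}

print(plan(normal))  # brake
print(plan(jammed))  # cruise -- the obstacle has vanished from the data
```

The same corrupted reading would flow unchanged into the HMI display and the path planner, which is exactly the attack surface discussed next.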
Then the sensor data can guide vehicle movement, planning and controlling the path. The driving plans are shown to the driver by the HMI, the human-machine interface, and all the driving decisions are executed by the car. This is how automatic driving works. Let me show a few automatic driving applications: autonomous lane keeping, autonomous lane change, autonomous overtaking, autonomous highway merge and exit, and autonomous interchange driving.

Autonomous vehicles have a rich set of sensors, which include the following. Ultrasonic sensors can detect obstacles nearby. Cameras can detect road scenes and lane lines, as well as the cars ahead and their speed. LiDAR creates a 3D map by scanning the environment to help plan driving decisions. Radar can detect cars from middle to long range, along with their speed and moving direction. Because of these sensors, the car can sense the environment, identify what kind of obstacles are nearby, and finally make driving decisions. Of course, automatic driving is controlled by electronics: to turn a regular car into a self-driving car, one adds electronic control units that drive the actuators directly. This way the car can send commands to control the brakes, electronic power steering and so on.

So how can we attack autonomous vehicles? The sensor data guides the travel route of the car, and the sensors serve as the basis for planning and controlling the car. Thus we set the scope of our attacks: attacking the sensors on autonomous cars. If we can modify the sensor data, the driving decisions will be made based on fake data. What is displayed on the HMI may be wrong and misleading, and the path planning may not be correct, which leads to wrong execution. In short, the reliability of the sensors affects the reliability of the automatic driving vehicle. The technology we have access to is a Tesla. Tesla has an advanced Autopilot system, which realizes autonomous driving at between level 2 and level 3.
Basically, Tesla has all the features of autonomous driving, though the Autopilot system still requires the driver to place his hands on the steering wheel. It has already changed people's driving habits. Unluckily, this habit change led to a recent incident caused by a sensor malfunction. Thus the reliability of the sensors is important. If Autopilot can fail under normal yet special cases, what will happen if there are intentional, malicious attacks, such as someone trying to cause a traffic accident?

There are three types of sensors in the Tesla. One is the millimeter-wave radar: a middle-range radar is mounted in the front of the Tesla. Then a front camera: a forward-looking camera is mounted on the windshield near the rear-view mirror. And the twelve ultrasonic sensors, which are located around the front and rear bumpers. In the videos we will show how we can blind the sensors of the car, which makes the Autopilot of the Tesla malfunction. Let me show you a few videos to give you the highlights of our work. The first is a proof that attacking the ultrasonic sensors makes the HMI malfunction. Now the jamming device is behind the car; the device is here. It is really too close now, but the HMI doesn't display any obstacle.
Now we turn off the device, and the HMI displays it again. So we can make the HMI mistaken. Thank you.
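These ultrasonic demos all abuse time-of-flight ranging, which the talk walks through in a moment. As a preview, here is a deliberately simplified model in Python; the speed of sound, adaptive threshold, and noise-floor numbers are illustrative assumptions, not real firmware behaviour. It shows how a raised noise floor turns a nearby obstacle into a "maximum distance" reading:

```python
SPEED_OF_SOUND = 343.0  # m/s in air at room temperature (assumed)

def distance_from_delay(delay_s: float) -> float:
    """Round-trip time of flight -> one-way distance."""
    return SPEED_OF_SOUND * delay_s / 2.0

def measure(echoes, noise_floor=0.1, margin=0.5, max_dist=2.0):
    """Report the first echo that clears an adaptive threshold.

    echoes: time-ordered (delay_seconds, amplitude) pairs.
    A jammer raises noise_floor, so no echo stands out and the
    sensor falls back to max_dist -- "nothing detected".
    """
    threshold = noise_floor + margin
    for delay, amplitude in echoes:
        if amplitude > threshold:
            return distance_from_delay(delay)
    return max_dist

echo = [(0.00175, 3.0)]                 # obstacle ~0.3 m away
print(round(measure(echo), 2))          # 0.3  (echo detected)
print(measure(echo, noise_floor=5.0))   # 2.0  (jammed: reads maximum)
```

The real sensors studied later in the talk behave in exactly these two ways: some report zero distance under jamming, others, like this model, report maximum distance and hide the obstacle.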
The next video is a ghost car. This is our attack setup against the car, and this is the Autopilot system. The car starts driving, but in front of the car there is no car at all.
When the car passes the device, the ghost car can force our car to stop: the display shows it is about to hit the ghost car, so the car stops. Thank you.

I guess I'll take over from here. The first type of attack targets ultrasonic sensors, and we have tested this attack on Tesla, Audi, Volkswagen and Ford cars. So what is an ultrasonic sensor? It is a sensor that measures distance, generally within 2 meters. It is used in parking scenarios like parking assistance, parking-space detection, self-parking, and also, on Tesla, a feature called Summon, which means you can park the car without even being inside it. In a parking scenario like this, there will generally be a display of distance, either acoustic or visual, so that we can know the sensor readings.

So how can we misuse ultrasonic sensors? Imagine someone dislikes the owner of a shop and wants the car to keep backing into the glass wall, so he does something to the sensor such that the car does not stop where it should. What will happen? Or, I believe most of you want to protect your parking spot; it is really annoying when someone keeps parking in it. Instead of putting up a sign, if you can do something to the sensor that makes the car stop in the middle of parking, that would be awesome.

Before going into how these misuses can be done, let me walk you through how an ultrasonic sensor works. An ultrasonic sensor emits ultrasound and receives echoes based on the piezoelectric effect; I believe this technology was inspired by bats. The sensor generates an ultrasonic pulse, which propagates, hits an obstacle, bounces back, and creates a received pulse. If we can measure the propagation time between the transmitted pulse and the received pulse, then, knowing the speed of sound in air, we can calculate the distance from a very simple formula.

There are three types of attacks on ultrasonic sensors. The first is the jamming attack, which generates ultrasonic noise that causes denial of service
out of the sensor. The spoofing attack crafts fake echo pulses so that it can alter the measured distance. The third is acoustic quieting, meaning the attack diminishes the original ultrasonic pulses so that it can hide obstacles. To validate these attacks, this is the equipment we used. First we need ultrasonic transducers that can emit ultrasound, and second we need signal sources that can generate excitation signals; in our case we used either an Arduino or a signal generator. To make things faster and cheaper we used off-the-shelf hardware, but you can totally design your own jammer.

The basic idea of the jamming attack is to inject ultrasonic noise at the resonance frequency of the sensor, which is generally between 40 and 50 kilohertz, and it can cause denial of service of the sensor. As illustrated in the right figure, on the sensor there is a transmitted pulse and the received echo pulse. If we generate ultrasonic noise at the jammer, this noise will be received by the sensor and will fully cover the original echoes. We have tested this attack in the laboratory on eight models of standalone sensors and also on four vehicles. For the indoor experiments, the right figure shows the received electrical signal at a sensor. When there is no jamming, you can see the excitation pulse and the following echo pulses; that is how it normally works. When there is a weak jamming signal, the noise floor is increased, and as we keep increasing it, under strong jamming the noise fully hides the original echoes, so no measurement is possible.

So what about the sensors' readings? Basically we get two very opposite types of results. The first is zero distance, which means the sensor detects something very close. The other is maximum distance, which means the sensor cannot detect anything. So how should cars behave under a jamming attack, zero distance or maximum distance? If it is zero distance, the car detects something and will stop. But if it is maximum distance, the car cannot detect anything and will not stop but keep moving. Obviously, zero distance is the fail-safe option for vehicles, right? However, according to our
experiments on cars, the result is unfortunately maximum distance. Let me show you a video that demonstrates it. This is an ultrasonic sensor on an Audi Q3, and this is an ultrasonic jammer wired to a computer. On the screen of the car you can see that the jammer has been detected as an obstacle, displayed as a white bar, and we read the data from the OBD: it says the distance is 28 centimeters. Now let's turn on the jammer. The obstacle disappears, and the distance reads maximum. So in conclusion, the jamming attack can force an output of maximum distance, and it can hide obstacles.

Let me summarize the results of the jamming attack. On standalone ultrasonic sensors we get zero distance for some sensors and maximum distance for others, and on cars with parking systems the result is maximum distance. Interestingly, the manual of the Tesla Model S says that if a sensor is unable to provide feedback, the instrument panel will display an alert message. However, we have never seen this alert message. Another question is how the car will behave when, as in self-parking or Summon, the car actually drives itself based on these false sensor readings. So let me show you a video of this attack on Tesla Summon. As you can see, there's nobody in the car; this is me standing in front of the car holding an ultrasonic jammer, and now Jianhao turns on Tesla Summon. Normally the car will not move, because I have been detected, right? However, when we jam the sensor, it moves and hits me. That hurts. In conclusion, the jamming attack can also hide obstacles when the car is driving by itself.

You might ask: the distance is only like 20 centimeters, can it be longer? Of course, because we can increase the voltage level of the jammer. The Arduino outputs at 5 volts; if we output at 20 volts with a function generator, we can increase the attack distance. In this video there's a man standing behind the Tesla. This is not me, this is another brave man in our lab; his name is Wei Bing. This is more dangerous. Now the jammer is off, I turn on Tesla Summon, and you can see the car starts reversing. However, it will not move further, because the man has been detected. Now we turn on the function generator to turn on the jammer. Watch closely. We turn on Tesla Summon again. Well, it moves. And it hit the man.
And then it hit the jammer. The car only stops because the jammer has been hit and stopped working. Thank you. So the attack distance of jamming can be increased if you have the budget, right? Let me summarize the results of the jamming attack on self-parking and Summon: under jamming the car does not stop, and with strong jamming it might hit someone or something.

There's another question: why do some sensors output zero distance and some output maximum distance? We believe it is because of different sensor designs. For zero distance, the sensor compares the signal with a fixed threshold: if the voltage level of the signal exceeds the threshold, it believes there is a valid echo. The jamming signal actually raises the voltage level, so the sensor thinks there is an echo right after the pulse is emitted, hence zero distance. For maximum distance, we studied the sensor on the Audi Q3: we broke it, probed it and reverse-engineered the schematic, but we didn't find any useful information, because it is an application-specific IC, so all the signal processing happens inside the chip. To make it easier, we studied another sensor, the MaxSonar MB1200, which also outputs maximum distance. We basically had to destroy the transducer on top of it and expose the circuits. This is how it works when there is no jamming: the white line is the time of flight, and the blue line shows the echoes. You can see the excitation pulse and the echo pulses, and if you watch closely, the time of flight exactly matches the first echo pulse. When there is weak jamming, you can see the noise floor is increased, but the measurement is still correct. However, when there is strong jamming, the signal is totally overwhelmed by noise, and it seems there is no echo, so the sensor outputs maximum distance. We believe it uses an adaptive threshold for noise suppression. The designers definitely had good intentions, but they didn't consider malicious scenarios.

The second attack is the spoofing attack. The basic idea is to inject
ultrasonic pulses at a certain time so as to fool the sensor. For example, if we craft a fake pulse right before the first original one, we can spoof the propagation time and thereby manipulate the distance. But this attack is non-trivial, because only the first justifiable echo will be processed. So there is an effective time slot, right after the transmitted pulse and before the first echo pulse, and you have to inject within this slot for the attack to succeed. By changing the arrival time of the fake echo, we can manipulate the sensor readings, right?

This video demonstrates altering the distance. This is the spoofing device connected to a computer. You can see it has been detected as an obstacle at a distance of 66 centimeters. Now it starts spoofing. Wow, the distance has been altered; it jumps around. If you look outside the vehicle, nothing is moving, and on the instrument panel the spoofing is still going on. In conclusion, the spoofing attack can alter distance. And this is a demo of the spoofing attack on the Audi; in this video we just randomly altered the distance. At first nothing is in front of the car. I assure you the jumping bars are not a volume indicator for the music. So the spoofing attack can alter distance on the Audi as well. To summarize, the spoofing attack can manipulate sensor readings on some standalone sensors and on cars, so that we can make the car stop where it shouldn't.

The third type of attack is acoustic quieting. One method is acoustic cancellation, which means we cancel the original pulse with a sound of reverse phase, so that when they add up there is no echo at all. From our experiments, we observed that by means of phase and amplitude adjustment we are able to cancel ultrasound, but if you want to cancel the ultrasound from a car, you're going to need dedicated hardware. An easier way to do this is cloaking, which means we absorb the ultrasound with some kind of sound-absorbing material, like acoustic damping foam, which is very cheap. It has the same effect as jamming in that it can hide obstacles. This is how we cloak a car. We drive toward this lovely car, and you can see it has been detected and displayed as the red bars on the screen. Now we apply the acoustic damping foam. Well, it disappears. We drive closer to the car, still nothing. Now we remove the damping foam, and it reappears. In conclusion, cloaking can hide a car. What about a human, can cloaking also hide a human? We tried this. This is me walking behind the car, and you can see I have been detected by the sensor. But now, if I wear the damping foam, I'm invisible. Still nothing. Well, can you think of a new way to wear this foam? Here we go: this is a damping-foam scarf. It also works, so cloaking can hide a human. If you want a car, a human, or glass to be invisible, just buy this. By the way, behind the glass door is my advisor's office. So this
is what happens when you uh let your students do all the work. I'm sorry. So the the third type uh so the the second type of attack is on the millimeter wave readers. So we have tested this attack on Tesla Model S because we don't have uh the the other
sweetcars don't have a reader on it. So uh M&W reader, it measures distance, angle, speed and shape uh etcetera from from long, short to long distance. Uh it is useful for some high speed and critical applications like adaptive cruise control,
collision violence and blind spot detection. So how can we misuse readers? It is similar. So uh when there's uh you're driving on highway and there's danger ahead of you and you want to stop but the car if you do something to the to the reader that the car
does not stop where it should, and that could cause a serious accident. And if there is danger behind you and you want to steer away from it, but the radar tells you there is something ahead so you have to stop, that would be terrible too. So let me walk you through how a radar works. A radar transmits and receives electromagnetic waves and measures the propagation time. It is similar to an ultrasonic sensor, except that the signal is RF. When we are dealing with RF, it is difficult to measure the time directly, because the signal travels at the speed of light, so we modulate the signal to make the measurement easier. One of the most popular modulation schemes is FMCW, which is a kind of frequency modulation. The Doppler effect can be used to measure the relative speed, and there are two major frequency bands, at 24 and 76 to 77 GHz. This is how frequency-modulated continuous wave works: basically, it is a sweeping-frequency signal, so the frequency varies with time. When the signal is transmitted, hits a target, and bounces back, we receive a similar, delayed signal. What we want is the reflection time, but measuring it directly is difficult, so instead we measure the difference frequency f_d and calculate the time from it, knowing the ramp slope. When the car is moving relative to the target, there will also be a Doppler frequency shift. Before doing the attacks, the first thing we have to do is understand the radar signal. We have to analyze the signal to find out the frequency range, the modulation process, the ramp height, the number and duration of the ramps, and the cycle time. After doing this, we know whether a jamming attack or a spoofing attack is feasible. This is kind of a family picture of all the equipment we used. Special thanks to Keysight OpenLab for providing us free access to this equipment, which is three times the price of a Tesla.
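To make the ranging math concrete, here is a minimal sketch of how an FMCW radar turns the measured difference frequency into a distance, and a Doppler shift into a relative speed. The ramp parameters below (a 450 MHz sweep in 1 ms) are illustrative assumptions, not the actual waveform of the Tesla's radar.

```python
C = 3e8  # speed of light, m/s

def fmcw_range(f_beat_hz, ramp_slope_hz_per_s):
    """Distance from the beat (difference) frequency and the ramp slope.

    The round-trip delay is t = f_beat / slope, and the one-way
    distance is d = c * t / 2.
    """
    t_round_trip = f_beat_hz / ramp_slope_hz_per_s
    return C * t_round_trip / 2.0

def doppler_speed(f_doppler_hz, carrier_hz=76.65e9):
    """Relative speed from the Doppler shift: v = f_d * c / (2 * f_c)."""
    return f_doppler_hz * C / (2.0 * carrier_hz)

# Illustrative numbers: a 450 MHz ramp swept in 1 ms
slope = 450e6 / 1e-3             # Hz per second
print(fmcw_range(150e3, slope))  # a 150 kHz beat -> ~50 m
```

The point of the modulation is visible here: a time delay too short to measure directly becomes a beat frequency that is easy to measure.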
I'm going to explain which ones we used later. Well, I forgot one thing: it doesn't have to be this expensive, because you can actually just buy a radar and modify it to be your own jammer. This is how we analyzed the signal. First, we receive the radar signal with a horn antenna, which is connected to a harmonic mixer, and analyze the signal in the frequency domain on the signal analyzer and in the time domain on the oscilloscope. Basically, what we found is that the radar transmits at 76.65 GHz, the bandwidth is 450 MHz, and the modulation is FMCW. We know all the details of the radar's waveform, but I'm not going to tell you, because we want to be responsible. The idea of the jamming attack is to jam the radar within the same frequency band, which is 76 to 77 GHz. We can jam at a fixed frequency, like this, or at a sweeping frequency, like this, which covers the whole band. The idea of the spoofing attack is to spoof the radar with a similar RF signal, something like this. Pretty straightforward. To generate the spoofing signal, we generate a signal with a signal generator at 12 GHz, multiply it up with a frequency multiplier, and transmit it with a horn antenna. Before showing you the results, let me introduce how the Autopilot display works. The blue
icons mean that Traffic-Aware Cruise Control and Autosteer are on, and the blue car means the car ahead of you has been detected and locked. We had to do the experiments with the car and the equipment stationary, because if the car were moving and our attack succeeded, the car might hit the equipment, and if I damage equipment worth three times the price of a Tesla, I won't be able to graduate. This is a demo of the jamming attack. In this video, I am standing in front of the Tesla controlling the interference equipment, as you can see from the camera of the mobile phone. The Autopilot is turned on, and the cart carrying the equipment has been detected as a blue car. At first the interference is turned off. When we turn the interference on, you can see that the blue car disappears; when we turn it off, it reappears. We kept trying this many, many times, and it works every time. So a jamming attack on the radar can hide obstacles, so that the car may not stop where it should. Let me summarize the results of the radar attacks. The jamming attack can hide obstacles, even ones that have already been detected, and both fixed-frequency and sweeping-frequency jamming work. With the spoofing attack, we can spoof the distance of the car ahead: basically, what we saw is that the displayed car jumps forward and backward.
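The distance spoofing works because the radar converts round-trip time into distance, so any extra delay injected into the echo shifts the reported distance. A sketch of that arithmetic, with made-up numbers (the delay, distances, and function here are illustrative, not what we actually used):

```python
C = 3e8  # speed of light, m/s

def apparent_distance(true_distance_m, extra_delay_s):
    """Distance the radar reports if the echo is delayed by extra_delay_s.

    The radar computes d = c * t / 2 from the round-trip time, so every
    extra nanosecond of delay pushes the target roughly 0.15 m away.
    """
    t_true = 2.0 * true_distance_m / C
    return C * (t_true + extra_delay_s) / 2.0

# Delaying the echo by ~67 ns makes a car at 10 m appear at ~20 m.
print(apparent_distance(10.0, 66.7e-9))
```

This is also why the spoofed car appears to jump: as the injected delay changes, the computed distance jumps with it.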
The third type of attack is on cameras. We tested standalone cameras from Mobileye and Point Grey, and we also tested a Tesla Model S, which has a Mobileye camera. A camera detects objects by computer vision. There are forward and backward cameras, used for lane departure warning, lane keeping, traffic sign recognition, and also for parking assistance. So how can cameras be misused? A camera is mainly used for steering: if the camera does not work, the car may not steer where it should, and there can be accidents. The attack we have on cameras is a blinding attack. There are three types of interference we used: an LED spot, a laser pointer, and an infrared LED spot, all of which are very cheap. And there are two scenarios: in one, we point the interference directly at the camera, and in the other, we point it at the calibration board and let it reflect back into the camera. This is the result of blinding with the LED. When the LED is pointed at the calibration board, there is only partial blinding, but when it faces the camera directly, there is total blinding. And this is the result when we use a laser beam; it is even more prominent. Either a fixed laser beam or a wobbling laser beam can cause total blinding. And there is something we didn't expect: permanent damage to the camera. You can see this black scar in the camera's image. We had to send the camera back to the vendor for repair, which cost us a lot of money, which I don't care about, because it is Jianhao's camera. This is a demo of blinding
the camera with a laser beam. This is the view from the camera. First we point the laser beam at the calibration board, and you can see the effect is not very strong. However, when we point the laser beam directly at the camera, there is this blurry white and blurry red, and you cannot see anything. You can imagine what will happen if a camera on a car is blinded like this. So a laser can blind the camera. We also tested the infrared LED; it doesn't work very well. And we tested blinding the cameras on the Tesla. The good news is that the Tesla actually gives you an alert message asking you to take over when the camera is jammed, which is kind of a relieving response. We have submitted our findings to Tesla and got their active response: they appreciate our work and are looking into the issues. Looking forward, how can we improve these sensors? To begin with, the sensors have to fail safe. For example, an ultrasonic sensor under attack may read either zero or maximum distance; it has to read zero distance, so that the car will stop instead of hitting something. The sensors should also be designed with an anomaly detection function. I believe at least the jamming attack is easy to detect, because there is an abnormally strong signal level. We should also increase the redundancy of the sensors, such as using multiple ultrasonic sensors to measure one distance.
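As a rough illustration of the anomaly-detection idea, here is a hypothetical check that flags a measurement cycle as jammed when the received level stays abnormally high for the whole cycle. The threshold, sample values, and function are all made up for illustration:

```python
def looks_jammed(samples, noise_floor, threshold_ratio=10.0):
    """Flag jamming when received power stays abnormally high.

    A continuous jammer raises the in-band power well above the noise
    floor for the whole measurement cycle, unlike a real echo, which is
    a short pulse. `threshold_ratio` is an illustrative tuning knob.
    """
    high = [s for s in samples if s > noise_floor * threshold_ratio]
    return len(high) > 0.9 * len(samples)  # high almost all the time

normal_cycle = [1.0, 1.2, 30.0, 1.1, 0.9, 1.0]   # one strong echo
jammed_cycle = [55.0, 60.0, 58.0, 61.0, 57.0, 59.0]
print(looks_jammed(normal_cycle, noise_floor=1.0))  # -> False
print(looks_jammed(jammed_cycle, noise_floor=1.0))  # -> True
```

A real implementation would work on the receiver's power measurements per cycle, but the principle is this simple: a legitimate echo is brief, while jamming energy is continuous.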
We can also use different types of sensors to cross-check each other. And in the system that does the sensor data fusion, it is better if the trustworthiness of the sensors is evaluated, so that when the system does not have enough confidence in the sensor data, it stops the car from self-driving. That way it can fail safe, because safety is always more important than the convenience of driving.
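A hypothetical sketch of the redundancy and trustworthiness idea: fuse several independent readings of the same distance only when they agree, and otherwise refuse to trust the data so the system can fail safe. The function name and agreement threshold are illustrative assumptions:

```python
def fused_distance(readings_m, max_spread_m=0.5):
    """Cross-check redundant distance readings before trusting them.

    `readings_m` are distances to the same obstacle from independent
    sensors (e.g. two ultrasonic sensors plus radar). If they disagree
    by more than `max_spread_m`, return None: the system should fall
    back to a safe state (e.g. disable self-driving) rather than guess.
    """
    if not readings_m:
        return None
    if max(readings_m) - min(readings_m) > max_spread_m:
        return None  # sensors disagree -> do not trust, fail safe
    return sum(readings_m) / len(readings_m)

print(fused_distance([2.0, 2.1, 1.9]))   # consistent -> ~2.0
print(fused_distance([2.0, 2.1, 9.9]))   # one spoofed -> None
```

The design choice here is that disagreement yields "no answer" rather than an average, because averaging a spoofed reading into the result is exactly what an attacker would want.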
What's next? In the future, we hope to get the output of the sensors directly: instead of a black-box approach, we hope to read the sensor data and the actuator data. We also hope to carry out moving-vehicle experiments, to examine whether these attacks are feasible when the vehicle is moving on the road, to measure the maximum attack range and angle, and to see how we can improve the performance of these attacks. In conclusion, what I hope you take from this work is that attacking the existing sensors on cars is feasible. We have found many ways to fool the sensors; some attacks are easy, some are non-trivial. But the sky is not falling: it's not like someone on the road is going to attack all of your sensors tomorrow. For the manufacturers: the sensors should be designed with security in mind, and we should always think about intentional attacks, especially when a sensor is going to play such an important role in self-driving cars. For the customers: do not fully trust semi-autonomous cars yet; you always have to stay alert yourself. Will we have fully secure autonomous cars in the future? Let's wait and see. These are the people we'd like to thank; without their help, this work would not have been possible. And these are the colleagues who helped us in the experiments. If you want to know more details about this work, please check out our white paper, or just write us an email. Thank you. If you have questions, you can come up here and we'd be happy to answer.