- MadRadar Makes Cars Hallucinate
Duke University researchers recently developed "MadRadar," a black-box attack system capable of spoofing automotive radars without prior knowledge of their operating parameters.
As more vehicles move toward autonomous driving, radar plays an important role in providing perception in adverse conditions. These strengths, however, open the door to malicious attacks using spoofing hardware such as MadRadar.
Photo: Duke University
The Duke University team, led by Dr. Miroslav Pajic and Dr. Tingjun Chen, hopes that exposing the radar flaws MadRadar exploits will allow OEMs to develop more secure autonomous vehicles.
Automotive radar sensors use lower-frequency EM waves rather than light, allowing them to see in low-light conditions and other situations where cameras struggle. Radar works much like a flash camera: it emits an EM signal (the "flash," in the camera analogy) and measures the reflected signals to learn about the sensing environment.
Photo: IEEE Signal Processing Magazine
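In an FMCW automotive radar, the "flash" is a frequency-swept chirp, and a target's distance falls out of the beat frequency between the transmitted and reflected chirps. The sketch below illustrates that relationship; the bandwidth and chirp-time values are illustrative assumptions, not figures from the Duke study.

```python
# Minimal sketch of FMCW radar ranging with assumed, illustrative
# parameters (a 4 GHz chirp swept over 40 microseconds).
C = 3e8             # speed of light, m/s
BANDWIDTH = 4e9     # chirp bandwidth B, Hz (assumed)
CHIRP_TIME = 40e-6  # chirp duration Tc, s (assumed)

def range_from_beat(beat_freq_hz: float) -> float:
    """The round-trip delay of the echo produces a beat frequency
    f_b = 2*R*S/c, where S = B/Tc is the chirp slope; invert it
    to recover the range R."""
    slope = BANDWIDTH / CHIRP_TIME
    return beat_freq_hz * C / (2 * slope)

# With these parameters, a 2 MHz beat tone maps to a target 3 m away.
print(round(range_from_beat(2e6), 2))  # → 3.0
```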
This active sensing, while certainly useful for measuring distances and velocities, opens the door to spoofing attacks, in which malicious actors confuse the radar by injecting fabricated signals to add or remove targets. In one example, the Duke University researchers discuss how a fake oncoming car can cause an autonomous driving system to deviate from the road, allowing attackers to steal the vulnerable vehicle.
The MadRadar system can attack radar systems in three ways: false positive (FP), false negative (FN), and translation attacks. In each case, MadRadar first profiles the victim radar by receiving its transmitted signals and estimating the bandwidth, chirp time, and frame time.
Photo: ArXiv Preprint
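The profiling step described above can be pictured as passive timing estimation. The toy sketch below infers a frame period from observed chirp-start timestamps; the timestamps and the median-gap rule are illustrative assumptions, not the paper's estimator.

```python
# Hedged sketch of the attack's profiling step: inferring the
# victim radar's frame timing from passively observed chirp-start
# times. Data and method here are illustrative assumptions.
def estimate_frame_time(chirp_starts):
    """Estimate the frame period as the median gap between
    consecutive observed start times (robust to small jitter)."""
    gaps = sorted(b - a for a, b in zip(chirp_starts, chirp_starts[1:]))
    return gaps[len(gaps) // 2]

# Starts observed roughly every 50 ms, with jitter (seconds):
starts = [0.000, 0.0501, 0.0999, 0.1502, 0.2000]
print(estimate_frame_time(starts))  # → 0.0501
```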
FP attacks simulate the echo of a car when none is present. They are carried out by precisely attenuating and phase-delaying the injected signal to "confuse" the victim radar into believing an object is within its range. Automotive OEMs have long been aware of these attacks and have thoroughly tested against them in security test environments.
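The timing side of such an FP attack reduces to a simple calculation: to place a phantom target at a chosen range, the injected signal must arrive with the round-trip delay a genuine echo would have. The function below is a hedged sketch of that arithmetic only, not of MadRadar's transmit chain.

```python
# Hedged sketch: the injection delay needed to fake a target at a
# given range equals the round-trip time of a real echo.
C = 3e8  # speed of light, m/s

def spoof_delay(fake_range_m: float) -> float:
    """Delay (seconds) that makes the victim radar perceive a
    target at fake_range_m."""
    return 2 * fake_range_m / C

# A phantom car 45 m ahead requires a 300 ns injection delay.
print(spoof_delay(45.0))  # → 3e-07
```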
FN attacks, on the other hand, are a novel contribution of the Duke study. They exploit the CA-CFAR (cell-averaging constant false alarm rate) detection technique that automotive radars use to suppress noise and clutter. By injecting energy that mimics a broad target with no discernible peak, MadRadar raises the radar's adaptive detection threshold, tricking the victim radar into thinking there is no target when there is.
Photo: ArXiv Preprint
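A toy 1-D cell-averaging CFAR detector makes the FN mechanism concrete: the detection threshold tracks the average of nearby "training" cells, so spreading injected energy across those cells lifts the threshold above a genuine peak. The guard/training sizes and scale factor below are assumed for illustration; this sketches the general CA-CFAR principle, not the paper's implementation.

```python
# Simplified 1-D CA-CFAR detector (assumed parameters).
def ca_cfar(signal, guard=2, train=8, scale=3.0):
    """Flag cells that exceed `scale` times the local noise level,
    estimated from training cells outside a guard band."""
    detections = []
    for i in range(len(signal)):
        lo = max(0, i - guard - train)
        hi = min(len(signal), i + guard + train + 1)
        train_cells = [signal[j] for j in range(lo, hi)
                       if abs(j - i) > guard]
        noise = sum(train_cells) / len(train_cells)
        if signal[i] > scale * noise:
            detections.append(i)
    return detections

clean = [1.0] * 30
clean[15] = 10.0                   # genuine target peak
print(ca_cfar(clean))              # → [15]  (peak detected)

jammed = [v + 5.0 for v in clean]  # broad injected energy floor
print(ca_cfar(jammed))             # → []  (threshold rises, peak hidden)
```

The second call shows the FN effect: the peak is still physically present, but the raised noise estimate pushes the adaptive threshold above it.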
Finally, translation attacks make a real object appear to move in a way it is not. For example, an oncoming car driving normally may appear to be drifting into the driver's lane. From the victim vehicle's perspective, this may justify evasive maneuvers that endanger both the driver and pedestrians.
Duke University researchers hope that their findings will assist OEMs in strengthening automotive radar security against MadRadar and similar attacks. For example, they propose that dynamic frequency hopping can randomize the radar's operating point, preventing systems like MadRadar from locking on to and predicting the radar's response.
Photo: ArXiv Preprint
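The proposed countermeasure can be sketched as re-randomizing the chirp parameters every frame, so a profile built from the previous frame is already stale. The parameter ranges below are illustrative assumptions for the 76–77 GHz automotive band, not values from the study.

```python
# Hedged sketch of dynamic frequency hopping: each frame draws a
# fresh operating point, so an attacker's estimate goes stale.
import random

def next_chirp_params(rng: random.Random):
    """Pick a random start frequency, bandwidth, and chirp time
    for the next frame (ranges are illustrative assumptions)."""
    start_freq = rng.uniform(76e9, 77e9)    # hop within the band
    bandwidth = rng.uniform(1e9, 4e9)       # vary sweep bandwidth
    chirp_time = rng.uniform(20e-6, 60e-6)  # vary chirp duration
    return start_freq, bandwidth, chirp_time

rng = random.Random(0)
# A spoofer that profiled frame 0 injects with the wrong
# frequency/slope in frames 1 and 2.
for frame in range(3):
    f0, bw, tc = next_chirp_params(rng)
    print(f"frame {frame}: f0={f0/1e9:.3f} GHz, B={bw/1e9:.2f} GHz")
```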
Written By
Anis
Previously in banking and e-commerce before she realized nothing makes her happier than a revving engine and gleaming tyres.