Friday, January 10, 2014

HackRF Present and Future

At CSAW THREADS in November I gave a talk about the present and future of the HackRF project (video). I reviewed the new HackRF One design, and then I showed all sorts of different things that people are already doing with HackRF Jawbreaker. It's pretty exciting to see all the applications that people are coming up with!

Saturday, January 04, 2014

The Amp Hour #177

I joined Dave and Chris again on Episode 177 of The Amp Hour. Happy Festivus!

Saturday, November 16, 2013

Multiplexed Wired Attack Surfaces

Kyle Osborn and I presented Multiplexed Wired Attack Surfaces at ToorCon 15. This was the second time we gave the talk. The first was at Black Hat USA 2013, but the ToorCon video was posted first.

The basic idea is that connectors on electronic devices are often used in unexpected ways and that some devices, especially phones and tablets, even multiplex several functions onto a single connector. We demonstrated how we are able to access an interactive shell on certain Android phones by connecting a special serial adapter to the phone's USB port; although we were physically connected to the phone via the USB port, we were not using USB.
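To make the idea concrete, here's a hypothetical sketch (not the actual tooling from the talk): once a suitable adapter exposes the phone's multiplexed UART as an ordinary serial port on the host, the console can be driven like any other serial device. The device path, baud rate, and property name below are assumptions, not details from the talk.

    import serial

    # open the serial port provided by the adapter (the path and baud
    # rate are assumptions; the real values depend on adapter and phone)
    console = serial.Serial('/dev/ttyUSB0', 115200, timeout=1)

    console.write(b'\r\n')          # nudge the console to print a prompt
    print(console.read(256))

    # on some devices, adb can then be switched on from this shell; the
    # property name here is one example and varies by device
    console.write(b'setprop persist.service.adb.enable 1\r\n')
    print(console.read(256))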

Similar multiplexed interfaces are present on a wide variety of portable devices, often accessible via USB or headphone connectors. An excellent example using a headphone jack was published earlier this year. We hope that our talk will raise awareness about the attack surfaces presented by these types of interfaces.

The talk at ToorCon was a lot of fun. We got a shell and activated adb on a phone handed to us by a volunteer from the audience. I hope you enjoy the video, but you should also read the paper we wrote for Black Hat.

We've posted links to several resources related to the talk.

Wednesday, October 30, 2013

Unintended Acceleration, Software, and Sadness

A few years ago I became concerned about reports of sudden unintended acceleration in Toyota vehicles, especially when some of my family members started driving new Toyotas. At first I was skeptical of the reports, but they kept coming. In time, a friend of a friend had a terrible accident, and I was only two trustworthy people removed from a firsthand experience.

I started paying more attention to the reports, and I developed a very strong suspicion that software was to blame. Three simple facts led me to this suspicion:

  1. The engine throttle was controlled by software.
  2. The brakes were controlled by software.
  3. Nobody knows how to make software without bugs.

The first fact surprised me a little. The second surprised me a lot. The third is common knowledge to anyone who has ever developed software, but it may be surprising to those who haven't.

When I first learned how to program in BASIC as a child, I was taught that computers don't make errors; people do. If you write a perfect program, the computer will do exactly what you expect. This is a tantalizingly optimistic view, and it helped me challenge myself to become a better programmer. Unfortunately it is not true.

During those early years my programs were small and simple. Sometimes I could write an entire program in a single page of text. It seemed very possible that programs could be perfectly correct, but somehow they never were. There was always a bug, and almost every time the bug was my own fault. As time went on, I started working on larger and larger software projects, and it became clear to me, as it should to any software developer, that the likelihood of bugs increases when software complexity increases.

This is a Big Problem. It is so big that many of the greatest minds in computer science have devoted their lives to it. Some very interesting progress has been made, but it is still largely an unsolved problem in real-world systems. It is hard to create a program that correctly implements a specification. It is hard to create a correct specification. It is hard to implement a programming language correctly. It is hard to build correct interfaces to other programs. It is hard to build computers that reliably execute programs correctly, especially in environments with high levels of electrical noise like the engine compartment of a car.

By the way, I often use "is hard" to mean "might be impossible".

Not all software is equally buggy, of course. It is possible to create computer systems that are more reliable than others (consider the hardware and software on spacecraft, for example), but it is difficult to do. It is a very different problem from building reliable mechanical systems.

The problem of software bugs is probably the biggest reason that computer security is so awful. We don't know how to make software without bugs, and bugs tend to undermine security. This is why people in the information security community seem to understand and expect bugs more than most other people; we spend our lives discovering, analyzing, exploiting, and fixing bugs. We find bugs that others miss. We break things that are supposedly unbreakable.

To me, the unintended acceleration reports smelled like buggy software from the very beginning. Few of the reports were identical, but all of them involved the inability of the driver to influence a computer that controls the engine throttle.

Some of the reports agreed on a particular point: Pressing harder on the brake pedal did nothing. This is terrifying to imagine. Your car accelerates rapidly even while your foot is on the brake pedal. You press harder and harder until the pedal is at the floor. Maybe you have time to switch off the ignition or shift into neutral, but how long would it take you to think of that? It might take only a second of unintended acceleration to cause a fatal accident.

At first Toyota denied the problem. Then they recalled floor mats. At the time, I thought that was a pretty stupid response to what seemed like a software bug. Then they recalled pedals. Then they blamed the drivers. They repeatedly said that they couldn't recreate the problem when testing the software (but any software developer knows that an inability to reproduce an error rarely means that a bug doesn't exist).

I started wondering: Have any information security professionals audited the software? Has anyone actually skilled at finding bugs looked for bugs? As far as I could determine, the only people who had tested the software were automotive engineers employed by Toyota. Automotive engineers might not know anything about finding bugs, but they should at least know something about fail-safe design.

To me, the most troubling part of the whole thing was that the brakes and all fail-safe mechanisms were also under computer control. Really? You would make a car with software throttle and also give it software brakes? Don't you know that an automobile is a lethal weapon? Have you never seen software fail? How about a traditional brake system just in case, even if it is only activated when the brake pedal is fully depressed? How about a mechanical linkage that limits the throttle when the driver slams on the brakes?

I can't imagine any engineering culture within Toyota that would fail to consider such things unless it is simply a case of automotive engineers putting too much trust in software because they don't understand software failures. Maybe they tested the things ten thousand times, unaware that they should have tested ten trillion different conditions.

As I became more and more convinced that a software bug was to blame and that nobody was properly looking for it, I started planning a blog post. I considered trying to reverse engineer a car. Even better, perhaps I could convince someone more skilled than me to try to find the bug.

Then the unexpected happened: Tin whiskers were implicated as a cause of unintended acceleration in Toyota vehicles. I had convinced myself that software must be to blame, but suddenly a seemingly plausible alternative arose. I understood tin whiskers well enough to believe that they could explain at least a portion of the failures, yet tin whiskers were just mysterious enough that I didn't question whether or not they might explain all of the failures.

Then I failed. I stopped paying attention after I heard about the tin whiskers. I didn't consider the likelihood of software bugs vs. failures due to tin whiskers. I didn't follow through on making recommendations for mechanical fail-safes (which could prevent fatal accidents regardless of the root cause of the problem). I didn't notice when Toyota denied that tin whiskers caused unintended acceleration. I never went back and reviewed the notably weak software analysis results of the NASA report that first implicated tin whiskers. I ignored the fact that the United States government stopped investigating the problem.

This week I read that a court of law found Toyota's faulty software to blame in a case of unintended acceleration. A software audit for the plaintiff revealed that coding standards for safety-critical software were not followed and that the software is buggy and incredibly complex. The audit even identified a particular failure mode in which a driver could press harder on the brake pedal with no effect, which is as close to a "smoking gun" as we could hope to see. The case clearly indicates negligent software development and deployment practices on the part of Toyota.

This shouldn't have happened if the automotive engineers were appropriately skeptical of software. This shouldn't have happened if the executives were appropriately skeptical of software. This shouldn't have happened if the software engineers were appropriately skeptical of software.

At the very least, the software engineers should have known better. If I were developing software that could kill someone in an error condition, I would feel a moral obligation to tell people about the potential for error. However, as everyone in the information security community knows, developers tend to overestimate the quality of their own code, and very few software developers are skilled bug hunters.

Unfortunately the software source code still has not been made available to the public. We have to trust the analysis of the plaintiff's expert witness (or trust Toyota) to understand how the software works. The details from the expert witness that have been reported, however, seem very credible to me. The jury found in favor of the plaintiff, so Toyota failed to effectively argue against the analysis.

I'm pretty confident in agreeing with the analysis, but it would be nice to be able to verify. If the software were open source, that would be possible. In fact, if the software were open source, others could have done the same analysis years ago and likely would have been able to fix bugs and save lives. How many people will have to die before we decide that open source is as important for safety as seat belts?

I am deeply sad for the people who died in automobile accidents for years before Toyota's negligence was revealed, I am sad for the people who will die in future accidents, and I am sad and ashamed that I never followed through on my own suspicions about the bugs at the heart of the problem.

Saturday, October 26, 2013

Appearance on The Amp Hour

In case further evidence is needed to demonstrate that I am a terrible blogger, I submit this: I had a wonderful time as a guest on Episode 161 of The Amp Hour and forgot to mention it here for nearly eight weeks! We discussed HackRF, Daisho, Ubertooth, wireless security, and how I came to the world of open source hardware from a background in information security. Thanks for having me on the show, Chris and Dave!

Wednesday, July 31, 2013

HackRF is on Kickstarter

I launched HackRF on Kickstarter today. I hope you'll take a look, pledge, and spread the word. Thanks for your support!

Sunday, June 23, 2013

HackRF LEGO Car

In the Hacker Lounge at Open Source Bridge last week, the well-stocked LEGO table caught my eye. In particular, I spotted an antenna protruding from the pile, and I followed it down to a radio-controlled LEGO car platform! I quickly located the controller, replaced a dead battery, and found that the car worked pretty well.

The controller was clearly marked with a sticker indicating operation at 27 MHz (FCC ID: NPI71646). That would have been my first guess anyway, as 27 MHz is a very popular frequency for radio-controlled toys. Since several of us were having a HackRF party, I decided to see if I could control the car with my HackRF Jawbreaker.

After verifying that the car worked with the original controller, I recorded several waveforms with the hackrf_transfer utility. I made eight separate recordings, one for each active controller state: forward, backward, left, right, forward/left, forward/right, backward/left, and backward/right.
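For illustration, capturing one such recording might look something like this (these aren't the exact commands I used; the sample rate, capture length, and file name are my choices here):

    import subprocess

    # capture roughly two seconds of the "forward" state into a file of
    # interleaved 8-bit signed I/Q samples; repeat for each of the eight
    # controller states
    subprocess.call(['hackrf_transfer',
                     '-r', 'forward.iq',    # receive into this file
                     '-f', '27145000',      # center frequency in Hz
                     '-s', '8000000',       # sample rate in Hz
                     '-n', '16000000'])     # number of samples to capture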

The recordings were quite clean even though 27 MHz is below Jawbreaker's official operating range (30 MHz to 6000 MHz). In fact, I had captured some apparently good recordings of NFC transactions at 13.56 MHz just the day before. The major drop-off in performance I've observed on Jawbreakers I've tested has been just below 10 MHz.

For my first HackRF transmission, I built a small flowgraph in GNU Radio Companion to replay the captured waveforms one at a time with my Jawbreaker. With the car's controller switched off, I was able to make the car move with a simple replay! The best waveform worked at a distance of up to 20 meters even though I put very little effort into cleaning up the waveform or adjusting the power level.
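The flowgraph itself was simple. As a rough, untested sketch of what the generated Python for such a replay might look like (using the gr-osmosdr API of the GNU Radio 3.7 era; the sample rate, gain, and file name are my assumptions):

    from gnuradio import gr, blocks
    import osmosdr

    class replay(gr.top_block):
        def __init__(self, filename):
            gr.top_block.__init__(self)
            # hackrf_transfer recordings are interleaved 8-bit signed
            # I/Q, so convert them to the complex floats GNU Radio uses
            src = blocks.file_source(gr.sizeof_char, filename, False)
            to_complex = blocks.interleaved_char_to_complex()
            scale = blocks.multiply_const_vcc((1.0 / 127,))
            sink = osmosdr.sink(args='hackrf=0')
            sink.set_sample_rate(8e6)
            sink.set_center_freq(27.145e6)
            sink.set_gain(14)
            self.connect(src, to_complex, scale, sink)

    replay('forward.iq').run()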

Although I didn't have much time left before I had to catch my flight home, I wanted to see if I could synthesize control transmissions in software on my laptop instead of replaying captured waveforms (which included received noise and minor defects such as quantization error and DC offset).

The first step toward synthesizing control transmissions was to analyze the captured waveforms. I found that each transmission consisted of a series of pulses at 27.145 MHz. The pulses were all at the same power level. Each pulse lasted one of two durations and was followed by a pause of consistent length. It looked like On-Off Keying (OOK) with data encoded in the number of consecutive short pulses.

Each transmission featured a repeated pattern of four long pulses (each 1.875 ms long, separated by 0.625 ms pauses) followed by some number of short pulses (each 0.625 ms long, separated by 0.625 ms pauses). The repeated pattern continued for as long as the controller was held in a particular state. The number of consecutive short pulses depended on the state of the controller (a rough sketch of how such pulse counts can be measured follows the list):

  • forward: 10 short pulses
  • forward/left: 28 short pulses
  • forward/right: 34 short pulses
  • backward: 40 short pulses
  • backward/left: 52 short pulses
  • backward/right: 46 short pulses
  • left: 58 short pulses
  • right: 64 short pulses
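Here's that measurement sketch, an after-the-fact illustration rather than the code I actually used (the file name and sample rate are assumptions): take the magnitude of the complex samples, threshold it into on/off runs, and measure the run lengths.

    import numpy

    SAMPLE_RATE = 8e6   # assumed capture rate in samples per second

    # load interleaved 8-bit signed I/Q and compute the signal magnitude
    data = numpy.fromfile('forward.iq', dtype=numpy.int8)
    iq = (data[0::2].astype(numpy.float32) +
          1j * data[1::2].astype(numpy.float32))
    magnitude = numpy.abs(iq)

    # crude on/off decision: threshold at half of the peak magnitude
    on = magnitude > (magnitude.max() / 2)

    # indices where the signal switches between off and on, and the
    # durations of the runs between those switches, in milliseconds
    edges = numpy.flatnonzero(numpy.diff(on.astype(numpy.int8)))
    durations = numpy.diff(edges) / SAMPLE_RATE * 1000

    # if the recording starts in the "off" state, even-numbered runs
    # are pulses and odd-numbered runs are pauses
    pulses = durations[0::2]
    print('short pulses: %d' % numpy.sum(pulses < 1.0))
    print('long pulses: %d' % numpy.sum(pulses >= 1.0))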

That's as far as I got.
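The next step would have been to generate those frames directly from the timing parameters above. Purely as an illustration (untested, and not part of what I actually finished; the sample rate, repeat count, and file name are my assumptions), synthesizing a clean baseband waveform might look like this:

    import numpy

    SAMPLE_RATE = 8e6    # samples per second (assumed)
    SHORT = 0.625e-3     # short pulse and pause duration in seconds
    LONG = 1.875e-3      # long pulse duration in seconds

    # number of short pulses for each controller state, from the list above
    STATES = {'forward': 10, 'forward/left': 28, 'forward/right': 34,
              'backward': 40, 'backward/left': 52, 'backward/right': 46,
              'left': 58, 'right': 64}

    def run_of(duration, level):
        # a constant-amplitude stretch of complex baseband samples
        n = int(duration * SAMPLE_RATE)
        return level * numpy.ones(n, dtype=numpy.complex64)

    def frame(num_short):
        parts = []
        for _ in range(4):            # four long pulses, each with a pause
            parts += [run_of(LONG, 1.0), run_of(SHORT, 0.0)]
        for _ in range(num_short):    # then the short pulses
            parts += [run_of(SHORT, 1.0), run_of(SHORT, 0.0)]
        return numpy.concatenate(parts)

    # repeat the frame as the controller would while a stick is held
    baseband = numpy.tile(frame(STATES['forward']), 20)

    # pack into interleaved 8-bit signed I/Q for hackrf_transfer -t
    out = numpy.empty(2 * len(baseband), dtype=numpy.int8)
    out[0::2] = (baseband.real * 127).astype(numpy.int8)
    out[1::2] = (baseband.imag * 127).astype(numpy.int8)
    out.tofile('forward-synth.iq')

Transmitting that file at 27.145 MHz with hackrf_transfer should reproduce the "forward" command with none of the noise and defects of a captured waveform, but I ran out of time before I could try it.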