In the previous two posts (Part I and Part II, available on the original blog), I talked about the lives lost on the roads, how improved communications and IT can help save lives, and the path to deployment for that technology. In this post I'm going to address security.
Security Innovation (and NTRU Cryptosystems, which was acquired by SI in July 2009) has been active in this area since 2003; since then, I've been the editor of IEEE 1609.2, which specifies the security processing for these communications. We've done a lot of work analyzing exactly what the security requirements are. In the next few posts in this series I'm going to go through some of the design decisions we made. This should be useful in two ways:
1. When you come across news stories about the system (which will become more and more common over the next eighteen months), you'll have some independent background that might help clarify any claims about the system's security, bearing in mind that security is a topic that high-level news stories frequently get wrong.
If the system is flooded with fake messages, what happens? To an extent, this depends on whether and how the fake messages affect the driver. Different carmakers will implement different alert systems within their cars, and all carmakers are keenly aware of the importance of reducing driver distraction, so they will work as hard as they can to suppress false alerts. Nevertheless, if fake messages can be accepted, they'll lead to false alerts, and the easier it is to send fake messages, the more false alerts will be raised. This might cause one of three problems:
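The basic defense against fake messages is that receivers only act on messages that carry a valid cryptographic signature. Below is a toy sketch of that accept/reject decision. Note the assumptions: IEEE 1609.2 actually uses ECDSA signatures with certificates, not the shared-key HMAC used here, and the message format and field names are made up for illustration.

```python
import hmac
import hashlib

# Toy shared key -- a stand-in only. Real 1609.2 devices hold private
# signing keys and certificates issued by a certificate authority.
SHARED_KEY = b"demo-key-not-real"

def sign(payload: bytes) -> bytes:
    """Produce an authentication tag for an outgoing safety message."""
    return hmac.new(SHARED_KEY, payload, hashlib.sha256).digest()

def accept(payload: bytes, tag: bytes) -> bool:
    """Receiver-side check: only messages with a valid tag raise alerts."""
    return hmac.compare_digest(sign(payload), tag)

# A legitimate sender signs its (hypothetical) basic safety message.
msg = b"BSM: speed=31, heading=90"
tag = sign(msg)
print(accept(msg, tag))                               # genuine message accepted

# An attacker without the key can't forge a tag for an altered message.
forged = b"BSM: speed=99, heading=90"
print(accept(forged, tag))                            # fake message rejected
```

The point of the sketch is the shape of the check, not the mechanism: however the signature is produced, a receiver that verifies before alerting turns "anyone can trigger a false alert" into "only holders of valid credentials can."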