The Autonomous

secunet Report of the Workshop "Safety & Security" @ The Autonomous congress

The workshop "Safety & Security" dealt with the question of how the security of highly automated or autonomous vehicles can be ensured in the future. Often, the focus is on the functional safety of the self-driving capabilities; however, the IT security of the car and of its communication with the outside world is the essential foundation, and it cannot be treated half-heartedly.

Since self-driving cars require a multitude of incoming and outgoing communication channels to the infrastructure, e.g. to server backends or to other vehicles (Car2X), to exchange sensor data or driving data, the attack surface of the car's IT has increased significantly. Moreover, many of the communication channels and devices inside the car require sophisticated software, which adds to the complexity of the technologies involved.

Finally, to ensure the IT security of all parts now and in the future, it is mandatory to have secure software updates and crypto-agility strategies that can keep pace with the computing capabilities of the coming decades. In general, secure data exchange among communication partners, whether between different ECUs within a car or between different cars, requires some or all of the following security goals to be met:

  • data exchange must be encrypted ("confidentiality")
  • data must not be changed ("integrity")
  • the sender or recipient of the data needs to authenticate itself ("authenticity")
  • data must be readily available, when required ("availability")
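Two of these goals, integrity and authenticity, can be illustrated with a message authentication code. The sketch below is illustrative only (the function names and the message format are assumptions, not taken from any vehicle protocol); confidentiality would additionally require encryption, e.g. AES-GCM, which is omitted here.

```python
import hashlib
import hmac
import os

# Sketch: protecting a vehicle message with HMAC-SHA256.
# This covers integrity and authenticity only; confidentiality
# would additionally require encryption (e.g. AES-GCM).

TAG_LEN = 32  # SHA-256 digest size in bytes

def protect(message: bytes, key: bytes) -> bytes:
    """Append an authentication tag to the message."""
    tag = hmac.new(key, message, hashlib.sha256).digest()
    return message + tag

def verify(frame: bytes, key: bytes) -> bytes:
    """Check the tag; return the message or raise on tampering."""
    message, tag = frame[:-TAG_LEN], frame[-TAG_LEN:]
    expected = hmac.new(key, message, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):
        raise ValueError("integrity/authenticity check failed")
    return message

key = os.urandom(32)  # shared symmetric key, e.g. negotiated via a PKI
frame = protect(b"speed=87;brake=0", key)
assert verify(frame, key) == b"speed=87;brake=0"
```

Any modification of the frame in transit changes the expected tag, so the receiver can detect tampering; `hmac.compare_digest` avoids timing side channels in the comparison.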

Together, they form the basis for any trusted communication. The workshop addressed these security needs in three presentations from suppliers to the autonomous driving world, discussing how to

  • Manage trust
  • Implement trust
  • Validate trust

In the first presentation, Nils Abeling from secunet gave an overview of how to establish and manage trust among the participants of a network using a public-key infrastructure (PKI).

First presentation

The idea behind this concept is that a hierarchy of authorities issues certificates to important participants in the network. These certificates contain the public key of a public-private key pair and can be used for cryptographic functions such as signing and verification, encryption and decryption, and the negotiation of symmetric keys (key agreement algorithms). The certificate thereby fills the remaining gap in a trust network by ensuring that the relation between an entity (e.g. a person or a device) and a public key can be trusted.

After being issued in a precisely defined process by a higher authority, a certificate goes through a lifecycle from its publication in a public listing to its expiration, when the validity period comes to an end. The registration and the revocation of certificates are managed by dedicated components of the PKI.

It was emphasized that the specification of the PKI system, i.e. the technical details and algorithms, is not sufficient to ensure trust in the operation of the network. It is equally important that the requirements and processes of its operation are precisely defined in Certificate Policy (CP) and Certification Practice Statement (CPS) documents. The CP publishes generic requirements, e.g. on rights and roles or key renewal procedures, so that every user can assess the security level of the PKI. The CPS, in contrast, documents in detail how the requirements of the CP are met, and is therefore generally kept confidential. Finally, it was underlined that it is essential to understand the use cases, the environment and the technology of the network before a PKI is defined, in order to achieve high cybersecurity by design.

Figure 1: Understanding the use cases, environment, processes and technology is essential to successfully manage trust.
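The chain-of-trust idea can be sketched in a toy model. All class and field names below are illustrative assumptions, and cryptographic signature verification is mocked out entirely; a real PKI validates X.509 certificates and their signatures (e.g. ECDSA), handles registration, and distributes revocation information.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

# Toy model of certificate-chain validation. Field names are
# illustrative, and signature verification is mocked; a real PKI
# would verify X.509 signatures cryptographically.

@dataclass
class Certificate:
    subject: str
    issuer: str
    not_before: datetime
    not_after: datetime

def validate_chain(chain, trusted_roots, revoked, now=None):
    """Walk from end-entity to root, checking validity period,
    issuer linkage and revocation status."""
    now = now or datetime.now(timezone.utc)
    for cert, issuer in zip(chain, chain[1:] + [None]):
        if not (cert.not_before <= now <= cert.not_after):
            return False, f"{cert.subject}: outside validity period"
        if cert.subject in revoked:
            return False, f"{cert.subject}: revoked"
        if issuer is None:                      # top of the chain
            if cert.subject not in trusted_roots:
                return False, f"{cert.subject}: untrusted root"
        elif cert.issuer != issuer.subject:
            return False, f"{cert.subject}: issuer mismatch"
    return True, "chain valid"

valid = (datetime(2020, 1, 1, tzinfo=timezone.utc),
         datetime(2040, 1, 1, tzinfo=timezone.utc))
root = Certificate("Root CA", "Root CA", *valid)
sub = Certificate("Vehicle CA", "Root CA", *valid)
ecu = Certificate("ECU-4711", "Vehicle CA", *valid)

ok, reason = validate_chain([ecu, sub, root], {"Root CA"}, set())
assert ok
```

Revoking the end-entity certificate, e.g. `revoked={"ECU-4711"}`, makes the same chain fail validation, which mirrors the lifecycle management described above.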

Second presentation

Once the fundamental trust structure of a communication network has been defined by a PKI, its cybersecurity still has to be assessed on a technical level. In the second presentation, Dr. Jörg Schepers from Infineon showed how various attack vectors on different levels of the software and hardware require different countermeasures.

Since the attacks range from high-level attacks on the software down to the physical deconstruction of a piece of hardware, it is necessary to embed security for each application from the beginning in both hardware and software. Examples include multiple parallel channels on Cyber Security Satellites (CSSs) and Real-time Modules, which are hardened to act as a trusted secure hardware environment. It was emphasized that it is not necessary to implement maximum security for each and every device, not only because of performance limitations, but also because of high production costs. Embedded hardware security can provide a sound basis for a balanced security architecture with an appropriate security level for each device. This then has to be aligned with appropriate security measures in software, for example for secure software updates over the air.

Figure 2: Choosing the right level of embedded security for each application
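The idea of choosing an appropriate security level per asset can be sketched as a simple risk matrix. The scales, thresholds and level names below are assumptions made for illustration; they are not values taken from ISO/SAE 21434 or any other standard.

```python
# Illustrative risk matrix for choosing a security level per asset.
# Scales, thresholds and level names are assumptions for this sketch,
# not values from ISO/SAE 21434.

IMPACT = {"negligible": 1, "moderate": 2, "major": 3, "severe": 4}
LIKELIHOOD = {"very low": 1, "low": 2, "medium": 3, "high": 4}

def security_level(impact: str, likelihood: str) -> str:
    """Map an asset's risk (impact x likelihood) to a protection tier."""
    risk = IMPACT[impact] * LIKELIHOOD[likelihood]
    if risk >= 12:
        return "hardware-anchored security"   # e.g. HSM-protected keys
    if risk >= 6:
        return "hardened software security"   # e.g. secure boot, signed updates
    return "baseline software measures"

# A short-lived session key vs. the long-lived PKI identity key of a car:
assert security_level("moderate", "low") == "baseline software measures"
assert security_level("severe", "high") == "hardware-anchored security"
```

The point of such a mapping is exactly the balance described above: maximum security everywhere is neither affordable nor necessary, but each asset should land in a tier that matches its risk.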

Third presentation

In the third presentation, Nico Vinzenz from ZF explained how software and hardware can be tested and validated not only before production, but throughout their lifetime in the field.

The engineering process usually follows the V-model and begins with a threat analysis and risk assessment (TARA). After the requirements engineering is complete, a thorough testing plan is defined in order to scan for cybersecurity vulnerabilities. This ranges from functional testing and vulnerability scanning to penetration testing. The fundamental challenge is that the intended behavior of a function or piece of software can differ from the implemented behavior. And while it is comparatively easy to validate that a function executes as planned using positive test cases ("testing against the 'known'"), it is significantly harder to find behavior that is not as desired and could be exploited ("testing against the 'unknown'"). One important tool to tackle the latter is fuzz testing, where malformed or artificial input is fed to the software and the output is monitored, creating a feedback loop. This can be refined by taking parts of the source code into account. The results of fuzz testing can be used to improve the initial threat analysis or other aspects of the testing strategy, making it a very powerful method in a tester's toolbox.

Figure 3: The Cybersecurity Engineering Process using the V-model
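The fuzz-testing feedback loop can be sketched in a few lines. Coverage here is simulated by the target function itself, and the parser with its hidden bug is purely hypothetical; real fuzzers such as AFL or libFuzzer derive coverage from compiler instrumentation.

```python
import random

def fuzz(target, seed: bytes, iterations: int = 50000):
    """Mutate inputs; keep those that reach new coverage; collect crashes."""
    random.seed(0)                      # deterministic for this sketch
    corpus, seen, crashes = [seed], set(), []
    for _ in range(iterations):
        data = bytearray(random.choice(corpus))
        for _ in range(random.randint(1, 4)):      # flip a few bytes
            data[random.randrange(len(data))] = random.randrange(256)
        data = bytes(data)
        try:
            cov = target(data)          # target reports simulated coverage
        except Exception as exc:
            crashes.append((data, exc))
            continue
        if cov - seen:                  # new behavior -> keep this input
            seen |= cov
            corpus.append(data)
    return crashes

def parse(data: bytes) -> set:
    """Hypothetical parser with a hidden two-stage bug."""
    cov = set()
    if len(data) > 1 and data[0] > 0x80:
        cov.add("marker seen")
        if data[1] == data[0]:          # hidden crash condition
            raise ValueError("unhandled duplicated marker")
    return cov

crashes = fuzz(parse, b"\x00" * 8)
```

The feedback loop is what makes this more than random testing: inputs that reach the first stage of the bug are kept in the corpus, so mutations of them are far more likely to reach the second, crashing stage.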
FAQ

Jörg Schepers (Infineon): Secure software-over-the-air updates are already standard today. A powerful and flexible hardware architecture must provide sufficient headroom for future algorithms and key lengths to be implemented in the upcoming years in case of need.


Jörg Schepers (Infineon): Hardware cannot be updated in the field per se. FPGAs would allow for updates over the air but are extremely costly and limited in performance. Hence, future hardware requirements should be anticipated right from the start, and a flexible architecture should be defined in order to be able to react later with software updates on the algorithmic level.


Jörg Schepers (Infineon): An appropriate security level can be determined by following the ISO/SAE 21434 recommendations. Once the assets that have to be protected have been identified, a risk and vulnerability analysis can be performed. For example, a temporary key does not need the same protection level as a private key that is used in a PKI scheme to identify a car to the outside world. Based on that assessment, the overall security architecture can be defined to provide the right security level for each component.


Nico Vinzenz (ZF): It is important to define a migration concept when designing a PKI. This must contain the procedures for when and how the entire PKI can transition to a post-quantum secure PKI. The National Institute of Standards and Technology (NIST) is working on standardizing post-quantum algorithms and procedures, which can be implemented in the future.


Nico Vinzenz (ZF): A dedicated red team is suitable for this task. The red team mimics an attacker, but may use company-internal documentation to penetrate the product. If there is any finding, it is forwarded to the PSIRT (Product Security Incident Response Team).


Nico Vinzenz (ZF): I am in favor of such programs as they foster the responsible disclosure of security vulnerabilities by researchers and ethical hackers.


Contact request

You have questions or need consulting?
Write us a message and we will get back to you as soon as possible.