As explained in the first article of this series, the last two stages of the IC lifecycle, board assembly and board test, are owned and controlled by the original equipment manufacturer (OEM).
Figure 1 OEMs are responsible for securing the final two stages of the IC lifecycle: board assembly and board test. Source: Silicon Labs
While there are fewer OEM stages in a product’s lifecycle than there are during IC production, the security risks in each of these stages are similar to the challenges faced by silicon vendors and are equally consequential. The good news is that OEMs can build upon the security foundations established by their silicon vendors and reuse many of the same techniques.
Board assembly
Board assembly is much like the package assembly step in IC production; however, instead of putting a die inside a package, packages are mounted to a printed circuit board (PCB), which is then typically installed in some sort of enclosure. Physical and network security at the board assembly site is the first line of defense against attacks; however, this security can vary widely from contractor to contractor and tends to be poor due to cost pressures and the nature of board assembly work.
Figure 2 Board assembly is quite similar to package assembly in the IC production stage. Source: Silicon Labs
The most significant threats at this stage are theft, device analysis, and modification. Mitigation for these threats is described below.
Device theft for the purpose of resale as legitimate devices is the primary concern at this step. As with the package test stage in IC production, theft of any significant quantity is easily detectable by comparing the incoming and outgoing inventory of the board assembly site.
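The inventory comparison described above can be sketched as a simple reconciliation check. The function and field names, and the tolerance parameter, are illustrative and not from the article:

```python
# Hypothetical inventory reconciliation for a board assembly site.
def units_unaccounted_for(incoming: int, outgoing: int, scrapped: int) -> int:
    """Units received minus units shipped or documented as scrap."""
    return incoming - outgoing - scrapped

def theft_suspected(incoming: int, outgoing: int, scrapped: int,
                    tolerance: int = 0) -> bool:
    # Any shortfall beyond the agreed tolerance warrants investigation.
    return units_unaccounted_for(incoming, outgoing, scrapped) > tolerance

assert not theft_suspected(10_000, 9_990, 10)  # balanced lot
assert theft_suspected(10_000, 9_700, 10)      # 290 units missing
```

In practice the incoming and outgoing counts would come from the site's manufacturing execution system, but the principle is the same: any unexplained shortfall is a signal.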
The biggest risk for OEMs at this stage is an attacker obtaining a significant number of devices, modifying them, and then introducing the modified products to end-users. If the silicon vendor offers custom programming, an OEM can greatly mitigate this risk by ordering parts with secure boot enabled and configured. Secure boot will cause the IC to reject any modified software the attacker attempts to program.
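The secure-boot rejection described above can be sketched as follows. Real secure boot verifies an asymmetric signature (e.g. ECDSA) against a public key fused into the IC; a keyed digest stands in here so the sketch runs with only the standard library, and the key name is hypothetical:

```python
import hashlib
import hmac

# Stand-in for the vendor-provisioned root of trust fused into the IC.
ROOT_KEY = b"key-provisioned-by-silicon-vendor"

def sign_image(image: bytes) -> bytes:
    """OEM side: sign a firmware image (HMAC stands in for ECDSA)."""
    digest = hashlib.sha256(image).digest()
    return hmac.new(ROOT_KEY, digest, hashlib.sha256).digest()

def secure_boot(image: bytes, signature: bytes) -> bool:
    """Device side: boot only if the image's signature verifies."""
    digest = hashlib.sha256(image).digest()
    expected = hmac.new(ROOT_KEY, digest, hashlib.sha256).digest()
    return hmac.compare_digest(expected, signature)

firmware = b"legitimate OEM application"
sig = sign_image(firmware)
assert secure_boot(firmware, sig)                        # genuine image boots
assert not secure_boot(b"attacker-modified image", sig)  # modified image rejected
```

Because the attacker cannot produce a valid signature without the signing key, any software they program in place of the OEM's image fails this check and the device refuses to boot it.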
The potential for an attacker to obtain systems for analysis is greatly reduced in this step compared to the package assembly stage during IC production. Boards at this step typically don’t contain any useful information to be analyzed. If an attacker is interested in analyzing the hardware construction, they can easily obtain samples by buying the device. In addition, because the device has not yet been programmed, obtaining a device in this way does not afford the attacker the ability to access and analyze any device-specific software.
Covert modification of a PCB at scale is hard to achieve given the ease of detecting such modifications. OEMs can implement a simple sampling test in a trusted environment that visually inspects boards and compares them to known good samples to detect changes. Attacks which attempt to modify only a specific set of boards will evade such testing but are also harder to coordinate and implement.
Board test
The board test stage presents threats similar to those for package test during IC production. For example, it's common for test systems to be shared among multiple vendors, increasing the risk of security breaches or attacks from bad actors. However, at this step OEMs tend to work with an even greater diversity of contractors than silicon vendors do during IC manufacturing, which makes board test even more difficult to secure than package test.
Figure 3 Again, board test is quite similar to package test in the IC production stage. Source: Silicon Labs
Board test generally has poor physical and network security. It’s extremely common to share space and test hosts between products, and test systems may not be kept patched. Finally, the risk of exposing confidential data at final test depends on the implementation of the product and its final test process. If an IC has sufficient security capabilities, then a final test architecture that completely protects data from bad actors in the test environment is possible. Unfortunately, that topic is too complex to dig into in this article.
Malicious code injection
The simplest method of attack at board test is to modify the device’s software. Because secure boot enabling and application programming take place in the same board test step, there is concern that an attacker gaining full control of the test could inject malicious code and leave secure boot disabled. This risk can be mitigated by implementing sample testing or a dual insertion test flow.
In addition, if custom programming is available, then having the silicon vendor configure and enable secure boot will create a robust defense against malicious code injection. When a programming service is used in this way, it’s still important that board test verify that secure boot is correctly configured and enabled. Working together, the package and final test steps can validate each other such that an attacker would need to compromise both steps to alter the silicon vendor or OEM code.
It’s important to note that the strength of secure boot is reliant on keeping the private key a secret. It is highly recommended that signing keys be generated in a secure key store such as a hardware security module (HSM) and never exported. In addition, the ability to sign with keys should be highly restricted and ideally require authentication from at least two individuals to ensure that no individual actor can sign a malicious image.
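The two-person rule can be sketched as a simple quorum check. In practice the quorum would be enforced by the HSM or key-management service itself rather than by application code, and the signer roster here is hypothetical:

```python
# Illustrative roster of people authorized to approve signing operations.
AUTHORIZED_SIGNERS = {"alice", "bob", "carol"}

def release_signature(approvals: set, quorum: int = 2) -> bool:
    """Allow a signing operation only with approvals from at least
    `quorum` distinct authorized individuals."""
    valid = approvals & AUTHORIZED_SIGNERS
    return len(valid) >= quorum

assert not release_signature({"alice"})             # one approver: blocked
assert release_signature({"alice", "bob"})          # two approvers: allowed
assert not release_signature({"alice", "mallory"})  # unauthorized party ignored
```

The point of the quorum is that no single compromised or malicious insider can sign an image on their own.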
Since it is common for OEMs to inject credentials—cryptographic keys and certificates—in board test, attackers may seek to gain access to credentials or the key material they are based upon.
Secure provisioning of identity credentials has proven to be a particularly complex and nuanced problem. It involves the device’s capabilities, the contractor’s physical and network security, and the provisioning method’s design. It also presents unique challenges due to the scale and cost of manufacturing. In addition, as with all security, there is no way to confirm you haven’t missed some flaw in the system. Providing devices with identities is easy. Providing them with robust secure identities at an acceptable cost and enormous scale is incredibly difficult.
In well-designed systems where private keys never leave secure key storage, gaining access to the key material needed to forge credentials should not be possible. For example, in the implementation used by Silicon Labs, the private key used to generate device certificates is stored in a Trusted Platform Module (TPM) on a PC that is hardened against physical and logical attack and located in an access-restricted cage in the site’s data center. Further, those keys are restricted both in usage, applying to only a single production lot, and in time, being created only a few days before that lot is tested and deleted once the lot is complete. Finally, if such a key is compromised, the devices manufactured under it can have their credentials revoked, indicating they should no longer be trusted.
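The per-lot key restrictions can be modeled as a small record; the class and field names here are illustrative, not Silicon Labs' actual implementation:

```python
from dataclasses import dataclass
from datetime import date, timedelta

# Hypothetical record for a lot-scoped certificate-signing key.
@dataclass
class LotKey:
    lot_id: str
    created: date
    revoked: bool = False

    def expired(self, today: date, lifetime_days: int = 7) -> bool:
        # Keys exist only for the few days the lot is in test.
        return today > self.created + timedelta(days=lifetime_days)

def device_trusted(key: LotKey) -> bool:
    # Revoking the lot key marks every device made under it untrusted.
    return not key.revoked

key = LotKey("LOT-42", date(2024, 1, 1))
assert device_trusted(key)
key.revoked = True
assert not device_trusted(key)          # whole lot revoked at once
assert key.expired(date(2024, 1, 9))    # past its short lifetime
```

Scoping each key to one lot and a few days bounds the blast radius of any single compromise: only that lot's devices ever need to be revoked.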
Similarly, all devices that support secure key storage generate their private keys on-board, and those keys are never able to leave the secure key store. Devices that do not support secure key storage must have their keys injected and will be more vulnerable to an attacker on the test infrastructure accessing their private keys. To prevent certificates for low-security devices from being passed off as credentials for high-security devices, all certificates generated in manufacturing carry data indicating the strength of the storage used for their private keys.
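The storage-strength marking might be checked with a policy like the following. The field name and values are hypothetical stand-ins for what would in practice be an X.509 certificate extension:

```python
# Hypothetical certificate metadata; a dict stands in for a parsed
# X.509 certificate with a key-storage extension.
def trusted_for_high_security(cert: dict) -> bool:
    """Accept only keys generated on-device in a secure key store."""
    return cert.get("key_storage") == "secure_element"

hardware_cert = {"device_id": "A1", "key_storage": "secure_element"}
injected_cert = {"device_id": "B2", "key_storage": "injected"}

assert trusted_for_high_security(hardware_cert)
assert not trusted_for_high_security(injected_cert)
```

Because the marking is inside the signed certificate, an attacker cannot upgrade a low-security device's credential without breaking the certificate signature itself.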
OEMs should use test systems which are hardened against modification and restrict physical access. Physical security should be reviewed, and standard access controls and logging maintained. Finally, standard security practices for networks and PCs should be maintained. For example, test systems should not be allowed to have direct Internet connections and should not use communal login credentials. Periodic reviews should be conducted to ensure that any changes to these processes are noticed and reviewed.
These standard actions can prevent an attacker from gaining access to a test system in the first place. In addition to these practices, OEMs can consign dedicated testers to contract manufacturers (CMs), ensuring those systems aren’t shared with other vendors and further increasing physical and network security. Those systems can also be put through penetration testing to identify and fix vulnerabilities before they can be exploited.
Finally, higher-level keys stored in the OEM’s IT infrastructure need to be handled appropriately. They should be stored in a secret key store and have appropriate access restrictions. Their use should be monitored so that any unexpected operations can be identified, and the appropriate staff alerted.
For OEMs that don’t wish to set up their own credential provisioning infrastructure, there are silicon vendors that offer secure programming services. For example, Silicon Labs provides credentials in its catalog Vault-High products and can program credentials onto any custom parts ordered through its custom part manufacturing service (CPMS). These services transfer this burden from board test to the programming step performed by the silicon vendor.
Extraction of confidential information
When confidential information such as keys or proprietary algorithms is programmed as part of board test, there is a risk an attacker will obtain this information by compromising the tester. All the recommendations made for hardening the board test stage against identity extraction apply here as well. Similarly, using a programming service can transfer this risk from the board test stage to the package test stage.
With the right set of security features, it is possible to provision confidential information and protect it even if test systems are compromised. This provisioning requires a central, secured machine, as discussed above, and a device with a secure engine that can attest to the device’s state in a way that is outside the influence of the test system and is verifiable by the central machine. The board test will program the IC, enable secure boot, and lock the device.
The device then will attest to its state. If the tester was compromised and did not do what it was supposed to do, the central machine will detect it in the attested information. When the central machine knows the device is configured properly, it can perform a key exchange with the known-good application and then send the confidential information over that secured link. This process prevents the test system from being able to see or alter the confidential information.
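The attestation-gated release described above can be sketched as follows, assuming for simplicity a symmetric attestation key shared between the device and the central machine. Real implementations use asymmetric attestation and an authenticated key exchange (e.g. ECDH) for the final transfer; all names here are illustrative:

```python
import hashlib
import hmac
import os

# Illustrative; in practice this would be a device-unique attestation key.
DEVICE_ATTESTATION_KEY = os.urandom(32)

def attest(state: dict) -> bytes:
    """Device side: keyed digest of its configuration, produced by the
    secure engine outside the test system's influence."""
    msg = repr(sorted(state.items())).encode()
    return hmac.new(DEVICE_ATTESTATION_KEY, msg, hashlib.sha256).digest()

def release_secret(state: dict, report: bytes, secret: bytes) -> bytes:
    """Central machine: verify the attested state before releasing anything."""
    msg = repr(sorted(state.items())).encode()
    expected = hmac.new(DEVICE_ATTESTATION_KEY, msg, hashlib.sha256).digest()
    if not hmac.compare_digest(expected, report):
        raise ValueError("attestation mismatch: tester may be compromised")
    if not (state.get("secure_boot_enabled") and state.get("device_locked")):
        raise ValueError("device not in the expected configuration")
    # In practice the secret travels over an encrypted, key-exchanged link.
    return secret

good = {"secure_boot_enabled": True, "device_locked": True}
assert release_secret(good, attest(good), b"confidential-key") == b"confidential-key"
```

A tester that skipped a configuration step, or forged a report without the attestation key, fails one of the two checks and never sees the confidential payload.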
End-product security requires OEM diligence
When it comes to securing an end-product, OEMs face many of the same challenges as silicon vendors. While a well-designed product and robust physical and network security are the first layers of defense, OEMs can prevent the bulk of security attacks against their end-products by following many of the same steps and procedures practiced by their silicon vendors.
In addition, many silicon vendors provide services and capabilities that OEMs can use to reduce the effort and complexity of securing their manufacturing environment. Putting these techniques in place today will help ensure security for all their connected devices and the ecosystems in which they participate. Together, silicon vendors and OEMs can deliver on the promise of a secure Internet of Things (IoT).
Joshua Norem is a senior systems engineer at Silicon Labs.
Editor’s Note: This is the second and final part of the article series on OEM-specific security risks. Part 1 identified the threats at each step of the IC production lifecycle and described how to mitigate them.