Could Large-Scale IIoT Failures Be on the Horizon? New Study Warns IIoT Use Will Require Close Attention to Security
The rapid pace of change within the industrial internet of things will open up new risks for attacks and will require close attention to security, according to a new study.
The study, published earlier this month by the Lloyd’s Register Foundation – a charity funded by the centuries-old shipping register – says that so far, there have been no large-scale systemic failures or breakdowns. But as industries increasingly adopt IIoT sensors and systems, such incidents may occur.
The belief that current efforts to manage cyber risks are sufficient “is unlikely to hold true in the future as we develop the internet of things,” the report states.
Delivering Security to IIoT
The study is the product of a collaboration involving more than 110 cybersecurity experts. Three workshops were held to gather input in Singapore, Oxford and San Francisco between October 2019 and February 2020. The report centers on the use of the IIoT in critical infrastructure, including energy, transport, manufacturing and the “built environment,” such as cities.
The study warns that the “current pace of change in operational security capabilities will not match the fast emergence of new security risks in IIoT environments. At a conceptual level, existing security standards and guidelines are still relevant for the IIoT. At a practical level, however, the ability to deliver these capabilities, and the ways in which they must be delivered, are altered in the IIoT.”
Some capabilities don’t scale or aren’t interoperable. Others are not technically feasible, have not been tested or do not yet exist, the study says.
“As an added complication, gaps in some key capabilities have consequences for other risk controls,” it says. “There are widening gaps in skills and awareness. We are at a tipping point for recovery, as manual fall-back becomes infeasible for complicated systems-of-systems and mesh environments: The approach to recovery will need to change.”
Assume Failure Is Possible
There are plenty of examples of how IIoT already has been exploited. One is Stuxnet, the worm that tampered with programmable logic controllers within Iran’s uranium refinement centrifuges. Israel and the U.S. reportedly developed the worm to hamper Iran’s nuclear weapons program.
In December 2015, Russian hackers were suspected of the so-called BlackEnergy attacks against Ukraine’s power grid. The malware opened up access to energy company networks, allowing attackers to open circuit breakers, which cut the power. No one was injured, but the attacks caused alarm over the security of IIoT (see: Ukrainian Power Grid: Hacked).
More recently, the Ekans ransomware targeting industrial control systems emerged (see: How Ekans Ransomware Targets Industrial Control Systems).
To illustrate the risks around IIoT, the study cites a report from CyberX, an IoT security company acquired last month by Microsoft. CyberX conducted a survey last year of the maintenance and patching processes for more than 1,800 production networks (see: Microsoft's CyberX Acquisition: Securing IoT and OT). Some 71% were using either unsupported or soon-to-be unsupported versions of Microsoft Windows, such as Windows 7, and 62% were running Windows 2000 or XP.
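The kind of end-of-support exposure the CyberX survey measured can be checked against an asset inventory. Below is a minimal illustrative sketch (not from the report or CyberX's tooling): the end-of-support dates are public Microsoft lifecycle facts, but the inventory format and function names are hypothetical.

```python
from datetime import date

# End of extended support for the Windows versions named in the survey
# (public Microsoft lifecycle dates).
EOL_DATES = {
    "Windows 2000": date(2010, 7, 13),
    "Windows XP": date(2014, 4, 8),
    "Windows 7": date(2020, 1, 14),
    "Windows 10": date(2025, 10, 14),
}

def unsupported_hosts(inventory, today):
    """Return hosts whose OS has passed its end-of-support date."""
    return [host for host, os_name in inventory.items()
            if EOL_DATES.get(os_name, date.max) < today]

# Hypothetical OT asset inventory, for illustration only.
inventory = {
    "hmi-01": "Windows XP",
    "historian-02": "Windows 7",
    "eng-ws-03": "Windows 10",
}
print(unsupported_hosts(inventory, date(2020, 7, 1)))
# → ['hmi-01', 'historian-02']
```

In practice this check would run against a real asset database, but the principle is the same: unsupported operating systems on production networks are identifiable, countable risk.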
The CyberX report advocates a key principle when planning for IIoT risk: Assume the system is going to fail at some point. It also recommends planning for the possibility of insider attacks within systems and supply chains.
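The "assume failure" principle can be made concrete in control logic. The sketch below is purely illustrative (not from the CyberX report); all names are hypothetical. It shows a control step that retries a flaky sensor read a bounded number of times, then commands a known-safe state rather than acting on stale data.

```python
def safe_control_step(read_sensor, actuate, fail_safe, retries=3):
    """One control-loop iteration that assumes failure is possible.

    Tries to read the sensor up to `retries` times; on success, acts on
    the fresh value. If every read fails, drives the actuator to a
    known-safe state instead of reusing old data.
    """
    for _ in range(retries):
        try:
            value = read_sensor()
        except IOError:
            continue  # transient failure: retry
        actuate(value)
        return "nominal"
    fail_safe()  # assume failure: revert to a known-safe state
    return "fail-safe"

# Illustrative usage with stand-in callables.
def dead_sensor():
    raise IOError("no response")

print(safe_control_step(dead_sensor, print, lambda: print("valves closed")))
# → prints "valves closed", then "fail-safe"
```

The design choice here is that the default outcome of uncertainty is the safe state, not continued operation, which is the behavior the report's planning principle implies.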
The Lloyd’s Register Foundation study highlights useful guidance on IIoT security, including standards, risk practices and frameworks, such as: the Industrial Internet Consortium’s IoT Security Maturity Model, ENISA’s good practices for smart manufacturing, the IoT Security Institute’s Smart Cities & Critical Infrastructure Framework and NIST’s IoT Device Cybersecurity Capability Core Baseline.