Cybersecurity Vulnerabilities in Audiovisual Control Systems and Protocols

By Paul Konikowski, CTS-D

This article was originally published in Commercial Integrator on August 6, 2019.

The Stuxnet and Target attacks squashed many of the myths surrounding industrial control system security.

Wait, you don’t know about Stuxnet, and the Target attack?

  • The Stuxnet Worm, first discovered in 2010, was responsible for damaging approximately one-fifth of the centrifuges in Iran’s nuclear program. The complex malware exploited four zero-day vulnerabilities and eventually targeted specific programmable logic controllers (PLCs) and supervisory control and data acquisition (SCADA) computers.
  • In 2013, Target’s in-store point-of-sale devices were infected with memory-scraping malware, which allowed hackers to capture the data stored on the magnetic stripes of credit/debit cards as they were swiped. The hackers infiltrated Target’s internal systems using the stolen credentials of a third-party HVAC contractor, obtained through an email phishing attack, and from there gained access to Target’s payment systems.

Myths Surrounding Industrial Control System Security:

Myth #1 – Isolating control systems networks keeps them safe from malicious attacks: FALSE.

In the Stuxnet attack, the SCADA systems and PLCs that controlled the Iranian nuclear centrifuges were air-gapped, theoretically isolating them from the public internet. It is widely believed that the Stuxnet worm was implanted using admin credentials and USB thumb drives on the internal air-gapped computers.

Myth #2 – Highly specialized controllers and industrial control networks are safe from attack due to “security through obscurity”: FALSE.

The Target attackers used custom malware that was unrecognizable by standard anti-virus software, and exploited a previously unknown flaw (a.k.a. zero-day vulnerability) in what was then traditional retailer point-of-sale encryption.

The Stuxnet worm was constructed to look for specific PLCs, software, and industrial devices. If the worm infected a host computer and did not find the specific systems, it would move on to the next machine, erasing any evidence of itself upon departure.

Myth #3 – Firewalls, Intrusion Detection Systems (IDSs), and Intrusion Prevention Systems (IPSs) effectively protect control system networks from attack: FALSE.

ICS networks are subject to man-in-the-middle attacks, where the hackers capture control system protocol messages that are sent in plaintext in TCP packets.

Once the attackers figure out what is going on, which can take some time and/or knowledge of the industrial process, they can replace the intended plaintext commands with malicious commands that can cause damage.
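To see why plaintext protocols are so easy to tamper with, consider this minimal sketch of the in-flight rewrite described above. The command strings are hypothetical, not any real vendor’s protocol:

```python
# Sketch of a man-in-the-middle altering a plaintext control command.
# The command strings here are illustrative, not any vendor's actual protocol.

def tamper(payload: bytes) -> bytes:
    """Replace a benign plaintext command with a malicious one in flight."""
    # An attacker who has learned the protocol can rewrite commands
    # before forwarding them on to the controller.
    return payload.replace(b"SET SPEED 1064", b"SET SPEED 1410")

captured = b"SET SPEED 1064\r\n"   # intercepted TCP payload, sent in the clear
forwarded = tamper(captured)       # what the controller actually receives
print(forwarded)                   # b'SET SPEED 1410\r\n'
```

Because the altered packet is still a well-formed command from an authorized host, nothing downstream flags it as malicious.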

The firewalls and IDS/IPS systems don’t recognize the altered packets as malicious, because they appear to originate from an authorized host and look like typical control system protocols. Target’s security team received alerts from their Intrusion Detection System, but basically ignored them.

Design with Least Privilege & Least Route

To minimize the chances and impact of an attack on a control system, AV programmers and system designers should employ the principles of Least Privilege (or Use) and Least Route for better industrial control system security.

Least Privilege means that users or services should only be given the minimum access required to do their job. Similarly, the principle of Least Route states that a device should only possess the minimum level of network access that is required for its individual function.

Instead of placing AV devices on open, general-purpose local area networks, AV devices should be placed on their own dedicated, purpose-built networks.

The network switches and routers should be configured using access control lists (ACLs) so that only specific, authorized devices can send traffic inside or outside of the network. VLANs are not an effective security measure.
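The logic an ACL applies can be sketched in a few lines. This is a simplified model, not any switch vendor’s syntax, and the addresses and ports are illustrative (4352 is the PJLink display-control port, used here only as an example):

```python
# Minimal sketch of access-control-list logic as a switch or router might
# apply it. Addresses and ports are hypothetical examples.

ALLOWED = {
    ("10.10.20.5", "10.10.20.10", 23),    # control processor -> switcher (telnet)
    ("10.10.20.5", "10.10.20.11", 4352),  # control processor -> display (PJLink)
}

def permit(src: str, dst: str, port: int) -> bool:
    """Implicit deny: traffic passes only if it matches an explicit rule."""
    return (src, dst, port) in ALLOWED

print(permit("10.10.20.5", "10.10.20.10", 23))    # True - authorized pairing
print(permit("192.168.1.99", "10.10.20.10", 23))  # False - unknown host is dropped
```

The key design choice is the implicit deny at the end: anything not explicitly authorized is dropped, which is exactly the Least Route posture described above.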

Air-gapping AV control systems certainly makes them safer, but not completely, and many AV devices are now constantly monitored by internal support staff and/or external AV integrators.

This does not mean that everything needs to be on the same local area network. Many AV controllers have dual network interfaces: the AV devices can be placed on a dedicated control subnet, while the controller communicates with supervisory systems through its second network interface, via a firewall.

This is a good example of Least Route. Technically speaking, network traffic can traverse the two network interfaces, but it is difficult.


If you enjoyed this article, you might like these related posts:

Danger! Logic Bombs in Audiovisual Control Systems

My 3-Tiered Approach to Networked AV Security

Design Principles For Secure AV Systems

Identifying Cyber Attacks, Risks, Vulnerabilities in AV Installations

5 Steps to Better Cyber Risk Management

The Best Data Breach Incident Response Plans Require These Steps

 

Danger! Logic Bombs in Audiovisual Control Systems

So-called Logic Bombs haven’t quite found their way into audiovisual control systems yet, but just wait… The unprepared will suffer.

This article was originally published in Commercial Integrator on July 9, 2019

Logic bombs are a form of malicious code whose effects are purposefully delayed by design. The name “logic bomb” stems from the classic ticking bomb imagery often depicted in James Bond-type movies. The logic bomb initially goes unnoticed during its dormant phase, and is triggered by elapsed time, a specific date, or some combination of inputs. Logic bombs are common in computer malware, but haven’t been reported in audiovisual control systems, so you have to use your imagination a little.

Here are a few examples:

  • A logic bomb could be programmed into an AV control system, so that after a projection screen is lowered and raised 100 times, the logic bomb is triggered, and the AV system no longer functions properly.
  • A logic bomb could be set so that it is triggered on a specific date some time in the future. The AV system works fine until July 1, 2020, and then suddenly, it stops working, even if the AV system is rebooted.
  • A logic bomb could also be triggered by a certain combination of inputs.

Let’s say you have a 4-way divide/combine space that is typically separated into 4 rooms: A, B, C, and D. The system is tested and works when the rooms are fully separated or combined into one. But when you try to combine the rooms into A&B and C&D, it suddenly stops working.

Any permutation of the three examples above could also be combined, making the logic bomb harder to detect.
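The examples above can be sketched in a few lines of deliberately harmless code. The point is how ordinary the code looks: the trigger (a usage counter plus a date, mirroring the first two examples) hides inside a routine that appears to do its job. The function name and thresholds are hypothetical:

```python
from datetime import date

# Harmless illustration of a counter-and-date logic bomb hidden inside
# ordinary-looking control code. Names and thresholds are hypothetical.

TRIGGER_CYCLES = 100
TRIGGER_DATE = date(2020, 7, 1)
screen_cycles = 0

def lower_and_raise_screen(today: date) -> str:
    """Looks like a normal screen routine, but carries a dormant payload."""
    global screen_cycles
    screen_cycles += 1
    # Dormant until a trigger fires: too many cycles, or the chosen date.
    # In a real attack the counter would live in persistent storage, so a
    # reboot would not reset it.
    if screen_cycles > TRIGGER_CYCLES or today >= TRIGGER_DATE:
        return "FAULT"   # payload: the AV system silently stops working
    return "OK"

print(lower_and_raise_screen(date(2020, 6, 30)))  # OK (early cycle, before date)
print(lower_and_raise_screen(date(2020, 7, 1)))   # FAULT (date trigger fires)
```

A reviewer skimming the code sees a screen routine with a conditional; only a careful read of the condition reveals the bomb.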

Logic bombs in computer systems are often triggered by a certain login. Imagine if every time a particular CEO used a video conference system, it recorded and/or streamed the call to a hidden endpoint.

Who on Earth would do this?

The answer is: anyone with malicious intent.

An external hacker who has infiltrated a business network could replace the audiovisual control system with similar code that includes a logic bomb, which could open a back door for them at a later date, and/or forward logins and other valuable information out through the firewall.

This would make the security breach harder to attribute to a specific IP address or individual.

Another scenario might be a malicious internal attacker. Perhaps an on-prem AV support technician asked for a raise and did not get it.

Once they found a new job elsewhere, the jaded individual could replace the AV control system code with one that included a logic bomb. The logic bomb could be set to go off during a big annual meeting, or gather valuable information that could then be sold to a company’s competitors.

AV integrators could also implement logic bombs to generate unnecessary service calls.

Most AV systems are warrantied for the first year. After a year, the client has an option to continue the service plan on an annual basis. If they don’t have a service plan, the customer has to pay for each service call that is placed.

The best defense against logic bombs is password protection that limits access to the audiovisual control system code. Any device on the LAN should also be locked down using access controls on network switches.

Customers should also demand uncompiled copies of the final AV control system code, and watch the AV integrator upload that code to the AV system at the very end of the project, so they know there are no logic bombs.
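One way a customer could verify that the code running on the processor matches the reviewed copy, beyond watching the upload, is to compare cryptographic hashes of the two files. A minimal sketch; the code snippets here are hypothetical placeholders for real source files:

```python
import hashlib

# Sketch of verifying that the code loaded onto the AV processor matches
# the uncompiled source the client was handed: compare SHA-256 hashes.
# The byte strings stand in for the contents of the two source files.

def sha256_of(data: bytes) -> str:
    """Return the hex SHA-256 digest of a file's contents."""
    return hashlib.sha256(data).hexdigest()

delivered = b"PROGRAM_OUTPUT ScreenRelay = PulseUp\n"  # copy the client received
uploaded  = b"PROGRAM_OUTPUT ScreenRelay = PulseUp\n"  # copy read back for upload

if sha256_of(delivered) == sha256_of(uploaded):
    print("match: uploaded code is the reviewed code")
else:
    print("MISMATCH: uploaded code differs from the reviewed copy")
```

Even a one-character change to the source (say, an added trigger condition) produces a completely different digest, so a mismatch is an immediate red flag.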



 

Attack of the USB Killers: Coming to Your Clients’ Classrooms

What are USB Killers, and what does their existence say about the security behind your classroom/higher ed tech installations?

This article was originally published in Commercial Integrator on May 6, 2019.

Last month, a former student of the College of St. Rose in New York pled guilty to destroying “66 computers as well as numerous monitors and digital podiums containing USB data ports owned by the College.” The damage was done using a “USB Killer” device that discharged high voltage pulses into the host device, physically damaging the host’s electrical system.

According to the court documents, the total losses from the incident were $58,471. A quick Google search shows that these “USB Killer” devices are readily available on websites like eBay for around $40.

Details of the “digital podiums” were not released, but any AV integrator who has done work in higher education institutions could probably guess they were lecterns or teaching stations outfitted with room computers, portable laptop connections, confidence monitors, control touch panels, media switchers, and/or playback devices.

The “numerous monitors” in the court documents could have been simple computer monitors, or larger wall-mounted flat panel displays often used for small-group collaboration.

Motive? Doesn’t Matter

The motives of the attacker are unclear, and in the end, are essentially irrelevant. What is relevant is that the same thing could easily happen at another university, K-12 school, company, or house of worship.

Security experts have shown that USB drives and cables can be built to perform HID attacks, launch command shells, download malicious payloads, and/or modify the DNS settings to redirect traffic.

But more importantly, any USB memory device (a.k.a. USB stick or thumb-drive) could contain files that are infected with malware.

One penetration tester that I spoke to said he often drops off a handful of infected USB drives at hospitals and medical buildings.

The USB drives appear to be harmless freebies, and eventually an employee uses one, opens the file, and the test payload is delivered.

He said that the USB drive attack vector is not as effective as email phishing campaigns, but it is still part of his testing.

When I first shared the College of St. Rose story, many #AVTweeps commented that little could be done:

“It’s hard to protect against physical attacks. If you do block the USB port or somehow protect it from electrical discharge, the attacker could smash it with a hammer.” – Leonard C. Suskin (@Czhorat)

“Without an option to disable the port completely for both data and power transfer, there is little anyone could do in this instance. With physical access, all bets are off…” – Kevin (@kevin_maltby)

What Can Be Done About USB Killers

I agree that if someone is truly intent on causing damage, they will find a way, but I think there are still some things that can be done to minimize the impact and likelihood of a USB-based attack.

First, make sure that all members of your organization have signed a computer usage policy, and formally agree to not destroy computer hardware.

Next, consider housing all computers in locked data closets, and always lock classroom podiums and AV credenzas to minimize access.

Use card-keys or biometric scanners to allow limited access to server rooms, and add IP cameras to these rooms so you can prove who actually did the deed. This is called attribution, and is often a challenge in cybersecurity.

USB attacks should also be outlined in your cyber-awareness training, so that everyone knows to not use random USB drives or charging cables they find.

Last but not least, you should have an incident response plan that anticipates USB attacks, and communicate that plan, so everyone knows what to do in case of a “USB Killer” attack. It may seem unlikely, but it’s certainly possible, and it is best to be prepared for it.
