Attack of the USB Killers: Coming to Your Clients’ Classrooms

What are USB Killers, and what does their existence say about the security behind your classroom/higher ed tech installations?

This article was originally published on Commercial Integrator on May 6, 2019

Last month, a former student of the College of St. Rose in New York pled guilty to destroying “66 computers as well as numerous monitors and digital podiums containing USB data ports owned by the College.” The damage was done using a “USB Killer” device that discharged high voltage pulses into the host device, physically damaging the host’s electrical system.

According to the court documents, the total losses due to the incident were 58,471 USD. A quick Google search shows that these “USB Killer” devices are readily available on websites like eBay for around 40 USD.

Details of the “digital podiums” were not released, but any AV integrator who has done work in higher education institutions could probably guess they were lecterns or teaching stations outfitted with room computers, portable laptop connections, confidence monitors, control touch panels, media switchers, and/or playback devices.

The “numerous monitors” in the court documents could have been simple computer monitors, or larger wall-mounted flat panel displays often used for small-group collaboration.

Motive? Doesn’t Matter

The motives of the attacker are unclear, and in the end, are essentially irrelevant. What is relevant is that the same thing could easily happen at another university, K-12 school, company, or house of worship.

Security experts have shown that USB drives and cables can be built to perform HID attacks, launch command shells, download malicious payloads, and/or modify the DNS settings to redirect traffic.

But more importantly, any USB memory device (a.k.a. USB stick or thumb-drive) could contain files that are infected with malware.

One penetration tester that I spoke to said he often drops off a handful of infected USB drives at hospitals and medical buildings.

The USB drives appear to be harmless freebies, and eventually an employee uses one, opens the file, and the test payload is delivered.

He said that the USB drive attack vector is not as effective as email phishing campaigns, but it is still part of his testing.

When I first shared the College of St. Rose story, many #AVTweeps commented that little could be done:

“It’s hard to protect against physical attacks. If you do block the USB port or somehow protect it from electrical discharge, the attacker could smash it with a hammer.” – Leonard C. Suskin (@Czhorat)

“Without an option to disable the port completely for both data and power transfer, there is little anyone could do in this instance. With physical access, all bets are off…” – Kevin (@kevin_maltby)

What Can Be Done About USB Killers

I agree that if someone is truly intent on causing damage, they will find a way, but I think there are still some things that can be done to minimize the impact and likelihood of a USB-based attack.

First, make sure that all members of your organization have signed a computer usage policy, and formally agree not to destroy computer hardware.

Next, consider locating all computers in locked data closets, and always lock classroom podiums and AV credenzas to minimize access.

Use card-keys or biometric scanners to allow limited access to server rooms, and add IP cameras to these rooms so you can prove who actually did the deed. This is called attribution, and is often a challenge in cybersecurity.

USB attacks should also be outlined in your cyber-awareness training, so that everyone knows to not use random USB drives or charging cables they find.
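Awareness training can also be paired with a technical control on managed machines. As one illustrative sketch (not something described in the original article), a Linux fleet can block USB mass storage entirely by shipping a modprobe configuration file; the Python below simply generates that file's contents, with the path and comment text assumed for the example:

```python
# Hardening sketch: generate a modprobe config that prevents the Linux
# usb-storage kernel module from loading, blocking USB thumb drives.
# The path and policy wording are illustrative; adapt to your fleet.

CONF_PATH = "/etc/modprobe.d/disable-usb-storage.conf"  # assumed location

def usb_storage_block_rules() -> str:
    """Return modprobe rules that stop usb-storage from loading."""
    return (
        "# Block USB mass storage (classroom hardening policy)\n"
        "install usb-storage /bin/true\n"  # runs /bin/true instead of loading
        "blacklist usb-storage\n"
    )

def write_rules(path: str = CONF_PATH) -> None:
    """Write the rules to disk (requires root on a real system)."""
    with open(path, "w") as f:
        f.write(usb_storage_block_rules())

if __name__ == "__main__":
    print(usb_storage_block_rules())
```

Note that this only stops the storage driver; it does nothing against an electrical "USB Killer" discharge, which is why the physical measures above still matter.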

Last but not least, you should have an incident response plan that anticipates USB attacks, and communicate that plan, so everyone knows what to do in case of a “USB Killer” attack. It may seem unlikely, but it’s certainly possible, and it is best to be prepared for it.


Hackathons and Cash for Hackers: What the AV Industry Needs

AV, IoT and automation manufacturers need to better understand zero-day vulnerabilities. Trade show hackathons and cash for hackers should be seriously considered.

This article was originally published on Commercial Integrator on April 1, 2019

In the AV industry, the concept of hackathons at trade shows, or actually paying a hacker to exploit a networked AV product or system, is usually relegated to a fun topic of discussion – but perhaps it should be taken far more seriously.

A zero-day vulnerability is a security flaw that the manufacturer, vendor, and end users do not know about, so no patch or fix exists for it yet. Often, they are not made aware of the vulnerability until it is exploited by hackers, which is called a zero-day exploit. Zero-day vulns are often considered a software topic, but AV/IoT devices, firmware, and control systems are also at risk.

The zero-day vulnerability timeline goes like this:

  1. Vulnerability is discovered by a black-hat hacker. (More on that term later.)
  2. Vulnerability is exploited, attack is launched, system is hacked, data integrity is breached.
  3. Vendor is made aware of the vulnerability. This day is considered “Day Zero.”
  4. Vendor works on a solution to the vulnerability, which takes some time.
  5. Vendor releases a security update and hopes end-users implement it.

Sometimes the vuln is discovered by a “white-hat” or “grey-hat” researcher, but it is hard to say whether these ethical hackers were the first to discover the problem, or whether the bad guys have already exploited the vulnerability and just have not been found out yet.

There are anomaly-based intrusion detection systems that can detect some unknown, zero-day exploits, but finding vulnerabilities in IoT and audiovisual devices often takes some smart humans kicking the tires and picking the locks, so to speak.

What do hackers do when they discover a zero-day vulnerability?

There are three typical tracks:

  1. Full Disclosure – release the details of the vulnerability to the public and vendor simultaneously. This forces the vendor to react quickly, but it also alerts the bad guys of the vulnerability.
  2. Responsible Disclosure – contact the vendor directly about the vulnerability, and give them time to release a patch to their end-users before fully disclosing the vulnerability to the public.
  3. Black Markets and Grey Markets – hackers and researchers sometimes sell their findings to vulnerability exploit brokers, who can then re-sell them to vendors, nation-states, or competitors. Oftentimes, the hacker does not know who the vulnerability buyer is or what their intentions are. Many are incentivized only by the money and the challenge of finding the bugs.

Some software vendors, online service providers, and research firms are offering big rewards for zero-day vulns.

Let’s Grow Up When It Comes to Cybersecurity

In this writer’s opinion, the AV industry should follow the example set by their big IT brothers and sisters; even if an AV company can’t pay out such large sums of money, there should be some sort of cash incentives for finding security vulnerabilities in AV systems. This could happen on three levels:

  1. At the manufacturer level – offer bug bounties for white-hat hackers who report vulnerabilities.
  2. At the integration level – set up knowledge bases of custom code and configurations, and reward other programmers, engineers, and technicians who can find any vulnerabilities in the systems.
  3. At the user level – reward any employee who raises a security concern about a device or process.

There have been some recent online discussions about setting up “hackathons” at AV-industry trade shows. I think this is a fantastic idea to encourage AV security, reward hackers, and spread awareness.

-Paul Konikowski, CTS-D

If you enjoyed this article, you might like these related posts on PKaudiovisual:

Design Principles For Secure AV Systems

Identifying Cyber Attacks, Risks, Vulnerabilities in AV Installations

5 Steps to Better Cyber Risk Management

The Best Data Breach Incident Response Plans Require These Steps

Design Principles for Secure AV Systems

Secure AV systems start with smart design. Here are some standards that’ve been around forever but easily apply to modern audiovisual projects.

This article was originally written by Paul Konikowski, and published on Commercial Integrator on March 1, 2019

In my last CI article, we reviewed cyber threats and vulnerabilities in AV systems. Many of the known vulnerabilities, or “vulns,” can be fixed with a firmware upgrade, by securing your network, and/or by enabling passwords; but what else can AV manufacturers, consultants, and integrators do to achieve secure AV systems?

One thing that can be done is to adopt a secure mindset from the get-go when designing secure AV systems, keeping the following design principles in mind.

These principles were outlined by Jerome H. Saltzer and Michael D. Schroeder in an IEEE paper way back in 1975. We will apply those secure design principles to AV systems here.

Economy of mechanism

Keep designs simple, which also means keeping your programming code as small as possible, making it easier to test and analyze. Simpler design means that less can go wrong.

Fail-safe defaults

The default access to a resource should be no access. A good example of something that violates this principle is a wireless router that does not require a password and/or encrypt the traffic by default.
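In code, this principle translates to “default deny”: if a permission lookup finds no explicit grant, the answer is no. Here is a minimal sketch; the roles, resources, and permission table are invented for illustration, not taken from any real AV system:

```python
# Fail-safe defaults: access is denied unless explicitly granted.
# The roles and resources below are hypothetical examples.
PERMISSIONS = {
    ("instructor", "room_pc"): True,
    ("instructor", "touch_panel"): True,
    ("student", "touch_panel"): False,
}

def can_access(role: str, resource: str) -> bool:
    # dict.get() with a default of False means any unknown
    # role/resource pair falls back to "no access", never "allow".
    return PERMISSIONS.get((role, resource), False)
```

A lookup like `can_access("guest", "av_rack")` returns `False` simply because no rule exists for it – that is the safe default. A password-free wireless router does the opposite: its default is “allow.”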

Complete mediation

This means every access to a resource is checked against the access control mechanism, every time, and all attempts to bypass security are prevented.
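One way to sketch complete mediation in code is a wrapper that re-runs the authorization check on every single call, so a past “yes” is never cached or trusted. The function and policy names below are invented for illustration:

```python
from functools import wraps

REVOKED = set()  # users whose access was revoked mid-session (stub policy)

def check_access(user: str) -> bool:
    """Consult the access-control mechanism (stub for illustration)."""
    return user not in REVOKED

def mediated(func):
    """Re-run the access check on EVERY call, not just the first."""
    @wraps(func)
    def wrapper(user, *args, **kwargs):
        if not check_access(user):
            raise PermissionError(f"{user} is not authorized")
        return func(user, *args, **kwargs)
    return wrapper

@mediated
def switch_input(user, source):
    return f"{user} switched the display to {source}"
```

Because the check runs on each invocation, revoking a user between calls blocks them immediately; earlier successful calls cannot be used to bypass the control.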

Open design

“Security by obscurity” does not work. Adopt an open-source attitude so your security does not depend on secrecy. Code and designs should be open for scrutiny by your community. It’s much better to have a friend or colleague find an error than it is to wait for a bad actor to discover it.

Separation of privilege

Access to rooms, systems, or files should depend on more than one condition. If someone gains access to the AV rack, can they simply access the components using a console cable? Or did you go a step further, and enable passwords, as well as encryption of those passwords?

Least privilege

Users (and programs) should only be given the minimum access rights to complete their tasks. The default access should be none, and then access should be granted as needed, on an individual basis, or based on well-defined roles within the organization. Temporary access can also be granted.
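The grant-as-needed model described above can be sketched as a small role table plus expiring temporary grants. This is a hypothetical illustration; the roles and rights are invented, not taken from the article:

```python
import time

# Least privilege: each role gets only the rights it needs, and an
# unknown role gets none at all (fail-safe default).
ROLE_RIGHTS = {
    "instructor": {"power_on", "switch_input"},
    "technician": {"power_on", "switch_input", "open_rack"},
}

# Temporary grants: (user, right) -> expiry timestamp
TEMP_GRANTS = {}

def grant_temporarily(user: str, right: str, seconds: float) -> None:
    """Give one user one extra right for a limited time."""
    TEMP_GRANTS[(user, right)] = time.time() + seconds

def has_right(user: str, role: str, right: str) -> bool:
    if right in ROLE_RIGHTS.get(role, set()):
        return True
    expiry = TEMP_GRANTS.get((user, right))
    return expiry is not None and time.time() < expiry
```

An instructor can switch inputs but cannot open the AV rack unless someone explicitly grants that right, and the grant expires on its own rather than lingering forever.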

Least common mechanism

This means that one should minimize the number of mechanisms and/or pieces of equipment shared by more than one user. A good example of this would be a “room PC” in a training room used by multiple instructors. Does each instructor log in with their own credentials?

Psychological acceptability, a.k.a. ease of use

Users will avoid security measures that get in the way of convenience. A physical analogy would be a dead bolt that requires a key on both the outside and the inside. Some people won’t bother locking it from the inside, especially if their key gets stuck in the lock.

Other best practices like layering, isolation, encapsulation, modularity, and auditability should also be kept in mind.
