Google has announced that it will remove its ‘green padlock’ indicator for HTTPS websites as of September, and will flag any non-HTTPS sites as insecure in Chrome from October. Google hopes this will make security the standard for websites. Craig Stewart, Vice-President EMEA at Venafi, commented below.
“As consumers, we have been trained to look for the green padlock to make sure the site we are putting our details into is secure and can be trusted, so the fact these are now being removed might create some confusion and concern – but people shouldn’t worry, it’s actually a sign that the internet is becoming more secure. The fact is, websites should be secure as the de facto standard; it’s those sites that do not use HTTPS that should be brought to our attention so that we do not use them. When Chrome starts to flag any sites not using HTTPS as insecure, users will simply become used to expecting security as the default instead of checking for the padlock. This will pressure businesses to step up their game and improve security across the internet, which can only be a good thing.
However, as we’ve already seen from the deprecation of SHA-1 certificates, organisations are typically slow to react to warnings of this kind and often underestimate the task at hand. Many organisations do not properly track which certificates they have applied where, and have thousands of certificates that they are unaware of. Just discovering these and making sure they are upgraded to HTTPS is a significant undertaking and, if done manually, there are likely to be gaps which cause disruption to customers and business processes. This is why businesses need to take control of their security and use automation to enable them to be agile in applying changes such as switching from HTTP to HTTPS. Unless organisations are able to identify where their HTTP-only sites and certificates are, and then have the flexibility to revoke and replace them, they will be faced with customers, partners and prospects refusing to access a seemingly insecure site. Businesses have less than six months to make sure they’ve resolved the situation, so better get started now.”
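As a first triage step in a migration of this kind, a team could scan an existing URL inventory for endpoints still served over plain HTTP. The sketch below is a minimal, hypothetical illustration (the hostnames are invented); real certificate discovery requires scanning the estate itself:

```python
from urllib.parse import urlparse

def flag_insecure(urls):
    """Return the subset of URLs still served over plain HTTP."""
    return [u for u in urls if urlparse(u).scheme != "https"]

# Hypothetical inventory for illustration only
inventory = [
    "https://shop.example.com",
    "http://legacy.example.com/login",   # would be flagged "Not secure" by Chrome
    "https://api.example.com",
]
print(flag_insecure(inventory))  # ['http://legacy.example.com/login']
```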
Attackers can obtain unauthorized access to financial applications at 58 percent of banks
Positive Technologies today released a new report, Bank Attacks 2018, detailing that banks have built up formidable barriers to prevent external attacks, yet fall short in defending against internal attackers. Whether by puncturing the perimeter with social engineering, vulnerabilities in web applications, or the help of insiders, as soon as attackers access the internal network, they find friendly terrain that is secured no better than companies in other industries.
With access to the internal network of client banks, Positive Technologies testers succeeded in obtaining access to financial applications in 58 percent of cases. At 25 percent of banks, they were able to compromise the workstations used for ATM management—in other words, these banks fell prey to techniques similar to ones used by Cobalt and other cybercriminal gangs in actual attacks. Moving money to criminal-controlled accounts via interbank transfers, a favorite method of the Lazarus and MoneyTaker groups, was possible at 17 percent of tested banks.
Also at 17 percent of banks, card processing systems were poorly defended, which would enable attackers to manipulate the balance of card accounts. Such attacks were recorded in early 2017 against banks in Eastern Europe. The Carbanak group, notorious for its ability to attack nearly any bank application, would have been able to steal funds from over half of the tested banks. On average, an attacker able to reach a bank’s internal network would need only four steps to obtain access to key banking systems.
The new report notes that banks tend to do a better job than other companies of protecting their network perimeter. In the last three years, penetration testers could access the internal network at 58 percent of all clients, but only 22 percent of banks. However, this number is still concerning, considering the high financial motivation of attackers and failure of many banks to audit code security during the design and development stages. In all test cases, access was enabled by vulnerabilities in web applications (social engineering techniques were not used). Such methods have been used in the wild by such groups as ATMitch and Lazarus.
Banks are at risk due to remote access, a dangerous feature that often leaves the door open to access by external users. The most common types are the SSH and Telnet protocols, which are present on the network perimeter of over half of banks, as well as protocols for file server access, found at 42 percent of banks.
However, the weakest link in bank security is the human factor. Attackers can easily bypass the best-protected network perimeter with the help of phishing, which offers a simple time-tested method for delivering malware onto a corporate network. Phishing messages can be sent to bank employees both at their work and personal email addresses. This method for bypassing the network perimeter has been used by almost every criminal group, including Cobalt, Lazarus, Carbanak, Metel, and GCMAN. In tests by Positive Technologies, employees at 75 percent of banks clicked on links in phishing messages, and those at 25 percent of banks entered their credentials in a fake authentication form. Also at 25 percent of banks, at least one employee ran a malicious attachment on their work computer.
The report also describes the organizational arrangements of these groups, with examples of announcements on hacker forums offering the services of bank insiders. Experts state that in some cases, the privileges of an employee with mere physical access to network jacks (such as a janitor or security guard) are enough for a successful attack. Another method for infecting banks is to hack their business partners and contractors, who may poorly secure their networks, and place malware on sites known to be visited by bank employees, as seen with Lazarus and Lurk.
After criminals obtain access to the bank’s internal network, they need local administrator privileges on servers and employee computers. To continue their attack, the criminals rely on two key “helpers”: weak password policies and poor protection against recovery of passwords from OS memory.
Almost half of banks used dictionary passwords on the network perimeter, while every bank had a weak password policy on its internal network. Weak passwords were set by users on roughly half of systems. In an even larger number of cases, testers encountered default accounts left behind after administrative tasks, including installation of databases, web servers, and operating systems. A quarter of banks used the password “P@ssw0rd”. Other common passwords include “admin”, keyboard combinations resembling “Qwerty123”, blank passwords, and default passwords (such as “sa” and “postgres”).
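A password audit of the kind the testers performed can be approximated with a simple dictionary-and-pattern check. This is a rough sketch seeded with the weak values named in the report; a production audit would test against full cracking dictionaries and hash dumps rather than plaintext:

```python
import re

# Weak values taken from the report's findings, plus defaults and blanks
COMMON = {"p@ssw0rd", "admin", "qwerty123", "", "sa", "postgres"}

def is_weak(pw: str) -> bool:
    """Flag dictionary words, defaults, blanks, short passwords,
    and simple keyboard runs."""
    low = pw.lower()
    if low in COMMON or len(pw) < 8:
        return True
    # crude keyboard-run check, e.g. "qwerty...", "12345..."
    return bool(re.match(r"^(qwerty|asdf|12345)", low))

samples = ["P@ssw0rd", "Qwerty123", "sa", "T7#kL9!vQ2"]
print([pw for pw in samples if is_weak(pw)])  # ['P@ssw0rd', 'Qwerty123', 'sa']
```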
Once inside the network, attackers can freely roam about by using known vulnerabilities and legitimate software that does not raise red flags among administrators. By taking advantage of flaws in protection of the corporate network, attackers quickly obtain full control of the bank’s entire digital infrastructure.
Leigh-Anne Galloway, Cyber Security Resilience Lead at Positive Technologies outlined recommendations for banks: “The good news is that it’s possible to stop an attack and prevent loss of funds at any stage, as long as the attack is detected in time and appropriate measures are taken. Attachments should be scanned in a sandbox, without depending on endpoint antivirus solutions. It’s critical to receive and immediately react to alerts with the help of an in-house or contracted 24/7 security operations center. In addition, SIEM solutions substantially simplify and improve the effectiveness of incident management.”
5G is being hailed as the next big thing in the telecoms world. It’s seen as the enabler for IoT applications such as autonomous vehicles, healthcare solutions, and robotics – the future, in other words, all thanks to its increased data speeds with incredibly low latency. With the number of worldwide 5G connections set to hit 1.4 billion by 2025, you can understand why its imminent rollout is also music to the ears of equipment manufacturers. Once fully implemented, the likes of Apple and Samsung will be using the “5G enabled” tagline as a key selling-point to an ever-growing smartphone market.
However, while these handset giants are busy counting their chickens before they’ve hatched with regards to profit margins and increased market share, and service providers are working up a sustainable business model prior to implementation (who is actually going to pay for it is still up for debate), security is being massively overlooked. If the next generation of telecommunications is to become a true success, securing the networks must be a priority.
Attacks can come in many different shapes and sizes; user malware, fraudulent calls, spam, viruses, data and identity theft, and denial of service, to name a few examples. The rise in security threats is partly due to the growing deployment of carrier Wi-Fi access infrastructures and small cells in public areas, offices and homes and will increase exponentially with M2M. Historically, carrier-grade telecom networks have had an excellent record for user and network security; however, today’s communications infrastructure is far more vulnerable than its predecessors. And with advances in security threats constantly evolving, service providers must invest in the right tools to keep on top of the issue.
These increasing security risks are due to the move to the IP-centric LTE architecture. The flatter architecture is what exposed 4G networks: there are fewer steps between the edge and the core network, and this will continue to be an issue with 5G. Previously, with 3G, the Radio Network Controller (RNC) controlled all access to the base stations, meaning that potential hackers couldn’t get close to the core network. In LTE, however, IP backhaul is mandatory and the RNC node is eliminated, giving a potential attacker a more direct path to the core network. Operators recognise that IPsec tunnels will be required at every cell site connected to an insecure network for authentication and encryption. In addition, there will be a large increase in RAN and small cells to provide the huge number of connections, giving intruders a greater number of access points to the core network.
To tackle these issues, operators must ensure connections from the device to the core network over the S1 and Gb interfaces are fully authenticated. Operators must invest in and revisit the capabilities of their GPRS Tunnelling Protocol (GTP) and Stream Control Transmission Protocol (SCTP) implementations, which handle the connections into the core network. Authentication can be delivered for SCTP via RFC 4895 (authenticated chunks) without compromising performance or network monitoring visibility the way IPsec/VPNs do. This can prove vital as networks are subjected to attacks with greater frequency and potentially disastrous outcomes. Alongside a highly reliable SCTP stack, operators should implement a Datagram Transport Layer Security (DTLS) module. This helps detect real-time connection failures, and provides redundancy and fault tolerance for signalling applications as well as improved detection of destination and peer path failures.
It’s clear that service providers cannot afford to cut corners when it comes to securing their networks and must look to a solution that will guarantee protection from attacks via a multitude of entry points. If 5G is set to dominate not only the telecommunications industry, but the tech world in general, providers must invest in security solutions to combat the ever-growing issue.
About Robin Kent
Robin Kent, Director of European Operations at Adax
boosting threat-sharing with the private sector, including a malicious code repository and exchange
curbing supply-chain risk, and
accelerating research and development to make energy systems more resilient to hacking.
Also, the plan serves as a roadmap for the new Office of Cybersecurity, Energy Security, and Emergency Response (CESER), for which the Administration has requested $96 million in the 2019 US Federal budget. In response, a Corero Network Security expert commented below.
“This Cybersecurity Plan from the US DOE is far more radical than anything that’s been published so far in the UK/EU to enforce the NIS Directive, and constitutes key aspects of a superb roadmap for the relatively new Office of Cybersecurity, Energy Security, and Emergency Response (CESER). The DoE states their “Goal 3: Accelerate Game-Changing RD&D of Resilient EDS” and the Plan’s framers say they intend to “anticipate future energy sector attack scenarios and design cybersecurity into emerging energy delivery system devices from the start; and make future systems and components cybersecurity-aware and able to automatically prevent, detect, mitigate, and survive a cyber incident”. This is precisely the style of proactive “prevention and protection” posture that we’ve been encouraging. By way of contrast, the UK/EU legislation focuses on reactive “defense and disclosure”.
“Much of the DOE document rightly focuses on protecting the integrity of the power grid itself, but also recognizes that Internet-exposed management systems are at even greater risk than the more isolated power grid control systems themselves. The Plan underscores that the days of power grid controls being “off the Internet grid” are past, given the proliferation of “distributed energy resources ranging from electric vehicles to batteries and solar panels; increase customer participation and demand response; and integrate with other smart gas, water, and transportation infrastructure as Internet of Things technology”.
“The recent ESG outage underscored this, when a cyberattack on a 3rd party vendor’s systems (which relied on outmoded password security technology) drove system-wide disruption. That and other recent events have once again showed that many critical infrastructure operators have done the cyber-security basics well enough, including access control, but they have under-invested in cyber-defenses for advanced attacks such as DDoS, and they have demanded far too little of partners with Internet-exposed management and operations systems.
“The DoE Plan is a great start and worthy of applause. In one area, the recent UK/EU CI security strategies and directives remain ahead of US plans: enforcement.
“It’s unclear whether CESER will use the ‘carrot and stick’ approach that the UK has adopted for critical infrastructure entities with the NIS Directive, which allows for fines of up to $24 million or an eye-popping 4% of annual revenues against operators of essential services who fail to successfully defend their energy, control and data systems against cyberattacks.”
A new report has revealed that 25% of enterprises have suffered from cloud cryptojacking incidents, a sharp increase from the 8% recorded last quarter. As more enterprises increase their activities in the cloud, this area has become a natural target for malicious attackers. IT security experts commented below.
“The popularity of cloud means more and more organizations are hosting their critical infrastructure on it, but not necessarily taking the right security measures to protect their critical assets. The shared responsibility that is necessary for a secure cloud operation may not be well understood by the organizations that adopt cloud infrastructures. They may have overlooked that a new set of security controls is required to safeguard the assets and infrastructure. For an attacker, there are powerful economics at play. When looking for free compute power, cloud infrastructure becomes a tempting target for cryptojackers, as Tesla found out earlier this year. The combination of a higher risk exposure and an increased probability of success for attackers is like hanging a “Hack me” sign on your account. Furthermore, the standardization in cloud environments lends itself to automation, making exploitation more cost-efficient.
For organizations, it is essential to perform audits of their accounts and environments – a hardening review is a good start, but continuity is the key to reducing risk exposure. Use automated solutions to audit and track the security level of both the cloud accounts and business assets, continuously. This is relevant not only to protect your organization’s critical data; remember that maintaining a sound baseline is also part of the GDPR requirements for data protection. Hardening and audits are clearly among those basic practices.”
In May 2017 the biggest ransomware attack in history broke out. Known as “WannaCry,” the now infamous ransomware spread like wildfire, affecting PCs around the world. One year on, the same malware – which exploits the EternalBlue vulnerability – is still prevalent.
Avast has detected and blocked more than 176 million WannaCry attacks in 217 countries since the initial attack. And in March 2018, we blocked 54 million attacks attempting to abuse EternalBlue. Given the publicity around the attacks, it could be assumed that people and businesses would have completed their system updates. Our data, however, shows that nearly one third (29%) of Windows-based PCs globally are still running with the vulnerability in place.
In the year since WannaCry did its damage, we’ve spent time investigating it and subsequent attacks to gather insights that can help us understand what needs to be done to prevent this sort of cyberattack from happening again.
If only there was a patch for poor patch adoption…
Despite WannaCry’s widespread attention and the devastating effects it had, people still failed to patch their systems. This begs the question: why are people not patching?
Firstly, it could be due to a lack of understanding around patches or software updates. The average consumer may not be aware that systems contain vulnerabilities, which cybercriminals can exploit. Once vulnerabilities are found, software developers typically issue a patch to rectify the problem. WannaCry’s impact could have been greatly minimised had more people downloaded the patch as soon as it was available.
The second possibility is that consumers don’t like interruptions. Patching a system or programme requires users to stop what they are doing, which might discourage them from running updates. Another reason people may not update is resistance to change: operating system or programme updates can alter familiar environments, which isn’t always welcomed.
Thirdly, businesses and organisations like the NHS may place system updates into a planned calendar that fits around activity as it can be potentially very disruptive. For an organisation like the NHS, it also can necessitate the reduction of services while the update is carried out. In these cases, the balance is weighed between the risk of not patching and the expected disruption.
Patch perfect – what the technology industry must do better
In order to improve patch adoption, the technology industry needs to work together to raise awareness of patches. People may be more inclined to patch if they realise there is a problem that could negatively affect them. Just as the technology industry has worked at building awareness of digital security, now it must work to educate and develop understanding of the importance of patching. These two things together are a powerful deterrent to cybercriminal activity.
It is not enough for users to become more conscious of patches; the inconvenience associated with them needs to decrease as well. This could be done by updating in the background or in smaller doses, or by simply making people more aware of overnight updates.
Finally, software developers should consider that their systems may live beyond their intended years. Windows XP, for example, is still being used by 4.3% of Avast users, although Microsoft no longer provides support for this popular operating system.
Businesses also need to get serious about employee education. Hackers like to exploit human mistakes, making it vital to ensure employees are aware of security best practices and that organisations appropriately limit access rights. Penetration testing is a great way for companies to see where their weaknesses lie.
Consumers also benefit from receiving educational information about personal device security and the role of patches. While they don’t have business tools to rely on, there are other services available that can help them ensure their device security.
Ultimately, it is clear that users need help and education about security and guidance through the necessary steps. At the same time, software distributors need to ensure the updates they push to their customers are clean. If this can be done, then the collaboration and contribution of users and the broader technology industry is a truly powerful one in the fight against malware.
The GDPR (General Data Protection Regulation) is designed to protect the privacy of all EU citizens and will change the way organizations store and use EU citizens’ data. Failure to meet the requirements of the GDPR could also prove extremely costly. Here is a summary of the penalties as they apply to articles of the GDPR.
Penalty: Maximum penalty up to 4% of annual global turnover or €20 million, whichever is greater
Articles in GDPR:
5 – Principles relating to processing of personal data
6 – Lawfulness of processing
7 – Conditions for consent
9 – Processing of special categories of personal data
12 – Transparent information, communication and modalities for the exercise of the rights of the data subject
13 – Information to be provided where personal data are collected from the data subject
14 – Information to be provided where personal data have not been obtained from the data subject
15 – Right of access by the data subject
16 – Right to rectification
17 – Right to erasure (‘right to be forgotten’)
18 – Right to restriction of processing
19 – Notification obligation regarding rectification or erasure of personal data or restriction of processing
20 – Right to data portability
21 – Right to object
22 – Automated individual decision-making, including profiling
Penalty: Maximum penalty up to 2% of annual global turnover or €10 million, whichever is greater
Articles in GDPR:
8 – Conditions applicable to child’s consent in relation to information society services
11 – Processing which does not require identification
25 – Data protection by design and by default
26 – Joint controllers
27 – Representatives of controllers or processors not established in the Union
28 – Processor
29 – Processing under the authority of the controller or processor
30 – Records of processing activities
31 – Cooperation with the supervisory authority
32 – Security of processing
33 – Notification of personal data breach to the supervisory authority
34 – Communication of a personal data breach to the data subject
Optiv Security Cyber-Intelligence Report Reveals State of the Cyber-Threat Landscape
Optiv Security, the world’s leading security solutions integrator, has published its 2018 Cyber Threat Intelligence Estimate (CTIE) which details the current state of the cyber-threat landscape and uses estimative intelligence to predict how that landscape stands to change in the future. This report is generated to provide Optiv’s clients with a global view of security threats and trends, so they can effectively adapt their strategic plans to mitigate anticipated enterprise risk.
Among the key findings in the report:
The Rise of the Netherlands and Lebanon. Seemingly benign nation states such as Lebanon and the Netherlands are rising in the ranks of nation-sponsored attackers. The motivations for this rise are unclear, although both countries made headlines this year with cyberattacks: Lebanon for spying on thousands of people across 20 countries via an Android malware campaign; and the Netherlands for penetrating Russia’s Cozy Bear organization and uncovering the hack of the Democratic National Committee during the 2016 presidential election in the U.S.
Cyber-Social is the Next Front for Nation States. Nation-state-sponsored attacks are expanding from “cyber-physical,” where the objective is to compromise data or critical infrastructure, to “cyber-social,” where the goal is to use social media to influence the opinions and actions of large populations of people. Russian cyber-social exploitation of European and American elections showed how relatively easy and cost-effective these campaigns can be, which dramatically increases the likelihood that this class of attack will be employed by a growing number of nation states, hacktivists and other groups in the future.
Critical Infrastructure has been Breached. The utilities and energy industries experienced high indicators of exploit activity without any high-profile breaches. This suggests that attackers have access to critical infrastructure but are waiting to exploit this access in response to events such as war, or attacks on their own infrastructure.
Healthcare IoT is Vulnerable. The Internet of Things (IoT) continues to suffer from weak security fundamentals and unmitigated vulnerabilities. The healthcare IoT is particularly problematic due to the increasing numbers of networked medical devices and the potential damage that could occur should those devices become compromised.
Phishing Remains the Delivery Vehicle of Choice. Despite years of technology countermeasures, publicity and education campaigns, phishing remains the number one malware delivery mechanism. Additionally, while modern email security solutions can detect and stop emails with malicious attachments, they are still largely impotent against detecting hyperlinks to malicious websites.
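One narrow but useful countermeasure against the hyperlink problem is to flag messages whose visible link text names a different host than the href actually targets. The sketch below (hostnames invented) illustrates the idea with only the Python standard library; real mail filters combine checks like this with URL rewriting and reputation feeds:

```python
from html.parser import HTMLParser
from urllib.parse import urlparse

class LinkAuditor(HTMLParser):
    """Collect <a> tags whose visible text names a different host
    than the href actually points to -- a classic phishing tell."""
    def __init__(self):
        super().__init__()
        self.suspicious = []
        self._href = None
        self._text = ""

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self._href = dict(attrs).get("href", "")
            self._text = ""

    def handle_data(self, data):
        if self._href is not None:
            self._text += data

    def handle_endtag(self, tag):
        if tag == "a" and self._href is not None:
            # Only compare when the visible text itself looks like a URL
            shown = urlparse(self._text.strip()).netloc
            actual = urlparse(self._href).netloc
            if shown and actual and shown != actual:
                self.suspicious.append((shown, actual))
            self._href = None

auditor = LinkAuditor()
auditor.feed('<a href="http://evil.example.net/login">https://mybank.example.com</a>')
print(auditor.suspicious)  # [('mybank.example.com', 'evil.example.net')]
```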
Protecting the Brand Rises in Importance. Brand security threats were the second most common source of alerts for Optiv during the year – behind phishing attacks, but ahead of typical security concerns such as data leakage and web vulnerabilities. These alerts were generated in response to the presence of “phony, misleading or malicious sites,” raising the importance of brand risk in the hierarchy of enterprise security concerns.
Professor Avishai Wool, CTO and co-founder at AlgoSec, looks at how organizations can ensure network security is extended to AWS environments
With organizations having a seemingly insatiable appetite for the agility, scalability and flexibility offered by the cloud, it’s little surprise that one of the market’s largest providers, Amazon’s AWS, continues to go from strength to strength. In its latest earnings report, AWS reported a 45% revenue growth during Q4 2017.
However, AWS has also been in the news recently for the wrong reasons, following a number of breaches of its S3 data object storage service. Over the past 18 months, companies including Uber, Verizon, and Dow Jones have had large volumes of data exposed via misconfigured S3 buckets. Between them, the firms inadvertently made public the digital identities of hundreds of millions of people.
Sub-par security practices
It’s important to note that these breaches were not caused by problems at Amazon itself. Instead, they were the result of users misconfiguring the Amazon S3 service, and failing to ensure proper controls were set up when uploading sensitive data to it. In effect, data was placed in S3 buckets and secured with a weak password – or in some cases, no password at all.
Amazon has made several tools available to make it easier for S3 customers to work out who can access their data, and to help secure it. However, organizations still need to use access controls for S3 that go beyond just passwords, such as two factor authentication, to control who can login to their S3 administration console.
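One concrete control is auditing bucket policies for wildcard principals before sensitive data is uploaded. The sketch below checks a policy document locally using only the standard library; the bucket name and account ID are hypothetical, and a real audit would also cover bucket ACLs and AWS’s Block Public Access settings:

```python
import json

def public_statements(policy_json: str):
    """Return the Sids of statements that grant access to everyone
    (Principal '*'), the root cause of most exposed S3 buckets."""
    policy = json.loads(policy_json)
    hits = []
    for stmt in policy.get("Statement", []):
        if stmt.get("Effect") == "Allow" and stmt.get("Principal") in ("*", {"AWS": "*"}):
            hits.append(stmt.get("Sid", "<no Sid>"))
    return hits

# Hypothetical policy document for illustration
policy = json.dumps({
    "Version": "2012-10-17",
    "Statement": [
        {"Sid": "PublicRead", "Effect": "Allow", "Principal": "*",
         "Action": "s3:GetObject", "Resource": "arn:aws:s3:::my-bucket/*"},
        {"Sid": "TeamOnly", "Effect": "Allow",
         "Principal": {"AWS": "arn:aws:iam::123456789012:root"},
         "Action": "s3:*", "Resource": "arn:aws:s3:::my-bucket/*"},
    ],
})
print(public_statements(policy))  # ['PublicRead']
```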
But to understand why these basic mistakes are still being made by so many organizations, we need to look at the problem in the wider context of public cloud adoption in many enterprises. When speaking with IT managers that are putting data in the cloud, it is not uncommon to hear statements such as ‘there is no difference between on-premise and cloud servers.’ In other words, all servers are seen as being part of the enterprise IT infrastructure: and they will use whichever environment best suits their needs, operationally and financially.
Old habits die hard
However, that statement overlooks one critical point: cloud servers are much more exposed than physical, on-premise servers. For example, if you make a mistake when configuring the security for an on-premise server storing sensitive data, it is still protected by other security measures by default. The server’s IP address is likely to be protected by the corporate gateway, or other firewalls used to segment the network internally, and other security layers which stand in the way of potential attackers.
In contrast, when you provision a server up in the public cloud, it is accessible to any computer in the world. By default anybody can ping it, try to connect and send packets to it, or try to browse it. Beyond a password, it doesn’t have all those extra protections from its environment that an on-premise server has. And this means you must put controls in place to change that.
These are not issues that the organization’s IT teams, who have become comfortable with all those extra safeguards of the on-premise network, regularly have to think about when provisioning servers in the data centre. There is often an assumption that something or someone will secure the server – and this assumption carries over when putting servers in the cloud.
So when utilizing the cloud, security teams need to step in and establish a perimeter, define policies, implement controls, and put in governance to ensure their data and servers are secured and managed effectively – just as they do with their on-premise network.
Security 101 for cloud data
This means you will still need to apply all the basics of on-premise network security when utilizing the public cloud: access controls defined by administration rights or access requirements and governed by passwords; filtering capabilities defined by which IP addresses need connectivity to and from one another.
You still need to consider if you should use data encryption, and whether you should segment the AWS environment into multiple virtual private clouds (VPC). Then you will need to define which VPCs can communicate with each other, and place VPC gateways accordingly with access controls in the form of security groups to manage and secure connectivity.
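The default-deny behaviour of security groups can be illustrated with a small local model: traffic is permitted only when an explicit rule matches both the source range and the port. The ranges below are illustrative (203.0.113.0/24 is a reserved documentation range), not a recommended ruleset:

```python
import ipaddress

# A tiny model of security-group-style ingress rules: each rule allows a
# CIDR range to reach one port. Anything unmatched is denied, as in AWS.
RULES = [
    {"cidr": "10.0.0.0/16", "port": 443},    # internal VPC -> HTTPS
    {"cidr": "203.0.113.0/24", "port": 22},  # office range -> SSH (example range)
]

def allowed(src_ip: str, port: int) -> bool:
    src = ipaddress.ip_address(src_ip)
    return any(
        src in ipaddress.ip_network(r["cidr"]) and port == r["port"]
        for r in RULES
    )

print(allowed("10.0.4.7", 443))      # True: inside the VPC range
print(allowed("198.51.100.9", 443))  # False: no matching rule, default deny
```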
You will also need controls over how to connect your AWS and on-premise environments, for example using a VPN. This requires a logging infrastructure to record actions for forensics and audits, to get a trail of who did what. None of these techniques are new, but they all have to be applied correctly to the AWS deployment, to ensure it can function as expected.
Extending network security to the cloud
In addition to these security basics, IT teams also need to look at how they should extend network security to the cloud. While some security functionality is built into cloud infrastructures, it is less sophisticated than the security offerings from specialist vendors.
As such, organizations that want to use the cloud to store and process sensitive information are well advised to augment the security functionality offered by AWS with virtualized security solutions, which can be deployed within the AWS environment to bring the level of protection closer to what they are used to within on-premise environments.
Many firewall vendors sell virtualized versions of their products customized for Amazon. While these come at a cost, if you want to be serious about security, you need more than the measures that come as part of the AWS service. Ultimately you need to deploy additional web application firewalls, network firewalls and implement encryption capabilities to mitigate your risks of being attacked and data being breached.
This has the potential to add overall complexity to the security management. However using a security policy management solution will greatly simplify this, enabling security teams to have visibility of their entire estate and enforce policies consistently across both AWS and the on-premise data centre while providing a full audit trail of every change.
About Professor Avishai Wool
Professor Avishai Wool, Co-Founder and CTO at AlgoSec
Siemens has sent out an alert on a Denial of Service vulnerability that could affect its SIMATIC S7-400, a family of programmable logic controllers (PLCs) designed for process control in industrial environments. Andrew Lloyd, President at Corero Network Security, commented below.
“As we’ve been discussing in relation to Critical Infrastructure security, there is a genuine risk of service disruption, malware infestation and/or safety incidents if control equipment such as these PLCs is exposed on the Internet, where the full pandemic of cyber-threats (including DDoS) is there to exploit their vulnerabilities. It comes as little surprise that these PLCs have such vulnerabilities, and Siemens should be applauded for disclosing this one. As the notorious Stuxnet worm proved, older PLC equipment was not designed with Internet exposure in mind. Consequently, many have little or no security to protect them from being compromised. Best practice would have the control networks these PLCs form part of completely isolated from the Internet. However, whether through ignorance, convenience or efficiency, the required “air gap” is often breached, opening the floodgates to cyber-threats.”