InfoSec Island aims to provide a place for IT and network professionals to find help and information quickly and easily by combining an online community, infosec portal, and social network. InfoSec Island’s blog features several contributors and covers the cloud, malware, cyberattacks, and other topics related to information security.
Today’s modern applications are designed for scale and performance. To achieve this, many deployments are hosted on public cloud platforms like Amazon Web Services (AWS), Microsoft Azure, and Google Cloud Platform (GCP) for their elasticity and speed of deployment. The challenge is that effectively securing cloud-hosted applications has, to date, been difficult. The media is full of high-profile security events involving successful attacks on cloud-hosted applications, and these are only the examples that were disclosed to the public.
In reality, traditional security deployment patterns do not work effectively with applications hosted on public cloud platforms. Organizations should not try to push their previous on-premises application security deployments into cloud environments for several reasons.
Cloud application security requires new approaches, policies, configurations, and strategies that allow organizations to address business needs and security risks in unison. Failing to incorporate these will no doubt deliver an insufficient security posture and cost unnecessary time and money.
The balance of performance and security
Whether your organization is a one-person startup, a global enterprise, or anything in between, you depend on applications to operate effectively. You cannot afford downtime with these applications, and for many the cloud is still a confusing space when it comes to who is responsible for security. Unfortunately, a single unpatched vulnerability in an application can let an attacker penetrate your network and steal or compromise your data along with that of your customers, causing significant disruption to your operations. According to a recent report, “Unlocking the Public Cloud,” 74 percent of respondents stated that security concerns restrict their organization’s migration to public cloud. Public cloud adoption is growing rapidly, yet security remains the largest area of resistance when moving to the cloud.
Many organizations still rank performance well above security, but given the risks the two should be balanced with equal importance. For example, in a May 2018 report from Ponemon Institute, 48 percent of the 1,400 IT professionals who responded said they value application performance and speed over security.
While deploying layer 7 protections is paramount to securing applications, it’s also essential that any security technology integrates deeply with existing cloud platforms and licensing models.
Security measures should be tightly coupled with the dynamic scalability of public cloud providers such as AWS, Azure, and GCP, ensuring that performance handling requirements are addressed in real time without manual intervention. Organizations should also have direct access to the native logging and reporting features of those cloud platforms.
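As a sketch of what scaling "without manual intervention" reduces to, consider a simple capacity calculation. The threshold and figures below are hypothetical; a real deployment would delegate this decision to the platform's native auto-scaling services (e.g. AWS Auto Scaling groups driven by CloudWatch metrics) rather than custom code:

```python
import math

def scale_delta(requests_per_sec, running_instances, max_rps_per_instance=500):
    """Return how many security instances to add (positive) or remove
    (negative) so that capacity tracks current traffic."""
    needed = max(1, math.ceil(requests_per_sec / max_rps_per_instance))
    return needed - running_instances

# A spike to 2,600 req/s with only 2 instances running calls for 4 more:
delta = scale_delta(2600, 2)
```

The point is that the decision is purely mechanical, which is exactly why it should be automated rather than left to an operator watching dashboards.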
Fixing application vulnerabilities in the cloud
You wouldn’t necessarily think it, but application vulnerabilities are pervasive and often left untouched until it is too late. Unfortunately, fixes and patches are a reactive process that leaves vulnerabilities exposed for far too long (months isn’t uncommon). The problem is clear: automated, continuous vulnerability remediation is paramount to ensuring application security both on-premises and in the cloud.
According to the same Ponemon research, 75 percent of organizations experienced a material cyberattack or data breach within the last year due to a compromised application. Interestingly, only 25 percent of these IT professionals say their organization is making a significant investment in solutions to prevent application attacks, despite awareness of the negative impact of malicious activity.
Because of frightening statistics like these, it is essential to implement policies that provide continued protection of applications through regular vulnerability management and remediation practices, which can even be automated to ensure that application changes don’t open up new vulnerabilities.
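A minimal sketch of what such an automated check might look like, assuming a hypothetical advisory feed that maps a package to the first version fixing a flaw (real scanners consume databases such as the NVD; the names and versions here are made up):

```python
# Hypothetical advisory data: package -> first version that fixes the flaw.
ADVISORIES = {
    "example-web-framework": "2.4.1",
}

def parse_version(version):
    return tuple(int(part) for part in version.split("."))

def is_vulnerable(package, installed_version):
    fixed = ADVISORIES.get(package)
    if fixed is None:
        return False  # no known advisory for this package
    return parse_version(installed_version) < parse_version(fixed)
```

Running a check like this on every deployment, and failing the build on a hit, is one way to make remediation continuous rather than reactive.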
Security aligned with the cloud
Here are some best practices for effective application security in a cloud generation:
Application security must satisfy the most demanding use cases specific to cloud-hosted applications, without carrying the management overhead of legacy on-premises architectures.
Security should expose a fully featured API that provides complete control via the orchestration tools DevOps teams already use.
Security needs to be deployable in high-availability clusters and auto-scaled using cloud templates, and it should be managed and monitored from a single-pane-of-glass user interface.
It is imperative that security integrates directly with native public cloud services, including Elastic Load Balancing, AWS CloudWatch, Azure ExpressRoute, Azure OMS, and more.
It is essential that security technologies provide complete licensing flexibility, including pure consumption-based billing. This allows you to deploy as many instances as needed and pay only for the traffic secured through those applications.
Basically, securing applications effectively in the cloud means adopting new ways of thinking about security, and it is critical to look at the security technology stack you have deployed today. Assess what is lacking, and adopt what is required for regular monitoring and vulnerability remediation of those applications. Focus on protecting each application with the right level of security. That means deploying security aligned with your current cloud consumption and leveraging tools designed for those cloud environments that let you build security controls.
About the author: Jonathan Bregman has global responsibility for leading Barracuda's web application security product marketing strategy. He joins Barracuda from Seattle, WA where he worked with Microsoft, Amazon and their ISV partners to build innovative marketing programs focused on driving awareness and demand for emerging products in enterprise software, cloud services and cybersecurity.
The cybersecurity landscape has changed dramatically during the past decade, with threat actors constantly changing tactics to breach businesses’ perimeter defenses, cause data breaches, or spread malware. New threats, new tools, and new techniques are regularly chained together to pull off advanced and sophisticated attacks that span across multiple deployment stages, in an effort to be as stealthy, as pervasive, and as effective as possible without triggering any alarm bells from traditional security solutions.
Security solutions have also evolved, encompassing multi-stage and multi-layered defensive technologies aimed at covering all potential attack vectors and detecting threats at pre-execution, on-execution, or even throughout execution.
All malware is basically code that is stored (on disk or in memory) and executed, just like any other application. Because malware is delivered as a file or binary, security technologies refer to these states of malware detection as pre-execution and on-execution. It boils down to detecting malware before, or after, it executes on the victim’s endpoint.
Layered security solutions often cover these detection stages with multiple security technologies specifically designed to detect and prevent zero-day threats, APTs, fileless attacks and obfuscated malware from reaching or executing on the endpoint.
For example, pre-execution detection technologies often include signatures and file fingerprints matched against cloud lookups, local and cloud-based machine learning models aimed at ascertaining the likelihood that an unknown file is malicious based on its similarity to known malicious files, and hyper detection technologies, which are basically machine learning algorithms on steroids.
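The fingerprint-lookup part of pre-execution detection can be sketched in a few lines; the "known bad" hash below is computed from a placeholder byte string, not a real signature:

```python
import hashlib

# Placeholder blocklist: in reality this is a cloud lookup against
# millions of known-malicious file hashes.
KNOWN_BAD_SHA256 = {hashlib.sha256(b"malicious-sample").hexdigest()}

def is_known_bad(file_bytes):
    """Exact-match fingerprint check against the blocklist."""
    return hashlib.sha256(file_bytes).hexdigest() in KNOWN_BAD_SHA256
```

Note that a single changed byte produces a different digest and evades the lookup entirely, which is exactly why the machine learning layers judge similarity to known samples rather than exact matches.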
It helps to think of hyper detection technologies as paranoid machine learning algorithms for detecting advanced and sophisticated threats at pre-execution, without taking any chances. This is particularly useful for detecting potentially advanced attacks, as it can inspect and flag malicious commands and scripts, including VBScript, JavaScript, PowerShell, and WMI scripts, that are usually associated with sophisticated fileless attacks.
On-execution security technologies sometimes involve detonating the binary inside a sandboxed environment, letting it execute for a specific amount of time, then analyzing every system change the binary made, the internet connections it attempted, and any other behavior it exhibited after execution. A sandbox analyzer is highly effective because there is no risk of infecting a production endpoint, and the security tools used to analyze the binary can be set to a highly paranoid mode. Running the same analysis directly on a production endpoint would typically cause performance penalties, and would even risk compromising the organization’s network should the threat breach containment.
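Conceptually, the sandbox's verdict reduces to scoring the behaviors observed during detonation. The behavior names and weights below are invented purely for illustration; real analyzers use far richer behavioral models:

```python
# Hypothetical weights a sandbox analyzer might assign to observed behaviors.
BEHAVIOR_WEIGHTS = {
    "modifies_registry_run_key": 3,
    "connects_to_unknown_host": 2,
    "drops_executable_in_temp": 2,
    "reads_user_document": 0,
}

def sandbox_verdict(observed_behaviors, threshold=4):
    """Sum the weights of observed behaviors; unknown behaviors count as 1."""
    score = sum(BEHAVIOR_WEIGHTS.get(b, 1) for b in observed_behaviors)
    return "malicious" if score >= threshold else "benign"
```

Because the sample runs in isolation, the threshold can be set aggressively low ("paranoid mode") without the false-positive cost this would incur on a production endpoint.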
Of course, there are on-execution technologies that are deployed on endpoints to specifically detect and prevent exploits from occurring or for monitoring the behavior of running applications and processes throughout their entire lifetime. These technologies are designed to constantly assess the security status of all running applications, and prevent any malicious behavior from compromising the endpoint.
Layered Security Defenses
Multi-stage detection using layered security technologies gives security teams the unique ability to stop the attack kill chain at almost any stage, regardless of the threat’s complexity. For instance, while a tampered document containing a malicious Visual Basic script might bypass an email filtering solution, it will definitely be picked up by sandbox analyzer technology as soon as the script starts to execute malicious instructions or commands, or begins to connect out and download additional components onto the endpoint.
It’s important to understand that the increased sophistication of threats requires security technologies capable of covering multiple stages of attack, creating a security mesh that acts as a safety net to protect your infrastructure and data. However, it’s equally important that all these security layers be managed from a centralized console that offers a single pane of glass visibility into the overall security posture of the organization. This makes managing security aspects less cumbersome, while also helping security and IT teams focus on implementing prevention measures rather than fighting alert fatigue.
About the author: Liviu Arsene is a Senior E-Threat analyst for Bitdefender, with a strong background in security and technology. Reporting on global trends and developments in computer security, he writes about malware outbreaks and security incidents while coordinating with technical and research departments.
VirusTotal released an updated VTZilla browser extension this week to offer support for Firefox Quantum, the new and improved Web browser from Mozilla.
The browser extension was designed with a simple goal in mind: let users send files for scanning via an option in the Download window, and submit URLs via an input box.
The VTZilla extension had already proved highly popular among users, but version 1.0, which had not received an update since 2012, no longer worked after Firefox Quantum discontinued support for legacy extensions.
Starting toward the end of last year, Mozilla required all developers to update their browser extensions to WebExtensions APIs, a new standard in browser extensions, and VirusTotal is now complying with the requirement.
The newly released VTZilla version 2.0 builds on the success of the previous version and brings along increased ease-of-use, more customization options, and transparency.
Once the updated browser extension has been installed, the VirusTotal icon appears in the Firefox Quantum toolbar, allowing quick access to various configuration options.
Clicking the icon lets users customize how files and URLs are sent to VirusTotal, as well as choose the level of contribution they want to make to the security community.
“Users can then navigate as usual. When the extension detects a download it will show a bubble where you can see the upload progress and the links to file or URL reports,” VirusTotal’s Camilo Benito explains.
“These reports will help users to determine if the file or URL in use is safe, allowing them to complement their risk assessment of the resource,” Benito continues.
Previously, only the URL tied to the file download was scanned, and the file report was accessible only via the URL report, and only if VirusTotal’s servers had been able to download the file themselves.
VTZilla also allows users to send any other URL or hash to VirusTotal, and other features are only one right-click away.
VirusTotal is determined to improve the extension and add functionality to it, and is open to feedback and suggestions. The Google-owned service can now also make the extension compatible with other browsers that support the WebExtensions standard.
The extension revamp will soon be followed by VTZilla features that should allow users to further help the security industry fight malware. “Even non-techies will be able to contribute,” Benito says.
Recent headlines heralded the latest in cryptomining hacks to leverage stolen NSA exploits. This time it comes in the form of PyRoMine, Python-based malware that uses an NSA exploit to spread to Windows machines while disabling security software and allowing the exfiltration of unencrypted data. By also configuring the Windows Remote Management service, it leaves the machine susceptible to future attacks.
Despite all the investments in cyber protection and prevention technology, it seems that the cyber terrorist’s best tool is nothing more than variations on previous exploits, because most security products simply can’t detect every variation of zero-day malware in time to prevent the ensuing damage.
Cryptomining Beats Out Ransomware
Ransomware was the threat that wreaked havoc across organizations for years and sent most IT security professionals into a panic at the mere mention of a new exploit hitting the headlines. Now, however, it seems that ransomware is taking a back seat to cryptominers. As Jon Martindale writes in a recent DigitalTrends.com article titled “Cryptojacking is the new ransomware. Is that a good thing?”:
“In our history of malware feature, we looked at how malware tends to come in waves. While the latest and most dangerous in recent memory has been ransomware, it’s been pushed far from the top spot of common attacks in recent months by the advent of cryptominers, which look to force infected systems to mine cryptocurrency directly.”
The article goes further with this quote from a Senior E-Threat analyst on the expected growth of this type of threat:
“Since cybercriminals are always financially motivated, cryptojacking is yet another method for them to generate revenue,” said Liviu Arsene, senior E-Threat analyst at BitDefender. “Currently, it’s outpacing ransomware reports by a factor of 1 to 100, and these numbers will continue to increase for as long as virtual currencies remain popular and the market demands it.”
Variations on Old Hacks
Everything old is new again, or so the adage goes, and it seems to apply to cyber threats as well. Fortinet researchers spotted malware dubbed ‘PyRoMine’ that uses the ETERNALROMANCE exploit to spread to vulnerable Windows machines, according to an April 24 blog post.
“This malware is a real threat as it not only uses the machine for cryptocurrency mining, but it also opens the machine for possible future attacks since it starts RDP services and disables security services," the blog said. "FortiGuardLabs is expecting that commodity malware will continue to use the NSA exploits to accelerate its ability to target vulnerable systems and to earn more profit.”
The malware isn't the first cryptocurrency miner to use previously leaked NSA exploits, but it remains a threat because it leaves machines vulnerable to future attacks: it starts RDP services and disables security services.
The odds are great that we will see other variations on this NSA exploit before the year is up. Now is clearly the time to start evaluating other technologies that take more preventative steps to protect your IT infrastructure.
About the author: Boris Vaynberg co-founded Solebit LABS Ltd. in 2014 and serves as its Chief Executive Officer. Mr. Vaynberg has more than a decade of experience in leading large-scale cyber- and network security projects in the civilian and military intelligence sectors.
On May 25th of 2018, the General Data Protection Regulation (GDPR) goes into effect. This is a law passed in 2016 by the member states of the European Union that requires compliance with regard to how organizations store and process the personal data of individual residents of the EU. Now maybe you are thinking that this regulation does not apply to your organization because it is not based in the EU. Don’t stop reading just yet.
This regulation applies to any organization that offers goods or services to EU residents and/or processes the personal information of EU residents, regardless of whether the organization is based in the EU or not. And the law does not apply only to the huge multinational companies of the world; it applies to small businesses as well. For example, consider an e-commerce business that sells T-shirts online to people in the EU, an email marketing company that sends periodic emails to EU citizens, or even a message board website that lets users create profiles and gathers personal information during registration. The GDPR would apply to all of these businesses, no matter how big or small.
This regulation is the biggest change to the protection of individual personal data in over twenty years and is far reaching in its scope. It is important to understand if and how it applies to your organization.
What Type Of Data Is Protected?
The GDPR is meant to protect the personal data and fundamental rights and freedoms of natural persons in the EU. It does this by requiring organizations to implement strict policies, procedures and technical controls when processing the personal data of EU citizens. The regulation defines the term “personal data” very broadly. According to the regulation, personal data means “any information relating to an identified or identifiable natural person (‘data subject’); an identifiable natural person is one who can be identified, directly or indirectly, in particular by reference to an identifier such as a name, an identification number, location data, an online identifier or to one or more factors specific to the physical, physiological, genetic, mental, economic, cultural or social identity of that natural person.” Examples of personal data would include name, email address, IP address, physical address, photos, gender, health information and national identification number.
The term processing is also defined very broadly. According to the GDPR, processing means “any operation or set of operations which is performed on personal data or on sets of personal data, whether or not by automated means, such as collection, recording, organization, structuring, storage, adaptation or alteration, retrieval, consultation, use, disclosure by transmission, dissemination or otherwise making available, alignment or combination, restriction, erasure or destruction.” Examples of processing would include simple storage of the data, sending out marketing emails, collecting personal data when a visitor places an order, processing a credit card transaction, and any other type of storage, processing or manipulation of personal data that occurs during the normal course of business.
Finally, the regulation applies to both the automated processing of data as well as the processing of data by non-automated means. In short, the regulation applies to both digital and non-digital forms of data. Examples of non-digital forms of data would include hard copies of contracts, health records, marketing information and any other type of medium containing the personal data of EU citizens.
Which Organizations Are Affected?
According to Article 3 of the GDPR, the regulation “applies to the processing of personal data in the context of the activities of an establishment of a controller or a processor in the Union, regardless of whether the processing takes place in the Union or not.” Furthermore, it applies “to the processing of personal data of data subjects who are in the Union by a controller or processor not established in the Union, where the processing activities are related to: 1) the offering of goods or services, irrespective of whether a payment of the data subject is required, to such data subjects in the Union; or 2) the monitoring of their behaviour as far as their behaviour takes place within the Union.” Finally, the regulation states that it “applies to the processing of personal data by a controller not established in the Union, but in a place where Member State law applies by virtue of public international law.”
So what does all this mean? First, if your organization collects personal data or behavioral information from someone residing in an EU country at the time the data is collected, your company is subject to the requirements of the GDPR, regardless of whether or not your organization is based in the EU, or even has a presence in the EU. Second, the law does not require that a financial transaction take place for the scope of the law to kick in. If an organization simply collects the personal data of EU persons, then the requirements of the GDPR apply to the organization, even if the organization is based outside the EU. In sum, if your organization sells or markets goods or services to EU countries, or if your organization collects the personal data of people living in the EU, then the GDPR applies to your organization regardless of whether the organization has a presence in the EU or not.
What Are the Requirements?
The overarching goal of the GDPR is the protection of the personal data of EU citizens. As such, the GDPR requires that organizations take measures to ensure that they are implementing policies and controls that will reduce the risk of potential data breaches and will also provide transparency to the data subjects. Below is a list of the most prominent provisions of the GDPR:
Lawful Basis for Processing – Before an organization can begin processing the personal data of EU citizens, it must first determine if it has a lawful basis to do so. The GDPR outlines six reasons for lawfully processing personal data, such as legal obligations, contracts or vital interests. The most common lawful basis that most businesses will rely on is consent from the data subject. The manner of obtaining consent must be clear, concise and transparent. It also must require subjects to explicitly opt in, not be opted in by default. It is extremely important for each organization to determine the basis on which it may lawfully process the personal data of its subjects.
Privacy and Security – Organizations that collect the personal data of EU citizens may only store and process data when it’s absolutely necessary. Data protection and privacy must be integrated into an organization’s data processing activities (privacy by design). Furthermore, organizations must provide protection against unauthorized or unlawful processing and against accidental loss, destruction or damage. This requires appropriate technical and/or organizational measures, including a method to anonymize data so that it cannot be tied back to a specific individual (e.g. data encryption). Organizations must also perform a data protection impact assessment (DPIA) for certain types of processing that are likely to result in a high risk to individuals’ interests. Finally, depending on the scale of personal information an organization processes, a data protection officer (DPO) must be assigned within the organization to ensure compliance with the GDPR.
Individual Rights – Data subjects have a number of individual rights under the GDPR. Most importantly, individuals have the right to be informed about the collection and use of their personal data. This includes informing them of the reason for processing their data, the retention policy for storing the data, and who it will be shared with. Organizations must provide an individual residing in the EU with access to the personal data gathered about them upon request. Data subjects have the right to request that their data be erased (known as the “right to be forgotten”). Organizations have one month to respond to such requests. Finally, organizations must provide a way for individuals to transmit or move the data collected about them from one data controller or data processor to another.
Breach Notification – The GDPR requires organizations to report data breaches to the relevant supervisory authority within 72 hours of becoming aware of the breach. If the breach is likely to result in a high risk of adversely affecting individuals’ rights and freedoms, the organization must also inform those individuals of the breach “without undue delay”. As a result of the requirement, organizations will need to ensure that they have a robust breach detection, investigation and internal reporting procedure in place. Finally, organizations must keep a record of all data breaches regardless of whether or not notification of any particular breach is required.
Minors – Children are provided additional protections under the GDPR, and organizations that collect the personal data of minors must take special care when doing so. When offering an online service directly to a child, only children aged 13 or over are able to provide their own consent. For children under age 13, an organization must obtain the consent of the child’s parent or legal guardian. Children merit specific protection when an organization uses their personal data for marketing purposes or for creating personality or user profiles. Organizations must write clear privacy notices for children so that they are able to understand what will happen to their personal data, and what rights they have.
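The 72-hour breach notification window is strict enough that it is worth computing explicitly from the moment the organization becomes aware of a breach; a trivial sketch:

```python
from datetime import datetime, timedelta

def notification_deadline(became_aware_at):
    """Latest time to notify the supervisory authority under the GDPR's
    72-hour breach notification requirement."""
    return became_aware_at + timedelta(hours=72)

# Awareness on a Friday morning means the deadline lands on Monday
# morning, weekend included:
deadline = notification_deadline(datetime(2018, 5, 25, 9, 0))
```

The clock runs on calendar hours, not business hours, which is why incident response procedures need to be in place before a breach happens.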
What Are the Penalties for Noncompliance?
The fines associated with noncompliance with the GDPR can be quite substantial. The regulation has a two-tiered system for determining fines based on the severity of the infraction(s). Before assessing fines, the supervisory authority may take into account the nature, gravity and duration of the infringement. It may also determine whether an organization was willfully negligent. Cooperation with the supervisory authority may also be taken into account when assessing fines. Below are the guidelines stated in the GDPR with regard to the assessment of financial penalties for noncompliance:
Infringements that may be subject to administrative fines of up to 10,000,000 EUR or 2% of the total worldwide annual turnover of the preceding financial year, whichever is higher:
Violations of the provisions regarding data security obligations and privacy-by-default measures that need to be taken to protect data from unauthorized access
Not having an assigned DPO or the DPO not fulfilling her obligations
Violations of the DPIA requirement
Violations of the requirement to conclude a processing agreement with all data processors that are engaged by an organization
Violations of the requirement to keep a record of the processing activities carried out
Infringements that may be subject to administrative fines of up to 20,000,000 EUR or 4% of the total worldwide annual turnover of the preceding financial year, whichever is higher:
Violations of the basic principles for processing personal data (e.g. lawful basis for processing)
Violations of provisions regarding a data subject’s rights such as the right to erasure, access to personal data and the right to receive information regarding the processing of personal data
Violation of the provisions regarding the transfer of personal data to third countries
Noncompliance with an order by a supervisory authority
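The "whichever is higher" ceilings in the two tiers above can be expressed directly; the turnover figures in the comments are illustrative:

```python
def fine_cap_eur(annual_turnover_eur, tier):
    """Maximum administrative fine under the GDPR's two-tiered system:
    tier 1 -> max(10M EUR, 2% of turnover); tier 2 -> max(20M EUR, 4%)."""
    if tier == 1:
        return max(10_000_000, 0.02 * annual_turnover_eur)
    return max(20_000_000, 0.04 * annual_turnover_eur)

# For a company with 2 billion EUR turnover, the tier-2 cap is 80 million
# EUR; for a small business, the fixed 20 million EUR floor applies instead.
cap = fine_cap_eur(2_000_000_000, 2)
```

Note that the percentage applies to worldwide turnover, so for large multinationals the percentage branch, not the fixed floor, dominates.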
In addition to the fines outlined above, each EU member state shall also have the right to implement its own fines with regards to noncompliance. Moreover, they may also implement criminal penalties for violations.
How Is the GDPR Enforced?
For those organizations that are based in the EU or who have a legal presence in the EU (e.g. a multinational corporation with an office in an EU member state), the GDPR will be enforced directly by the EU member states’ authorities and their court systems. For organizations that are not based in the EU and also do not have a physical presence in the EU, the GDPR requires them to appoint a “representative” who is located in the EU if the organization is actively doing business in the EU. Presumably this representative will allow the EU to enforce the regulation on such entities.
Finally, the GDPR can be enforced through international law. Written into the GDPR itself is a clause stating that any action against a company from outside the EU must be issued in accordance with international law. There has been long-term and increasing enforcement cooperation between the United States and EU data protection authorities. For example, the EU-U.S. Privacy Shield data sharing agreement puts systems in place for the EU to issue complaints and fines against U.S. companies. In sum, there are a variety of mechanisms in place for the EU to enforce the GDPR against organizations based outside the EU.
What to Do?
If you are an organization that falls under the scope of the GDPR, then it is in your best interest to comply with the regulation, even if you are not based in the EU and do not have a physical presence there. If you are already processing the data of EU citizens, or plan to in the future, making sure your organization is compliant is good business. Fines aside, residents of the EU will want to make sure that any company they do business with is in compliance. Moreover, the privacy and security policies and controls required will help reduce risk to your organization. There are also potential savings in storage and backup costs from reducing ROT (redundant, outdated or trivial) data. Being compliant may also give you a business advantage over competitors who are not.
One of the things that will likely come out of this regulation is a GDPR certification. Businesses who obtain such a certification may be able to display a certification seal on their website and other marketing material which will provide confidence to potential customers. Finally, expect your business partners to start requiring GDPR compliance even if you are not directly impacted. GDPR compliance is here to stay. Given the current events around online privacy in the United States (e.g. Facebook data disclosure), it is not inconceivable that the U.S. could also pass a similar regulation to protect individual privacy. Embracing the GDPR will only help your organization in the long run.
About the Author: Mark Baldwin is the owner and principal consultant at Tectonic Security. He has nearly 20 years of experience in the information security field and holds numerous certifications including CISSP and CISM.
Non-malware attacks are on the rise. According to a study by the Ponemon Institute, 29 percent of the attacks organizations faced in 2017 were fileless, and in 2018 this number may rise to 35 percent.
So, what are non-malware attacks, how do they differ from traditional threats, why are they so dangerous, and what can you do to prevent them? Keep reading and you’ll learn the answer to each of these questions.
Non-malware attacks: what are they?
A non-malware, or fileless, attack is a type of cyberattack in which the malicious code has no body in the file system. In contrast to attacks carried out with traditional malicious software, non-malware attacks don’t require installing any software on a victim’s machine. Essentially, hackers have found a way to turn Windows against itself and carry out fileless attacks using built-in Windows tools.
The idea behind non-malware attacks is pretty simple: instead of dropping custom tools that could be flagged as malware, hackers use the tools that already exist on a device, take over a legitimate system process and run the malicious code in its memory space. This approach is also called “living off the land.”
This is how a non-malware attack usually happens:
A user opens an infected email or visits an infected website
An exploit kit scans the computer for vulnerabilities and uses them for inserting malicious code into one of Windows system administration tools
Fileless malware runs its payload in an available DLL and starts the attack in memory, hiding within a legitimate Windows process
Fileless malware can be downloaded from an infected website or email, introduced as malicious code from an infected application, or even delivered through a zero-day exploit.
Why are non-malware attacks so dangerous?
One of the main problems posed by fileless malware is that it doesn’t use traditional malware files and therefore has no signatures that anti-malware software could use to detect it. That makes detecting fileless attacks extremely challenging.
To understand better why they pose so much danger, let’s take a look at some of the most recent examples of fileless attacks.
Some of the first examples of fileless malware were terminate-and-stay-resident (TSR) viruses. TSR viruses had a body from which they started, but once the malicious code was loaded into memory, the executable file could be deleted.
Another example of a non-malware attack is the UIWIX threat. Just like WannaCry and Petya, UIWIX uses the EternalBlue exploit. It doesn’t drop any files on the disk but instead enables the installation of the DoublePulsar backdoor that lives in the kernel’s memory.
How do non-malware attacks work?
Since non-malware attacks use default Windows tools, they manage to hide their malicious activity behind legitimate Windows processes. As a result, they are nearly undetectable by most anti-malware products.
Main non-malware attack targets
Hackers need to obtain as many resources as possible while keeping their malicious activity undetected. This is why the majority of fileless attacks focus on one of two targets:
Windows Management Instrumentation (WMI)
PowerShell
Depending on their targets, fileless attacks may either run in RAM or exploit vulnerabilities in software scripts.
Attackers choose WMI and PowerShell for several reasons. First, both tools are built into every modern version of Windows, which makes it easier for hackers to spread their malicious code. Second, turning off either tool is not a good idea, since doing so would significantly limit what network administrators can do. Some experts, however, suggest disabling WMI and PowerShell anyway as a preventive measure against fileless attacks.
4 common types of non-malware attacks
There are many types and variations of fileless malware. Below, we listed the four most common ones:
Fileless persistence methods ― the malicious code continues to run even after the system reboot. For instance, malicious scripts may be stored in the Windows Registry and re-start the infection after a reboot.
Memory-only threats ― the attack executes its payload in the memory by exploiting vulnerabilities in Windows services. After a reboot, the infection disappears.
Dual-use tools ― the existing Windows system tools are used for malicious purposes.
Non-Portable Executable (PE) file attacks ― a type of dual-use tool attack that uses legitimate Windows tools and applications as well as scripting hosts such as PowerShell, CScript, or WScript.
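The fileless persistence category above can be hunted for by inspecting autorun entries for commands that invoke script hosts. Below is a minimal sketch of that idea; the entry names and commands are invented for illustration, and on a real Windows machine the entries would be read from the Software\Microsoft\Windows\CurrentVersion\Run keys via the winreg module.

```python
# Defensive sketch: flag autorun entries that invoke script hosts commonly
# abused for fileless persistence. Host list and sample data are assumptions.
SCRIPT_HOSTS = ("powershell", "wscript", "cscript", "mshta", "rundll32")

def suspicious_autoruns(entries):
    """Return (name, command) pairs whose command invokes a script host."""
    flagged = []
    for name, command in entries.items():
        lowered = command.lower()
        if any(host in lowered for host in SCRIPT_HOSTS):
            flagged.append((name, command))
    return flagged

# Made-up example entries standing in for real Run-key values.
autoruns = {
    "OneDrive": r"C:\Users\bob\AppData\Local\Microsoft\OneDrive\OneDrive.exe",
    "Updater": "powershell.exe -NoP -W Hidden -Enc <base64 payload>",
}

for name, cmd in suspicious_autoruns(autoruns):
    print("suspicious autorun:", name, "->", cmd)
```

A real hunt would also whitelist known-good entries, since script hosts do have legitimate administrative uses.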
Non-malware attack techniques
In order to perform a non-malware attack, hackers use different techniques. Here are the four most frequently used ones:
WMI persistence ― the WMI repository is used for storing malicious scripts that can be periodically invoked via WMI bindings.
Script-based techniques ― hackers may use script files for embedding encoded shellcodes or binaries without creating any files on the disk. These scripts can be decrypted on the fly and executed via .NET objects.
Memory exploits ― fileless malware may be run remotely using memory exploits on a victim’s machine.
Reflective DLL injection ― malicious DLLs are loaded into a process’s memory manually, without the need to save these DLLs on the disk. The malicious DLL can be either embedded in infected macros or scripts, or hosted on a remote machine and delivered through a staged network channel.
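The script-based technique above works because an encoded payload is just text inside a script, decoded in memory at run time. A crude defensive heuristic is to flag unusually long base64 runs in script text; the 40-character threshold and the sample "script" below are illustrative assumptions, not tuned values.

```python
import base64
import re

# Runs of 40+ base64-alphabet characters, optionally padded with '='.
BASE64_RUN = re.compile(r"[A-Za-z0-9+/]{40,}={0,2}")

def find_encoded_blobs(script_text):
    """Return candidate embedded payloads found in a script's text."""
    return BASE64_RUN.findall(script_text)

# Build a sample script embedding an encoded blob, the way an attacker might.
payload = base64.b64encode(b"arbitrary shellcode or binary goes here....").decode()
script = '$blob = "' + payload + '"; Invoke-Decode $blob'

print("suspicious blobs found:", len(find_encoded_blobs(script)))
```

In practice this heuristic produces false positives (legitimate scripts also embed base64, e.g. certificates), so it is a triage signal rather than a verdict.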
Now, it’s time to talk about the ways you can protect your company against non-malware attacks.
5 ways of protection against non-malware attacks
Experts offer different ways of preventing and stopping fileless malware: from disabling the most vulnerable Windows tools to using next-generation anti-malware solutions. The following five suggestions may be helpful in protecting your company network against non-malware attacks.
Restrict unnecessary management frameworks. The majority of non-malware threats are based on vulnerabilities found in management frameworks like PowerShell or WMI. Attackers use these frameworks to secretly execute commands on a victim’s machine while the infection lives in its memory. Thus, it is better to disable these tools wherever possible.
Disable macros. Disabling macros altogether prevents unsecure and untrusted code from running on your system. If using macros is a requirement for your enterprise’s end users, you can digitally sign trusted macros and restrict the usage of any other types of macros.
Monitor unauthorized traffic. By constantly monitoring the security appliance logs from different devices, you can detect unauthorized traffic in your company’s network. It would also be helpful to record a set of baselines to understand better the network operating flow and be able to detect any anomalies, such as devices communicating with unauthorized remote devices or transmitting inordinate amounts of data.
Use next-generation endpoint security solutions. In contrast to traditional anti-malware software, some endpoint solutions have a heuristics component able to perform basic system behavior analysis. Since certain types of malware share a common set of behavioral characteristics, heuristics-based methods can halt activities that look like known threat behavior, stopping a possible attack before it delivers its full payload. In the case of a false positive, end users may manually authorize the process to continue.
Keep all devices updated. Patch management plays a significant role in securing your systems and preventing possible breaches. By applying the latest patches promptly, you can effectively raise the level of your systems’ protection against non-malware attacks.
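The monitoring tip above hinges on recording a baseline and then flagging deviations from it. Here is a minimal sketch of that idea using a simple standard-deviation threshold; the byte counts are made-up sample data, and real deployments would baseline per device or per flow from appliance logs.

```python
from statistics import mean, stdev

def is_anomalous(baseline, observed, threshold=3.0):
    """Flag an observation more than `threshold` std deviations above baseline."""
    mu, sigma = mean(baseline), stdev(baseline)
    return observed > mu + threshold * sigma

# Hypothetical daily outbound byte counts for one device.
daily_bytes = [1.1e6, 0.9e6, 1.0e6, 1.2e6, 0.95e6]

print(is_anomalous(daily_bytes, 1.05e6))  # a normal day
print(is_anomalous(daily_bytes, 5.0e7))   # possible exfiltration
```

A device suddenly transmitting fifty times its usual volume, or talking to an unfamiliar remote host, is exactly the kind of anomaly this catches.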
Fileless attacks are on the rise mostly because they are so difficult to detect by standard anti-malware solutions. And while effectively detecting non-malware threats remains a challenge, these tips may help you prevent possible attacks from happening.
About the author: Marcell Gogan is a specialist within digital security solution business design and development, virtualization and cloud computing R&D projects, establishment and management of software research direction. He also loves writing about data management and cyber security.
The SAP threat landscape is always expanding, putting organizations of all sizes and industries at risk of cyber attacks. The idea behind the monthly SAP Cyber Threat Intelligence report is to provide insight into the latest security vulnerabilities and threats.
This set of SAP Security Notes consists of 16 patches, with the majority of them rated medium.
Implementation Flaw is the most common vulnerability type.
A security vulnerability affecting SAP Business Client received the highest CVSS base score of the year: 9.8.
SAP Security Notes – April 2018
SAP has released the monthly critical patch update for April 2018. This patch update closes 16 SAP Security Notes (12 SAP Security Patch Day Notes and 4 Support Package Notes). 5 of the patches are updates to previously released Security Notes.
4 of the notes were released after the second Tuesday of the previous month and before the second Tuesday of this month.
One of the released SAP Security Notes was assessed as Hot News, and 4 have a High priority rating.
The most common vulnerability type is Implementation Flaw.
SAP users are recommended to implement security patches as they are released, as doing so helps protect the SAP landscape.
Critical issues closed by SAP Security Notes in April
The most dangerous vulnerabilities of this update can be patched with the help of the following SAP Security Notes:
2622660: SAP Business Client has a memory corruption vulnerability (CVSS Base Score: 9.8). Attackers can exploit it to inject specially crafted code into working memory, where it will be executed by the vulnerable application. This can lead to complete control of the application, denial of service, command execution, and other attacks, with negative consequences for business processes and business reputation. Install this SAP Security Note to prevent these risks.
2587985: SAP Business One has a Denial of Service (DoS) vulnerability (CVSS Base Score: 7.5, CVE-2017-7668). An attacker can exploit it to terminate a process of a vulnerable component, making the service unavailable to users and causing system downtime, with negative consequences for business processes and business reputation. Install this SAP Security Note to prevent these risks.
2552318: SAP Visual Composer has a Code Injection vulnerability (CVSS Base Score: 7.4). Update 1 to Security Note 2376081. Depending on the injected code, attackers can perform different actions: run their own code, obtain information that should be hidden, change or delete data, modify the output of the system, create new users with higher privileges, control the behavior of the system, or even perform a DoS attack. Install this SAP Security Note to prevent these risks.
Advisories for these SAP vulnerabilities with technical details will be available in three months on erpscan.com. Exploits for the most critical vulnerabilities are already available in ERPScan Security Monitoring Suite.
Once production applications and workloads have been moved to the cloud, re-evaluating the company’s security posture and adjusting the processes used to secure data and applications from cyberattacks are critical next steps.
Cloud infrastructure is ideal for providing resources on demand and significantly reducing the cost of acquiring, deploying and maintaining internal resources.
In addition, organizations can quickly scale cloud resources up or down, eliminating the need to over-provision just in case. But losing control over the physical infrastructure means not being able to use familiar tools to develop insight into what is happening in that infrastructure.
Anyone responsible for IT security needs a strategy for monitoring what is happening in their company’s cloud, so they can shut down any attacks that occur and limit the damage.
The use of log files
While users do not have direct access to public cloud infrastructure, cloud providers do offer access to logs of events that have taken place in the user’s cloud—often for an additional cost. With logs, administrators can view, search, analyze, and even respond to specific events if they use APIs to integrate the event data with a security information and event management (SIEM) solution. So why aren’t log files sufficient to maintain security?
First, all necessary data may not be collected through log files. While management events are automatically logged, data events are not. Some providers may support collection of custom logs, but users would need to specify and activate the logs ahead of time. This makes it difficult or sometimes impossible to go back and investigate areas that were not already being tracked.
Second, while event logs are useful for identifying when an alert was triggered, they do not provide enough information to determine what caused the alert. More detailed information is needed to perform root cause analysis and execute timely remediation. Advanced persistent threats (APTs), now among the most damaging types of breach, cannot be stopped by merely analyzing log files; the most advanced network security solutions require detailed data in real time to have a chance of detecting them. Log files are typically generated at specified intervals, depending on the level of service the user pays for. Users then need to set up a mechanism for storing log files for future analysis; this is not the default. So, while data useful in a breach investigation can be collected, it is not available in real time, which limits the speed of containment and recovery.
Third, sophisticated adversaries are increasingly adept at moving inside an organization without triggering any alerts. In many attacks, previously unseen malware enters an enterprise and lurks there undetected, exfiltrating data over a period of many months. Security today requires more rigorous oversight than log files provide.
And finally, in the long run, logs can be expensive to manage. Obtaining sufficient log data and sifting through it demands time, money, and a commitment to data integration. Existing security monitoring tools that use log data may not be sufficient to investigate new threats and investments may be required for additional tools. Security analysts could end up spending more time on complex data administration, rather than focusing on correlation analysis and incident response.
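The SIEM integration mentioned above is mostly glue code: pull raw JSON events from the provider's log API, normalize them into a common record, and forward them for indexing. The sketch below shows only the normalization step; the field names follow AWS CloudTrail's record schema, other providers use different ones, and the polling and forwarding logic is omitted.

```python
import json

def normalize(raw_event):
    """Map a raw CloudTrail-style JSON event onto a common SIEM record."""
    event = json.loads(raw_event)
    return {
        "time": event.get("eventTime"),
        "actor": event.get("userIdentity", {}).get("arn", "unknown"),
        "action": event.get("eventName"),
        "source_ip": event.get("sourceIPAddress"),
    }

# A minimal sample event (real records carry many more fields).
raw = ('{"eventTime": "2018-04-01T12:00:00Z", '
       '"eventName": "DeleteBucket", "sourceIPAddress": "203.0.113.7"}')
record = normalize(raw)
print(record["action"], "from", record["source_ip"])
```

Even this trivial step illustrates the integration cost the paragraph above describes: every provider and every log type needs its own mapping, and the work grows with each new source.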
What can packet data do?
Data packets are like nested Russian dolls with the content enclosed inside various headers that work to move the packet efficiently through the network. The headers can be very informative, but security today is dependent on what is called deep packet inspection (DPI) of the packet’s payload or content. DPI exposes the specific websites, users, applications, files, or hosts involved in an interaction—information that is not available by inspecting header data alone.
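To see how the "nested doll" layers work, here is a sketch that peels off just the outermost IPv4 header from raw packet bytes. DPI goes much further, inspecting the payload that remains after all headers are stripped; the hand-crafted packet below is a minimal illustration, not captured traffic.

```python
import struct

def parse_ipv4_header(packet):
    """Unpack the fixed 20-byte IPv4 header from the front of a raw packet."""
    (version_ihl, tos, total_len, ident, flags_frag,
     ttl, proto, checksum, src, dst) = struct.unpack("!BBHHHBBH4s4s", packet[:20])
    header_len = (version_ihl & 0x0F) * 4
    return {
        "version": version_ihl >> 4,
        "header_len": header_len,
        "protocol": proto,  # 6 = TCP, 17 = UDP
        "src": ".".join(str(b) for b in src),
        "dst": ".".join(str(b) for b in dst),
        "payload": packet[header_len:],  # what DPI would inspect next
    }

# A hand-crafted 20-byte header: IPv4, TTL 64, TCP, 10.0.0.1 -> 192.0.2.5
pkt = bytes([0x45, 0, 0, 20, 0, 0, 0, 0, 64, 6, 0, 0,
             10, 0, 0, 1, 192, 0, 2, 5])
hdr = parse_ipv4_header(pkt)
print(hdr["src"], "->", hdr["dst"], "proto", hdr["protocol"])
```

The header alone reveals who is talking to whom and over which protocol; identifying the specific application, file, or user requires descending into the payload, which is what DPI does.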
Cloud environments have many potential vulnerabilities that attackers can exploit. And attacks are frequently conducted in multiple stages that may not be caught by intrusion detection systems or next-generation firewalls. To stay ahead of would-be attackers, security analysts increasingly use data correlation and multi-factor analysis to find patterns associated with illegitimate activity. These sophisticated solutions require granular data to work effectively. Most organizations have solutions like these deployed on-premises to evaluate packet data captured from physical infrastructure.
How to gain access to packet-level data in the cloud
Unlike physical infrastructure that can be tapped to produce copies of data packets, cloud architecture is not directly accessible. In the event of an ongoing attack or data breach, a user may be frustrated to learn that the data they need to isolate and resolve the issue is not included in the Service Level Agreement they have with their provider. Fortunately, there are new methods to access packet-level data in clouds.
Container-based sensors have been developed that sit inside the cloud instances and generate copies of packet data. The sensors are automatically deployed inside every new cloud that is spun up, for unlimited scalability. Because the sensors are inside each cloud instance, they have access to every raw packet that comes or goes from that instance. This cloud-native approach to data access ensures no data is missed, for strong cloud security.
What are the benefits of a cloud visibility platform?
Of course, having access to all the packet-level data from every cloud instance presents another problem—volumes of data that can overwhelm security solutions and even lead them to drop packets. A cloud visibility platform filters the raw packets according to user-defined rules and strips out unnecessary data, to deliver only the relevant data to each security solution. This enables security solutions to work more efficiently.
Today, there are two types of visibility platforms available for cloud workloads. One uses a lift-and-shift approach and takes the visibility engine developed for the data center and moves it to the cloud. The engine itself is a monolithic processor that aggregates and filters all the data in one location.
The other approach distributes data aggregation and filtering to each of the cloud instances and communicates the results to a cloud-based management interface. Data can either be delivered directly from the cloud instances to cloud-based security monitoring solutions or backhauled to the data center. The distributed solution has the advantage of being highly scalable, since the data does not need to be transported to a central location for processing. And the distributed solution is more reliable, since there is no single point of failure.
Whether responding to a security incident, a data breach, or in support of litigation, an organization needs a highly effective cloud visibility platform for accessing and preserving the digital traffic that impacts their business. Log files are simply not able to fulfill that requirement.
Ultimately, log files are diagnostic tools. They are not security solutions and they cannot facilitate an effective response to a security threat or breach. With the rising use of advanced persistent threats and multi-stage attacks, effective security requires detailed packet-level data, from every interaction that happens in the cloud. The cost of capturing and filtering packet data will be offset by the increased ability of the security team to detect attacks and accelerate incident response.
About the author: Lora is a Cloud Solution Marketing Manager for Ixia, a Keysight Business, where she uses her knowledge of network test, security, and visibility to communicate how Ixia solutions address a range of pressing IT challenges. Lora has more than 20 years of experience in technology management in a variety of domains including networking and network management, cloud and virtualization, servers, data mining, and enterprise resource software, as well as alliance partner development.
Enterprises are moving to the cloud at a breathtaking pace, and they’re taking valuable data with them. Hackers are right behind them, hot on the trail of as much data as they can steal. The cloud upends traditional notions of networks and hosts, and it topples security practices that use them as a proxy to protect data access. In public clouds, networks and hosts are no longer the most adequate control options available for resources and data.
Amazon Web Services (AWS) S3 buckets are the destination for much of the data moving to the cloud. Given how important this sensitive data is, one would expect enterprises to pay close attention to their S3 security posture. Unfortunately, many news stories highlight how S3 buckets have been misconfigured and left open to public access. It’s one of the most common security weaknesses in the great migration to the cloud, leaving gigabytes of data for hackers to grab.
When investigating why cloud teams were making what seemed to be an obvious configuration mistake, two primary reasons surfaced:
1. Too Much Flexibility (Too Many Options) Turns into Easy Mistakes
S3 is the oldest AWS service and was available before EC2 or Identity and Access Management (IAM). Some access control capabilities were built specifically for S3 before IAM existed. As it stands, there are five different ways to configure and manage access to S3 buckets:
IAM policies
S3 Bucket Policies
Access Control Lists
Query string authentication/ static Web hosting
API access to change the S3 policies
More ways to configure access means more flexibility, but also a higher chance of making a mistake. The other challenge is that there are two separate policy scopes, one for buckets and one for the objects within them, which makes things more complex.
2. A “User” in AWS is Different from a “User” in your Traditional Datacenter
Amazon allows great flexibility in making sure data sharing is simple and users can easily access data across accounts or from the Internet. For traditional enterprises the concept of a “user” typically means a member of the enterprise. In AWS the definition of user is different. On an AWS account, the “Everyone” group includes all users (literally anyone on the internet) and “AWS Authenticated User” means any user with an AWS account. From a data protection perspective, that’s just as bad because anyone on the Internet can open an AWS account.
A customer moving from a traditional enterprise, if not careful, can easily misread the meaning of these access groups and open S3 buckets to “Everyone” or “AWS Authenticated User”, which means opening the buckets to the world.
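These two dangerous groups can be checked for programmatically. The sketch below evaluates a bucket ACL's grants for the public grantee URIs; the grants structure mirrors what boto3's get_bucket_acl call returns, though the sample grants here are invented and a real audit would fetch them from AWS.

```python
# AWS's predefined group URIs for the two access groups discussed above.
PUBLIC_GROUPS = {
    "http://acs.amazonaws.com/groups/global/AllUsers",            # "Everyone"
    "http://acs.amazonaws.com/groups/global/AuthenticatedUsers",  # any AWS account
}

def public_grants(grants):
    """Return the grants that expose the bucket to the world."""
    return [
        g for g in grants
        if g.get("Grantee", {}).get("Type") == "Group"
        and g.get("Grantee", {}).get("URI") in PUBLIC_GROUPS
    ]

# Invented sample in the shape of get_bucket_acl(...)["Grants"].
grants = [
    {"Grantee": {"Type": "CanonicalUser", "ID": "abc123"},
     "Permission": "FULL_CONTROL"},
    {"Grantee": {"Type": "Group",
                 "URI": "http://acs.amazonaws.com/groups/global/AllUsers"},
     "Permission": "READ"},
]
for g in public_grants(grants):
    print("bucket is public:", g["Permission"])
```

Running a check like this across every bucket on a schedule is the basis of the audit item in the checklist that follows.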
S3 Security Checklist
If you are in AWS, and using S3, here is a checklist of things you should configure to ensure your critical data is secure.
Audit for Open Buckets Regularly: At regular intervals, check for buckets that are open to the world. Malicious users can exploit open buckets to find objects with misconfigured ACL permissions and then access those objects.
Encrypt the Data: Enable server-side encryption in AWS so that data is encrypted at rest, i.e. encrypted when objects are written and decrypted when they are read. Ideally, you should enable client-side encryption as well.
Encrypt the Data in Transit: SSL/TLS helps secure data in transit when it is accessed from S3 buckets. Enable Secure Transport in AWS to prevent man-in-the-middle attacks.
Enable Bucket Versioning: Ensure that your AWS S3 buckets have versioning enabled. This helps preserve and recover changed and deleted S3 objects, which can help with ransomware and accidental deletions.
Enable MFA Delete: By default, a user can delete an S3 bucket without logging in with MFA. It is highly recommended that only users authenticated using MFA have the ability to delete buckets. Requiring MFA to delete objects in S3 buckets protects against accidental or intentional deletion and adds an extra layer of security.
Enable Logging: If the Server Access Logging feature is enabled on an S3 bucket, you can track every request made to access the bucket. This allows you to monitor activity, detect anomalies, and protect against unauthorized access.
Monitor all S3 Policy Changes: AWS CloudTrail provides logs for all changes to S3 policies. Auditing policies and checking for public buckets helps, but instead of waiting for regular audits, any change to the policy of an existing bucket should be monitored in real time.
Track Applications Accessing S3: In one attack vector, hackers create an S3 bucket in their account and send data from your account to their bucket. This reveals a limitation of network-centric security in the cloud: traffic needs to be permitted to S3, which is classified as an essential service. To prevent that scenario, you should have IDS capabilities at the application layer and track all the applications in your environment accessing S3. The system should alert if a new application or user starts accessing your S3 buckets.
Limit Access to S3 Buckets: Ensure that your AWS S3 buckets are configured to allow access only to specific IP addresses and authorized accounts in order to protect against unauthorized access.
Close Buckets in Real Time: Even a few moments of public exposure of an S3 bucket can be risky, as it can result in leakage. S3 supports tags, which allow users to label buckets; administrators can label buckets that need to be public with a tag such as “Public”. CloudTrail can then alert when a policy change makes a bucket public without the right tag, and Lambda functions can change the permissions in real time to correct the policies on anomalous or malicious activity.
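The real-time monitoring items above come down to filtering a CloudTrail event stream for S3 policy and ACL changes. Below is a minimal sketch of that filter; the event names are the S3 management actions CloudTrail records, the sample stream is invented, and the alerting or Lambda remediation step is left as the print at the end.

```python
# S3 management actions that change who can access a bucket.
POLICY_CHANGE_EVENTS = {"PutBucketPolicy", "PutBucketAcl", "DeleteBucketPolicy"}

def policy_change_alerts(events):
    """Yield an alert string for each S3 policy/ACL change in the stream."""
    for event in events:
        if event.get("eventName") in POLICY_CHANGE_EVENTS:
            bucket = event.get("requestParameters", {}).get("bucketName", "?")
            yield f"{event['eventName']} on bucket {bucket}"

# Invented sample stream in the shape of CloudTrail records.
stream = [
    {"eventName": "GetObject",
     "requestParameters": {"bucketName": "logs"}},
    {"eventName": "PutBucketAcl",
     "requestParameters": {"bucketName": "customer-data"}},
]
for alert in policy_change_alerts(stream):
    print(alert)
```

Hooked to a Lambda function, the same filter could re-apply the correct policy the moment a bucket without the “Public” tag is opened up, rather than waiting for the next audit.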
About the author: Sanjay Kalra is co-founder and CPO at Lacework, leading the company’s product strategy, drawing on more than 20 years of success and innovation in the cloud, networking, analytics, and security industries. Prior to Lacework, Sanjay was GM of the Application Services Group at Guavus, where he guided the company to market leadership and a successful exit.
Throughout the history of mankind, civilizations have risen and fallen due to a variety of factors. For the most part, the collapse of a civilization wasn’t sudden, but a gradual decline brought on by multiple causes like changing culture, climate or even the introduction of a new culture (such as when Europeans came to the “new world”).
The interconnectivity and globalization of our modern society make it less likely for a civilization to collapse due to traditional factors. But the same factors that make a traditional decline less likely also mean a collapse is apt to look quite different. To start, it would be more sudden and less localized – going across multiple regions and perhaps the entire globe. What could cause such a collapse? There are three main threats to our modern civilization that could cause humanity to go the way of the ancient Mayans.
Human life requires a very specific set of environmental circumstances to survive. And while we can withstand some level of extreme temperatures, climate change has the potential to change, or perhaps even damage, civilization as we know it.
Whether you believe climate change is man-made or a natural part of the earth’s cycle, it is obvious our planet’s climate is changing rapidly. We have already seen an increase in the number and severity of storms across the planet, some with devastating effects. Climate change may also be responsible for a rash of wildfires. As climate change shifts the physical landscape of our planet, the societal impacts will ripple across the globe. Some areas will become uninhabitable as rising seas cause them to sink beneath the waves, or become too hot or cold to live in. The increase in temperatures may also increase insect populations and, as a result, insect-borne diseases will skyrocket. This could force people to migrate from their current locations, increasing the population in the remaining habitable locations and creating a ripe environment for disease. The shifting weather patterns will also put our crops at risk, creating the potential for famine and starvation.
Environmentalist and author Bill McKibben told Business Insider earlier this year that without intervention, the world would be: "If not hell, then a place with a similar temperature."
Ever since the bombs were dropped on Hiroshima and Nagasaki, the world has feared the possibility of nuclear war. The concept of mutual mass destruction caused anxiety and terrifying standoffs throughout the Cold War, but it also helped prevent the use of nuclear weapons (testing notwithstanding). Though the Cold War is over, the threat of nuclear war still looms, as more countries now have the ability to create these powerful weapons. The Doomsday clock – which signifies the potential of a man-made global catastrophe such as nuclear war – has not stood this close to midnight since 1953.
Nuclear war would obviously have a devastating impact on humanity, and this is one of the major factors preventing such a war. All nations know that to use a nuclear weapon means they will become the next target of a nuclear attack. Yet the potential and the possibility for such a war still exist, in part because of unstable governments possessing such weapons.
In the past, attacks in the cyberworld only impacted our digital lives. Consequently, the threat of a cyberattack seems minimal compared to something as major as nuclear war or global climate change. However, our growing dependence on software means the consequences of a digital war could spill over into the physical world.
There is a long history of cyberwar dating back to the early 1980s. The main difference between the cyberwar of the past and that of today, or the future, is the world we live in. Back in the 1980s, when cyberwar became a growing concern for our government, we did not have the World Wide Web or mobile devices with the power of a supercomputer. Nor were our businesses, economy and even health devices tied to applications. It would only take another nation, or even a terrorist organization, to target a vulnerability in the software running the power grid, and civilization could be thrown into chaos. We are seeing this on a small scale in Puerto Rico, where power has been out for more than a month. If this were to happen on a worldwide scale, there would be mass rioting, hoarding of food, and commerce would cease to exist.
There is evidence that cybercriminals are testing the fences for weaknesses already. And we know from research that our software is woefully insecure. Our civilization is dependent on software that is insecure, and all it would take is a coordinated attack to change the way we live. And although we would eventually get the electric grid or other infrastructure back up and running, it could take weeks or months – what would happen to society during this time?
The thought of climate change, nuclear war and cyberwar are all terrifying, and it is tempting to not think about it in an effort to sleep better at night. But we cannot keep our heads in the sand and hope nothing will happen. By ignoring the potential threat of any of these three catastrophes, we are forgoing the opportunity to prevent them – and prevent them we can. We can change the direction of climate change with smart environmental policies and behaviors. We can tone down the rhetoric and adhere to nuclear non-proliferation agreements to lessen the potential for nuclear war. And we can create secure development standards to ensure the software running our world doesn’t have exploitable vulnerabilities. All it takes to accomplish all these things is the desire and the will.
We have the power to ensure our civilization grows, flourishes and is even better than how we found it. The advantage we have over past civilizations is the knowledge to prevent collapse. But first we must recognize the threat so that we can neutralize the risk.
About the author: Jessica Lavery is Director of Corporate Communication and Content Marketing at CA Veracode. In this role Jessica is responsible for overseeing all activities associated with Public Relations, Analyst Relations, Internal Communications, Executive Communications, Content Marketing, Social Media, Visual Identity and Brand. Jessica has nearly 10 years of security experience.