A security intelligence platform goes beyond what a traditional security information and event management (SIEM) tool does. The systems and sensors in your environment report everything from application logs and endpoint alerts to full network packet inspections.
It is also where your intelligence feeds and infrastructure configuration converge to be processed as additional context. The security intelligence platform can digest this information in real time, perform advanced analytics, present prioritized, actionable information and provide both automated and manual guidance to help security analysts remediate incidents.
SOC Resources Wear Thin
A security intelligence platform is the central component of the security immune system. Like the human body, it can sense when an intruder has infiltrated the network and employ various tactics to flush out the threat.
As versatile as this technology is, however, it is not enough on its own to help overworked and understaffed security teams stay on top of security risks. According to IBM research, enterprise security operations centers (SOCs) receive an estimated 200,000 pieces of security event data per day. Only a tiny portion of those events requires immediate and urgent action — but when they lack context, security analysts must treat these alerts equally.
This is a significant problem given the ongoing cybersecurity skills shortage. With so much threat data coming in, analysts need to be able to fill gaps in intelligence and act on security incidents with speed and accuracy. That’s where cognitive security comes into play.
A Security Intelligence Platform Powered by Cognitive Insight
The best way to proactively prevent a security incident is to quickly build the associated attack kill chain from the events and flows gathered during the investigation phase and break it as early as possible. Security teams need cognitive capabilities to qualify, triage and analyze these incidents and provide additional data that is relevant to the investigation.
To extract insights from this external structured and unstructured data, security teams must leverage a wide variety of sources, such as documented software vulnerabilities, research papers, security blogs and threat intelligence feeds.
That is exactly what a cognitive-powered security intelligence platform does. It helps analysts quickly interpret this unstructured data and integrate it with structured data from countless sources. Armed with this collective knowledge and instinct, analysts can respond to threats with unprecedented speed and accuracy and maximize the effectiveness of the security immune system approach.
More organizations are using a threat-modeling approach to identify risks and vulnerabilities and build security into network or application design early on to better mitigate potential threats.
“Threat modeling gives you the way of seeing the forest, and a frame for communicating about the work that you (and your team) are doing and why you’re doing it,” said Adam Shostack, president of Shostack and Associates, in an article for MIS Training Institute. “More concretely, [it] involves developing a shared understanding of a product or service architecture and the problems that could happen.”
Threat Modeling Missteps
The benefits seem clear, but it’s still a relatively new strategy. So, you can expect a few stumbles along the learning curve. Here are four common threat-modeling missteps — and how to avoid them.
1. Thinking One Size Fits All
“There are so many different ways to threat-model,” said Shostack. “I routinely encounter people who read the same advice and find it doesn’t quite work for them.” Approaching threat modeling as a single, massive complex process is overwhelming and sets you off on the wrong foot, he stressed.
“I think the biggest thing I see is people who treat it as a monolith,” said Shostack. “We need to communicate the steps as if they are building blocks. If one doesn’t work for you, don’t throw out threat modeling. There is no one-size-fits-all approach.”
One well-known approach is STRIDE, which examines six categories of threats:
Spoofing
Tampering
Repudiation
Information disclosure
Denial of service
Elevation of privilege
Of course, this may be more appropriate for some teams than others. Regardless of approach, Shostack advises teams to look at the process as a set of building blocks that go together and to break it up into easily digestible chunks.
2. Starting With the Wrong Focus
When getting started, should you focus on assets? No. What about shifting your focus to thinking like an attacker? No again. Why?
“It’s a common recommendation, but the trouble is it’s hard to know what an attacker is going to do. It’s hard to know what their motivations are,” said Shostack. “For example, when the SEA [Syrian Electronic Army] took over the Skype Twitter handle (in 2014), no one expected they were going to break into the law enforcement portal at the same time. Focusing in on the attacker might have distracted people from what they would do — rather than theorizing about their motivations.”
Shostack advocates for starting the process with software at most organizations.
“People building software or systems at a financial institution, a supply chain or a healthcare company should start from the software they’re building because it’s what they know best,” he noted in a post for The New School of Information Security blog. “Another way to say this is that they are surrounded by layers of business analysts, architects, project managers and other folks who translate between the business requirements (including assets) and software and system requirements.”
3. Neglecting the Business Side
Threat modeling is pointless if it focuses solely on the network and applications, believes Itay Kozuch, director of threat research at IntSights.
“Many teams conduct common assessments from their network,” said Kozuch. “But it must come from the business side too. When an organization is trying to evaluate risk and do threat modeling, they need to understand the complete assets of the organization. That means not just IT — but on the business side as well.”
This means going beyond just the technology in the threat-modeling process. Failing to involve all of the business’s key stakeholders, Kozuch stressed, leads teams to incorrectly calculate the probability of the threats that need to be considered. There are, he believes, many angles and perspectives to every threat.
“Management must be part of it,” said Kozuch. “It is a business issue. Risk is there because of business.”
4. Miscalculating the Shelf Life of Results
“Threats are always changing,” said Kozuch. “Often — even soon after you’ve completed the process — the results are no longer valid. You can’t base the next few years off of what you’ve uncovered because it doesn’t represent future threats.”
Archie Agarwal, founder and CEO of ThreatModeler Software, agrees. A threat model, he said in a post for CSO, cannot be static. He cautioned that you can’t take a critical application, do a threat model on it once and assume you are done.
“Your threat model should be a living document,” Agarwal said. “You cannot just build a threat model and forget about it. Your applications are alive.”
Wherever you are in your exploration or implementation of threat modeling, there are many resources out there to help you get started. Check out this series on threat modeling basics for an overview of approaches and essential elements for a successful program.
The cybersecurity industry today remains fragmented, with some organizations having as many as 85 security tools from 45 different vendors. Many of these technologies have been acquired over multiple years to address specific challenges across the complex threat landscape. Each new product needs to be properly installed, configured and managed over its life cycle — and many of these technologies sit in silos, which limits their ability to deliver more effective security.
At the same time, highly collaborative cybercriminals are launching sophisticated attacks that are hard to see and stop, and traditional security practices are unsustainable. That’s why security teams must adopt a new strategy that is rooted in collaboration — an approach that connects the dots across products, people and processes for faster, more effective threat detection and response.
Collaborative Defense: External and Internal Pressures
Every day, we hear about new breaches that impact organizations’ reputations, bottom lines and supply chains. What’s more, these breaches affect customer sentiment, particularly incidents that expose personally identifiable information (PII).
With the number of Internet of Things (IoT) devices forecast to reach 20.4 billion by 2020, according to Gartner, keeping these devices secure will become an even greater challenge. Cybercriminals will undoubtedly continue to collaborate on the darknet to obtain and exchange this high-value PII and use social engineering to steal records to the tune of trillions of dollars.
Compliance mandates will also be a top priority and challenge for organizations. The General Data Protection Regulation (GDPR), for example, will go into effect on May 25. This mandate doesn’t just impact European countries — any organization that processes, stores or uses data related to European Union (EU) citizens must be compliant.
Organizations are also struggling to cope with the growing skills gap in cybersecurity, both in terms of the sheer quantity — there will be 1.8 million unfilled positions over the next few years — and the associated expertise. This lack of resources is compounded by the growing number of disparate security tools and alerts. Still, many organizations attempt to integrate these products themselves by purchasing even more solutions.
There is a misconception around product coverage in many of today’s organizations. Are you really protected simply by checking the box and having an array of products across endpoints, networks, users and cloud? That coverage is absolutely critical, but the products must also integrate with one another to deliver best-of-suite solutions that translate into more effective security.
Here are some key questions to consider:
Are your security products working together across teams — or do your IT and security teams work in silos?
Are those same products working together across all your locations and heterogeneous platforms?
Do your security tools integrate in a manner that provides the security operations center (SOC) with real-time visibility and control across the diverse threat landscape?
Security must become more agile to account for the diverse threat landscape while enabling organizations to thrive. This includes a deeper integration of technologies to deliver repeatable use cases centered on better threat detection and response.
As a foundation for integrated security, organizations should leverage a security intelligence platform that can apply real-time analytics and correlate the massive amount of threat information across users, endpoints, networks and cloud. This comprehensive platform must be able to sense, track and prioritize the most significant alerts that pose the greatest risk to enterprise data.
Additionally, security leaders should infuse artificial intelligence (AI) into their strategy to aid analysts in threat investigation, enabling them to rapidly and confidently understand scope and veracity of threats, including links to broader malware campaigns. This is critical against the backdrop of the cybersecurity skills shortage and the troves of untapped threat intelligence data that AI platforms can ingest, analyze and understand at unprecedented speed and scale.
The above factors can significantly aid security analysts, but what does your incident response plan look like? An orchestration layer architected into a security information and event management (SIEM) solution can help bridge the gaps across people, processes and technology, enabling organizations to rapidly respond to threats with confidence.
Collaborative Defense in Depth
A dynamic security analytics platform that embeds AI and integrates orchestration across the diversity of threats (as well as people and processes) can help set the foundation for a strong security strategy. Collaboration is the glue that integrates disparate point products in a manner that extends their security capabilities beyond what each technology could provide on its own.
At the product level, more open collaboration is critical to the evolution of security technology. Over the past few years, IBM has invested in technologies and partnerships to achieve this goal. One powerful collaborative defense technology is the IBM Security App Exchange, an ecosystem for the entire security community, including IBM and its partners and vendors, to develop and share applications that integrate with IBM Security solutions. To date, the App Exchange has 140 partner and IBM apps and over 100,000 downloads. These apps are extensively tested and validated before they are published on the App Exchange.
An example of the value of the IBM Security App Exchange is the recent launch of the Cisco ISE App for QRadar, which gives security analysts insights into risky users and devices, resulting in faster threat detection and containment and policy enforcement. This app enables analysts to rapidly drill down from QRadar into ISE pxGrid for deeper, faster analysis of policy violations and then remediate affected users and devices — all in a single integrated dashboard.
To learn more about the ISE + QRadar app and how collaborative defense in depth can strengthen your security, register for the IBM Security + Cisco webinar on June 15.
PwC released its 2017 Annual Corporate Directors Survey at the end of last year where it polled over 850 board directors from a wide range of organizations across a dozen industries. Among the topics covered in the survey were the usual board-level concerns about executive compensation, diversity, shareholder activism and environmental, social and governance issues.
But there were also two key areas of interest for those concerned about cyber risks: strategy oversight and board oversight of IT and security. “Considering the pace of change, companies and boards need to be agile in addressing threats to executing their current strategy, as well as disruptions to their entire business model,” the survey stressed.
Do You Have Enough Cybersecurity Expertise?
Directors reported high levels of financial expertise (85 percent), risk management expertise (65 percent) and industry expertise (62 percent) on their boards. However, when it comes to cybersecurity expertise, only 16 percent of companies report having enough. Thirty-nine percent of boards currently have some expertise in cybersecurity in their ranks but admit to needing more — and one-third of boards currently have no cybersecurity expertise and are seeking it out.
Who is tasked with oversight? Exactly half of the boards have delegated that responsibility to the audit committee, while 30 percent of companies look at cybersecurity as a full-board issue. Another 16 percent have cybersecurity reviewed by a dedicated risk committee or an IT steering committee. When asked whether the board needs to allocate more time to specific topics, the top three items reported are cybersecurity (66 percent), strategic planning (64 percent) and IT and digital strategy (61 percent).
Board Oversight: IT and Security
Board directors report spending more time and attention on cybersecurity, though with ample room for improvement. But are they happy with the information they are receiving? When asked to evaluate the presentation skills of various groups, chief information security officers (CISOs) came in last place with only a 19 percent rating of excellent.
Does the increased level of board engagement translate into breach readiness? While 42 percent of respondents reported being “very comfortable” that their company had “appropriately tested its resistance to cyberattacks,” another 45 percent were only moderately comfortable. Asked about whether the company had adequately tested its cyber incident response plan (CIRP), only 32 percent of respondents reported being very comfortable, 49 percent moderately comfortable and 19 percent clearly labeled their organization’s current efforts as “not sufficient.”
Board Oversight: Strategy
Overall, the board gives management high marks on involving the board on strategy development and communicating the strategy to board members — but the numbers point to a disconnect regarding the quality of the information provided. Twenty-two percent of directors said the quality of the information they received regarding emerging and disruptive technologies — and their impact on enterprise strategy — was “lacking.”
Similarly, 23 percent of boards were not happy with the quality of information shared regarding the strategic options that management considered but ultimately rejected.
Given that the role of the board is to contribute to strategy development; oversee management’s implementation of the chosen strategy; and monitor the alignment of risks, performance and strategy, directors want access to quality information to ensure they achieve organizational objectives. Directors are especially concerned that strategy will need to change in the coming years due to factors like the speed of technological change and cyberthreats.
The Trouble With ‘Don’t Have It, Don’t Need It’
Obviously, IT and cybersecurity aren’t the only concerns on board directors’ minds. However, it is troubling to see that 10 percent of respondents indicated they didn’t have any IT and digital expertise on the board — and didn’t need it. In the same vein, as many as 4 percent of respondents acknowledged that cybersecurity was currently receiving no board oversight at all.
The survey cautions boards to be adequately engaged with the oversight of cybersecurity, noting that cybersecurity is an issue that affects the entire company, calling it a “pervasive risk” that needs the attention of the full board. It also recommends that each director understand the level of preparation of the company to detect, respond to and recover from a cybersecurity event.
Board directors — all the way down to the CISO — should follow these recommendations:
The CISO, in collaboration with top management and the board, should develop key risk indicators to improve the tracking and reporting of cyber risks and ensure quality reporting.
This is the second installment in a two-part series about distributed denial-of-service (DDoS) attacks and mitigation on cloud. Be sure to read part one for an overview of denial-of-service (DoS) and DDoS attack variants and potential consequences for cloud service providers (CSPs) and their clients.
In the first installment of this series, we demonstrated how cybercriminals can circumvent DoS defenses by launching DDoS attacks. The three major types of DDoS variants are:
Volume-based attacks
Protocol attacks
Application-layer attacks
We can demonstrate how these attacks work in a simulated environment using Graphical Network Simulator-3 (GNS3), a network simulation tool.
To understand this, first let’s break down the network diagram below:
Figure 1: A corporate network configured with OSPF and BGP
The diagram shows a network of routers in which Open Shortest Path First (OSPF) runs on the company’s internal network and Border Gateway Protocol (BGP) runs on the edge router that connects the internet service provider (ISP) to the end users, clients and other network devices.
Now let’s examine how threat actors can exploit these systems to launch various types of DoS and DDoS attacks.
Volume-Based DDoS Attacks
Cybercriminals typically leverage tools such as Low Orbit Ion Cannon (LOIC) and Wireshark to facilitate volume-based attacks through techniques like Transmission Control Protocol (TCP) and User Datagram Protocol (UDP) flooding. Let’s take a closer look at how these attacks work.
In a TCP flooding attack, threat actors generate a large quantity of traffic to block access to the end resource. The magnitude of this type of attack is commonly measured in bits or packets per second. The diagrams below show a TCP flood attack in which the File Transfer Protocol (FTP) service is flooded with huge volumes of TCP traffic, which eventually brings down the service.
Figure 2: A user connecting to an FTP server hosted on a corporate network
Figure 3: An attacker using bots to send malicious traffic to the target port using the LOIC tool
Figure 4: A client unable to access the FTP service after an attacker has flooded it with corrupt FTP packets
UDP flooding overwhelms the target network with packets sent to random UDP ports from a forged IP address. Forging the source address is easy in this type of attack because UDP does not require a three-way handshake to establish a connection. The flood forces the host to look for an application listening on each of those random ports (which may or may not exist) and to respond with Internet Control Message Protocol (ICMP) destination unreachable packets, thereby crowding out legitimate requests.
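Because UDP is connectionless, a datagram can be addressed to any port without a handshake. The mechanism can be illustrated harmlessly in Python by sending a single datagram to an unused localhost port; a real flood tool spoofs the source address (which requires raw sockets) and sends millions of these:

```python
import random
import socket

# UDP needs no handshake, so a datagram can be fired at any port
# without the target ever agreeing to a connection.
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

# A flood tool would pick random ports on the victim; here we target
# localhost so nothing leaves the machine.
port = random.randint(20000, 60000)
payload = b"\x00" * 64  # filler bytes, sent in bulk by real flood tools

sent = sock.sendto(payload, ("127.0.0.1", port))
print(sent)  # 64: the kernel accepted the datagram, listener or not
sock.close()
```

If nothing listens on the chosen port, the host replies with an ICMP destination unreachable message — exactly the work a flood forces the victim to perform at scale.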
There are other variations of UDP flooding, such as reflection and amplification attacks. In a reflection attack, a threat actor uses publicly available services, such as the Domain Name System (DNS), to attack the target networks. An amplification attack, on the other hand, targets a protocol in an attempt to amplify the response. For example, an attacker might submit a single query of *.ibm.com to the DNS, which will then gather a massive volume of information related to subdomains of IBM.com.
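The leverage an amplification attack hands the attacker is simply the ratio of response size to request size. A quick sketch of the arithmetic (the byte counts below are illustrative, not measured values):

```python
def amplification_factor(request_bytes, response_bytes):
    """Bandwidth multiplier a reflector gives the attacker."""
    return response_bytes / request_bytes

# Hypothetical sizes: a ~60-byte DNS query whose answer enumerates many
# subdomains in a ~3,000-byte response.
factor = amplification_factor(60, 3000)
print(factor)  # 50.0

# With 10 Mbps of attacker upstream, the victim absorbs:
victim_mbps = 10 * factor
print(victim_mbps)  # 500.0 (Mbps of reflected traffic)
```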
Figure 5 shows a similar attack using the Network Time Protocol (NTP). This protocol enables network-connected devices to communicate and synchronize time information, which is communicated over UDP. An attacker can forge the source IP address and then use a publicly available NTP application to send queries to the target. Common tools used in this type of attack include Nmap, Metasploit and Wireshark.
Figure 5: An attacker using Nmap to discover hosted NTP servers
Figure 6: An attacker using Metasploit to determine that the target NTP server is vulnerable to a MOD6 UNSETTRAP distributed, reflected denial-of-service (DRDoS) attack with a 2X packet amplification factor
In this case, the NTP server’s response packet would be twice the size of the request. By sending such requests repeatedly from a spoofed source address, an attacker could flood the target network with a huge number of responses.
Protocol-Based DDoS Attacks
In the scenario shown below, an attacker sends multiple SYN requests from several spoofed Internet Protocol (IP) addresses to a corporate network’s Secure Shell (SSH) jump server to disrupt the service. Tools such as Hping3 and Wireshark are commonly used in this type of attack.
Figure 7: A client (Ubuntu Machine) connecting to a company’s jump server (IP: 18.104.22.168) for remote administration
Figure 8: An attacker performing a protocol DDoS attack on a jump server (target IP: 22.214.171.124), preventing the client from accessing the jump server
Figure 9 shows a real-world exploit of a TCP SYN flood attack performed on a web application as part of a penetration testing (PT) engagement.
Figure 9: A web application becomes unresponsive after a TCP SYN flood attack
In addition to volume-based and protocol attacks, cybercriminals can also launch DDoS campaigns by targeting the application layer. Below are some variations of this attack type.
Slowloris is a prominent attack in which the connection is never idle but, as the name suggests, very slow. The client trickles data and partial requests to the server just often enough to keep its connections open indefinitely; as a result, the server’s connection pool fills up and it cannot process new connections. Threat actors typically use Slowhttptest and Wireshark to facilitate this attack.
Figure 10: A client accessing a web server hosted on a company’s cloud network
Figure 11: A legitimate user unable to access a webpage due to a Slowloris attack
Shown below is a real-world exploit of Slowloris performed on a web application as part of a penetration testing exercise.
Figure 12: A web application becomes unresponsive after a Slowloris attack
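The mechanics above can be sketched as the byte stream a Slowloris client drips out. This minimal Python sketch builds the chunks without opening any sockets; the key point is that the blank line terminating the HTTP header block never appears, so the server waits indefinitely:

```python
import itertools

def slowloris_chunks(host):
    """Yield the byte chunks a Slowloris client drips out, one at a time.

    The request line and Host header go first, but the blank line that
    would terminate the HTTP headers never arrives, so the server keeps
    the connection open waiting for the rest.
    """
    yield f"GET / HTTP/1.1\r\nHost: {host}\r\n".encode()
    for i in itertools.count():
        # Each bogus header resets the server's read timeout; a real
        # client sleeps for several seconds between these sends.
        yield f"X-a: {i}\r\n".encode()

chunks = slowloris_chunks("example.test")
print(next(chunks))  # b'GET / HTTP/1.1\r\nHost: example.test\r\n'
print(next(chunks))  # b'X-a: 0\r\n'
```

The hostname is a placeholder; a real client opens hundreds of such connections in parallel to exhaust the server's pool.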
In an HTTP flood DDoS attack, the attacker sends HTTP GET/POST requests that appear legitimate to overwhelm a web server or application. Instead of using forged IP addresses, this attack leverages botnets, and it requires less bandwidth than volumetric attacks. An HTTP flood is most effective when it forces the server or application to allocate the maximum resources possible in response to every single request.
Shown here is a real-world HTTP flood attack performed using a Session Initiation Protocol (SIP) INVITE message flood on port 5060, rendering the phone unresponsive.
Figure 13: An attacker performing a SIP INVITE flood attack on an IP phone
Figure 14: The IP phone becomes unresponsive after the attack
DDoS Mitigation on the Cloud
To mitigate DDoS attacks on the cloud, security teams must establish a secure perimeter around the cloud infrastructure and allow or drop packets based on specified rules. Below are some key steps organizations can take to harden their security environments to withstand DDoS attempts.
Next-Generation Firewall
A next-generation firewall is capable of performing intrusion prevention and inline deep packet inspection. It can also detect and block sophisticated attacks, including DDoS, by enforcing security policies at the application, network and session layers. Next-generation firewalls give security teams granular control to define custom security rules for network traffic. They also provide myriad security features, such as Secure Sockets Layer (SSL) inspection, web filtering and zero-day attack protection.
Content Delivery Network
A content delivery network (CDN) is a geographically distributed network of proxy servers and their data centers that accelerates the delivery of web content and rich media to users. Although CDNs are not built for DDoS mitigation, they can deflect network-layer threats and absorb application-layer attacks at the network edge. A CDN’s massive scaling capacity makes it a strong defense against volume-based and protocol DDoS attacks.
DDoS Traffic Scrubbing
A DDoS traffic scrubbing service is a dedicated mitigation platform operated by a third-party vendor. This vendor analyzes incoming traffic to detect and eliminate threats with the least possible downtime for the target network. When a DDoS attack is detected, all incoming traffic to the target network is rerouted to one or more of the globally distributed scrubbing data centers. Malicious traffic is then scrubbed and the remaining clean traffic is redirected to the target network.
Anomaly Detection
An anomaly, such as an unusually high volume of traffic from different IP addresses to the same application, should trigger an alarm. But anomaly detection is not that simple, since attackers often craft packets to mimic real user transactions. Detection tools must therefore rely on statistical models and mathematical algorithms rather than simple thresholds. This approach works well for both application-based and protocol attacks.
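As a toy illustration of the statistical approach, per-minute request counts can be compared against a historical baseline; the three-standard-deviation threshold below is an arbitrary choice for the sketch, not a product setting:

```python
from statistics import mean, stdev

def is_anomalous(history, current, sigmas=3.0):
    """Flag `current` if it sits more than `sigmas` standard deviations
    above the historical mean of per-minute request counts."""
    return current > mean(history) + sigmas * stdev(history)

# Hypothetical per-minute request counts for one application.
baseline = [110, 95, 102, 98, 120, 105, 99, 101, 115, 97]
print(is_anomalous(baseline, 118))  # within normal variation -> False
print(is_anomalous(baseline, 450))  # flood-like spike -> True
```

Production systems use far richer features than raw counts (inter-arrival times, session behavior, protocol mix), precisely because attackers mimic legitimate traffic.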
Source Rate Limiting
As the name suggests, source rate limiting blocks any excess traffic based on the source IP from where the attack originates. This is mainly used to limit volume-based traffic by configuring the thresholds and customizing responses when an attack happens. Source rate limiting provides insights into particular websites or applications on a granular level. The drawback is that this method only works for nonspoofed attacks.
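A common way to implement source rate limiting is a token bucket keyed on the source IP. The sketch below is illustrative (the rate and burst capacity are made-up numbers), and it also shows why the method fails against spoofed traffic: the bucket is keyed on an address the attacker controls:

```python
import time

class SourceRateLimiter:
    """Token bucket per source IP: each source may burst up to `capacity`
    packets, then is held to `rate` packets per second."""

    def __init__(self, rate, capacity):
        self.rate = rate          # refill rate, tokens (packets) per second
        self.capacity = capacity  # burst allowance
        self.tokens = {}          # src_ip -> tokens remaining
        self.last = {}            # src_ip -> timestamp of last packet seen

    def allow(self, src_ip, now=None):
        now = time.monotonic() if now is None else now
        last = self.last.setdefault(src_ip, now)
        self.last[src_ip] = now
        # Refill for elapsed time, capped at the bucket's capacity.
        tokens = min(self.capacity,
                     self.tokens.get(src_ip, self.capacity)
                     + (now - last) * self.rate)
        if tokens >= 1.0:
            self.tokens[src_ip] = tokens - 1.0
            return True
        self.tokens[src_ip] = tokens
        return False  # over the limit: this packet is dropped

limiter = SourceRateLimiter(rate=10.0, capacity=3.0)
# Five back-to-back packets from one source: the 3-packet burst passes,
# the excess is dropped.
results = [limiter.allow("203.0.113.7", now=0.0) for _ in range(5)]
print(results)  # [True, True, True, False, False]
# Half a second later the bucket has refilled, so traffic flows again.
recovered = limiter.allow("203.0.113.7", now=0.5)
print(recovered)  # True
```

Each source gets its own bucket, so one flooding address cannot starve others; but if the attacker spoofs a fresh source per packet, every packet lands in a brand-new, full bucket.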
Protocol Rate Limiting
This technique blocks suspicious protocols from any source. For example, Internet Control Message Protocol (ICMP) traffic can be capped at a fixed rate — say, 5 megabits per second (Mbps) — blocking bad traffic beyond that threshold while allowing legitimate traffic through. While it works well for volume-based attacks, the limitation of protocol rate limiting is that some legitimate traffic will occasionally be dropped, requiring security teams to manually analyze logs.
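The ICMP example can be sketched as a byte budget per protocol inside a fixed one-second window; the limits and packet sizes below are illustrative:

```python
class ProtocolRateLimiter:
    """Cap each protocol's throughput inside a fixed one-second window.

    Limits are in bits per second. Everything over the cap is dropped,
    which is why legitimate packets can be lost too.
    """

    def __init__(self, limits_bps):
        self.limits = limits_bps  # protocol -> bits per second allowed
        self.window = {}          # protocol -> (window_start, bits_used)

    def allow(self, protocol, packet_bytes, now):
        limit = self.limits.get(protocol)
        if limit is None:
            return True  # no cap configured for this protocol
        start, used = self.window.get(protocol, (now, 0))
        if now - start >= 1.0:  # a new one-second window begins
            start, used = now, 0
        bits = packet_bytes * 8
        if used + bits > limit:
            self.window[protocol] = (start, used)
            return False  # over the cap: drop, legitimate or not
        self.window[protocol] = (start, used + bits)
        return True

# ICMP capped at 5 Mbps, per the example above.
limiter = ProtocolRateLimiter({"icmp": 5_000_000})
# 1,250-byte pings are 10,000 bits each, so 500 fit in one window.
passed = sum(limiter.allow("icmp", 1250, now=0.0) for _ in range(600))
print(passed)  # 500
# The next window admits ICMP again.
ok_later = limiter.allow("icmp", 1250, now=1.5)
print(ok_later)  # True
```

The dropped 100 packets in the first window might include real pings — the manual log analysis mentioned above exists to catch exactly that.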
Cloud Security Is More Crucial Than Ever
With more and more applications now migrating to the cloud, it is more crucial than ever to secure cloud infrastructure and the applications hosted therein. The DDoS attacks described above can put CSPs and their clients at great risk of data compromise. By employing various defense mechanisms, such as advanced firewalls, traffic scrubbing and anomaly detection, organizations can take major steps toward securing their cloud environments from DDoS attacks.
It is said that innovation and creativity best flourish under pressure and constraint. Think about what the engineers and flight controllers had to do during the Apollo 13 moon mission after an explosion on the vessel. They were constrained by time, fuel, air and many other factors. They had to do things that had never been done before to save the lives of the astronauts.
Another example is the movie “Jaws.” The mechanical sharks used for the movie were extremely problematic, so director Steven Spielberg changed the way he made the film, showing the shark only sparingly to create a more dramatic impact. Arguably, this made for a better movie.
As a final example, American musician Jack White has said that it is essential for him to use things like self-imposed tight deadlines to force his creative hand. He said that having all the money, time or colors in the palette ultimately kills creativity.
The process of complying with General Data Protection Regulation (GDPR) could present organizations with this same type of unexpected opportunity. IBM Security and the IBM Institute for Business Value wanted to understand if there was a group of organizations that was using their GDPR preparations as an opportunity to transform how they were approaching security and privacy; data and analytics; and customer relationships. Were organizations turning this compliance challenge into an impetus for broader transformation?
To answer this question, we surveyed 1,500 GDPR leaders — such as chief privacy officers (CPOs), chief data officers (CDOs), general counsels, chief information security officers (CISOs) and data protection officers — representing 15 industries in 34 countries between February and April 2018. We wanted to capture their practices and opinions as close to the May 25 GDPR compliance deadline as possible.
During the last couple years as organizations have been preparing for GDPR, they have been tested by both the effort involved and the cost of compliance. Organizations have been busy changing processes and developing new ones; creating new roles and building new relationships; training employees; and deploying new tools and technologies. Hopefully, all this can be leveraged for more than just compliance.
IBM’s CPO, Cristina Cabella, agrees. She has said, “In the market, I see GDPR as a great opportunity to make this culture shift and make privacy more understandable and more leveraged as an opportunity to improve the way we protect data, rather than be perceived as a very niche area that is only for technical experts … So, I think it is a great opportunity in that sense.”
The first thing we found was that many organizations still have a lot of work to do before they can achieve full GDPR compliance, even at this late date. Only 36 percent of surveyed executives said they would be fully compliant with GDPR by the enforcement date, and nearly 20 percent told us they had not yet started their preparations but planned to before the May deadline. Organizations could be waiting because of a lack of commitment from leadership — or because they are willing to risk a wait-and-see approach to how enforcement plays out.
Using GDPR Compliance as an Opportunity for Innovation
And yet there was some good news in our respondents’ views of GDPR. The majority held a positive view on the potential of the regulation and what it could do for their organizations. Thirty-nine percent saw GDPR as a chance to transform their security, privacy and data management efforts, and 20 percent said it could be a catalyst for new data-led business models. This is evidence that organizations may see GDPR as a means to improve in the longer term by enabling a stronger overall digital strategy, better security, closer customer relationships, improved efficiency through streamlined data management and increased competitive differentiation.
In our research, we identified a group of leaders who met a specific set of criteria and see GDPR as a spark for change. Among other insights, we found that:
Eighty-three percent of GDPR leaders see security and privacy as key business differentiators.
Nearly three times more GDPR leaders than other surveyed executives believe that GDPR will create new opportunities for data-led business models and data monetization.
Ninety-one percent of GDPR leaders agree that GDPR will enable more trusted relationships and new business opportunities.
We have crossed a threshold and entered a new era for data, security, privacy and digital customer interactions. While many organizations may not have completed all GDPR compliance activities yet, it is vital for organizations large and small to ask themselves how GDPR can help position them for long-term success by unlocking new opportunities and unleashing their creativity.
Despite the tentative position many companies took about transitioning applications, most organizations have gotten on board with embracing cloud computing — and what many are discovering is that they need more than one cloud.
“Things will only get more challenging as businesses continue to move to multi-cloud environments,” said Hurwitz. “Businesses need the ability to manage a collection of different cloud-based services as a single unified environment.”
“To further complicate this situation, many organizations faced with deciding where best to run their applications and store their data are now debating whether to work with a single CSP [cloud service provider] or to spread their workloads across multiple clouds,” said Peter Galvin, vice president of strategy at Thales eSecurity, to SC Media UK. “It’s not uncommon, for example, for medium and large enterprises to run SaaS [software-as-a-service], PaaS [platform-as-a-service] and IaaS [infrastructure-as-a-service] with different providers, in parallel with their own on-premise systems.”
As CSO pointed out, these hybrid and multi-cloud environments are often rife with risk, particularly because of poor visibility and lack of coordination.
The Roots of Compromised Records
Of all the compromised records tracked by X-Force in 2017, more than 2 billion were exposed because of misconfigured cloud servers, network-based backup incidents or other improperly configured systems. Many organizations lack a centralized view of all workloads across all of their environments — so it’s a challenge to manage and enforce security policies effectively.
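A simple way to picture this class of exposure is an automated sweep of storage configurations for public-access settings. The sketch below is purely illustrative: the bucket inventory format, the `acl` values and the `policy_allows_anonymous` flag are hypothetical stand-ins, not the schema of any real cloud provider.

```python
# Minimal sketch: flag storage buckets whose (hypothetical) configuration
# allows public access. A real audit would query the provider's API.

def find_misconfigured(buckets):
    """Return names of buckets that grant public read or write access."""
    risky = []
    for name, cfg in buckets.items():
        if cfg.get("acl", "private") in ("public-read", "public-read-write"):
            risky.append(name)
        elif cfg.get("policy_allows_anonymous", False):
            risky.append(name)
    return risky

inventory = {
    "backups":    {"acl": "private"},
    "web-assets": {"acl": "public-read"},
    "hr-exports": {"acl": "private", "policy_allows_anonymous": True},
}

print(find_misconfigured(inventory))  # ['web-assets', 'hr-exports']
```

Even a crude sweep like this, run centrally across all environments, gives back some of the visibility that per-cloud consoles take away.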
Visibility suffers when data moves to the cloud at a rapid pace, and the growing workload creates a mounting amount of unmanaged information security risk.
According to a 2017 report from RightScale, the percentage of enterprises using multiple clouds grew to a large majority (85 percent). The report also reflects an increase in the number of enterprises planning for multiple public clouds (up from 16 percent to 20 percent). All signs indicate that the skies are getting cloudier, which makes multi-cloud management seem hazier.
It’s no surprise that 39 percent of those who participated in the 2017 Fugue survey, State of Cloud Infrastructure Operations, reported that security compliance slows them down. Trying to implement a comprehensive management platform manually is complicated by the many components of on-premises systems, public cloud services, data services, software services, security components, networks and other connected devices.
Another security risk comes from fickle application programming interfaces (APIs), said Robin Schmitt, general manager for APAC at Neustar, in DatacenterDynamics. “Exposed APIs can leave enterprises vulnerable to breaches as they open the floodgate to DoS [denial-of-service]/DDoS [distributed denial-of-service] attacks. Consequently, poor management of multiple API networks on multiple clouds exponentially increases the risk of cyberattacks for businesses.”
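One common first line of defense for an exposed API is request rate limiting. The token-bucket sketch below illustrates the idea; the capacity and refill parameters are arbitrary assumptions chosen to make the demo deterministic, not tuning guidance.

```python
import time

# Minimal sketch: a token-bucket rate limiter for an exposed API endpoint.
# Each request spends one token; tokens refill over time up to capacity.

class TokenBucket:
    def __init__(self, capacity, refill_per_sec):
        self.capacity = capacity
        self.tokens = float(capacity)
        self.refill = refill_per_sec
        self.last = time.monotonic()

    def allow(self):
        now = time.monotonic()
        elapsed = now - self.last
        self.tokens = min(self.capacity, self.tokens + elapsed * self.refill)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True   # request admitted
        return False      # request rejected (rate exceeded)

# refill_per_sec=0 so the demo is deterministic: only 3 requests pass
bucket = TokenBucket(capacity=3, refill_per_sec=0.0)
results = [bucket.allow() for _ in range(5)]
print(results)  # [True, True, True, False, False]
```

In practice this logic usually lives in an API gateway rather than application code, which is exactly the kind of central management the article argues multi-cloud deployments make harder.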
Let the Next Generation Shine
Security is the top challenge related to managing multi-cloud environments. IBM and IDG research showed that the majority of organizations (77 percent) now view security through a different lens. A management platform that incorporates cognitive computing creates a framework that continues to learn and change as the overall environment evolves.
“Organisations operating in a multi-cloud environment will derive the most benefit from a consistent, integrated solution that will offer comprehensive data security along with the ability to effectively manage encryption keys across a range of diverse environments,” said Galvin.
These environments demand a multi-layered approach, which can very easily start to consume and constrain in-house IT resources. “Current policies that specify using a particular encryption technology or network security technology won’t fly” in a decentralized, multi-cloud environment, said Nataraj Nagaratnam, engineer, CTO and director of cloud security at IBM.
Fortunately, technology innovators continue to develop tools to help customers meet the security challenges in multi-cloud. One example is the IBM Cloud Private platform, according to ZDNet, which includes the Cloud Automation Manager that scans applications and helps deploy them either on-premises or in a cloud.
One last key consideration when trying to determine the right security solutions for your multi-cloud environment is interoperability. Software-defined networks — along with multi-cloud data encryption and other next-generation technologies that defend across platforms — are layers that you can add on when designing a multi-cloud security strategy. Also, a cloud integration platform provides a single control point for several different technologies, including API management and secure gateway.
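The "single control point" idea can be sketched as a thin dispatch layer that fans one policy change out to several provider-specific backends. Everything here is hypothetical — the provider names, the `apply_policy` callable interface and the policy string are illustrative stand-ins for whatever a real cloud integration platform exposes.

```python
# Minimal sketch: a single control point that pushes one security policy
# to several provider-specific backends registered against it.

class ControlPoint:
    def __init__(self):
        self._backends = {}

    def register(self, name, apply_policy):
        """apply_policy is any callable that enacts a policy on one cloud."""
        self._backends[name] = apply_policy

    def push_policy(self, policy):
        # Fan the same policy out everywhere and collect per-cloud results.
        return {name: apply(policy) for name, apply in self._backends.items()}

cp = ControlPoint()
cp.register("aws", lambda p: f"aws applied {p}")
cp.register("azure", lambda p: f"azure applied {p}")
print(cp.push_policy("tls-1.2-min"))
```

The design point is that policy is defined once and enforced everywhere, so interoperability reduces to writing one adapter per provider.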
A business can certainly benefit from sharing security responsibility via a multiple-cloud-vendor relationship. However, it is critical that you carefully evaluate third-party vendors. Everyone wants their tech to be agile and user-friendly — but no one will be able to get anything accomplished if your security is compromised.
Educating employees on security is more crucial than ever. Data from London-based advisory and solutions company Willis Towers Watson points to internal employees — whether through negligence or deliberate offense — as the cause of 66 percent of all cyber breaches. Figures like this are prompting security managers to put more resources into security awareness training.
The Path to Effective Security Awareness Training
When the Financial Services Information Sharing and Analysis Center (FS-ISAC) reached out to security managers about cyberdefense for its 2018 CISO Cybersecurity Trends report, 35 percent said they consider employee training a critically high priority for improving security posture. While awareness training is indeed not a new concept, gone are the days when merely giving employees a series of videos to watch was considered sufficient — especially in the absence of any follow-up measures.
Security awareness training programs need to be interesting, engaging and memorable to be effective, said Lisa Plaggemier, director of security culture and client advocacy at CDK Global. Plaggemier believes the entire concept of awareness programs needs a revamp. (She even gave a talk on the subject, Let’s Blow Up Security Awareness and Start Over, at the 2018 RSA Conference.)
“As far as what is not working — no offense to my technical friends — but I think we are hiring the wrong skill set for this position,” said Plaggemier. “We’re hiring people without the right skill set to be good communicators. I think we need more people who have had experience with selling something. We are trying to influence behavior, and that requires being able to get buy-in from employees.”
“Nobody likes to sit in front of a computer where the speaker does all of the talking. They will be bored easily,” said Aleksandr Yampolskiy, CEO and co-founder of Security Scorecard. “The best presentations show concrete examples. When we conduct training here, I will pull up a website and then show some tools hackers can use to hack a computer. I always show examples of how they can be phished, and I play a video recording where I show how people try and phish me.”
“Before you can get into tactics, you need good creative,” said Plaggemier. “You need a good character. Something that’s funny or interesting.” At CDK Global, Plaggemier relies on an ad agency with great writers to craft compelling awareness programs.
Yampolskiy has experimented with gamification around awareness lessons at previous organizations where he has run awareness programs. “We bought two iPads and encouraged people to try and hack the company,” he said. “People got creative and would call and pretend to be IT, among other things. This kind of competition resulted in amazing findings that professional demonstrators never discovered.” Yampolskiy said the winner of the competition was titled the company’s security champion and received a plaque from the CEO, which got people excited about the training.
3. Adjust Your Approach: No One Cares
“In awareness, we suffer from the curse of passion,” said Plaggemier. “You presume your audience has a certain level of knowledge. I’ve met so many people in security who want to help everyone. They are really passionate about it, and they presume that the audience cares too. But that’s just not the case. You need to start every awareness campaign with this premise that no one cares.”
This brings us back to the hook we mentioned earlier that draws the audience in: It needs to be funny, interesting and engaging to get them to care in the first place, said Plaggemier. “You can use humor, you can — but you have to start with the premise that no one cares in order to see some success,” she said.
4. Enlist Top-Down Support
Building any culture starts by example, said Yampolskiy. “You need buy-in from the CFO, from general counsel. If they lead by example, people will copy that behavior and know that gets rewarded. People look at who is being commended,” he said. The push for significant change should come from the top — otherwise, there may be less potential to create a culture of cyber awareness.
“CISOs [chief information security officers] need to get everyone on board with doing something different,” said Plaggemier. “If you’re going to get everyone’s attention, you need to get everyone on board at the outset.”
Most security operations centers (SOCs) today use security information and event management (SIEM) tools — but security is not solely about products and technologies. When designing a SOC, security leaders must consider other factors too. These include business requirements, the skills of the analysts working in the SOC, the team’s scope and responsibilities and the organization’s security budget.
Classifying SOC Investments and Defining Roles
The budget largely depends on the delivery model. For example, while an on-premises SOC requires a substantial initial investment, it can be classified as a capital expenditure and is therefore subject to depreciation for tax purposes. A software-as-a-service (SaaS) model reduces the initial investment, but its costs can only be treated as an operational expense.
Whether the SOC is delivered on-premises or as a SaaS, it needs to be managed. While the general IT staff can manage the SOC platform, security administrators and analysts must handle security incidents. These two roles require vastly different sets of skills and expertise. The security leaders overseeing the SOC must also have a thorough understanding of who is responsible for what. Administrative tasks include resetting passwords and managing the SIEM, while maintenance tasks include installing patches and ensuring that security controls are properly configured.
Maximizing Incident Response Capabilities
The interaction with the computer security incident response team (CSIRT) process is also very important. By performing an immediate analysis of the security incident at hand (and using a predefined response runbook), the SOC team can be as proactive as possible. During the security incident analysis phase, the use of cognitive technologies can help analysts quickly build the attack pattern and break the kill chain. Integration with a patch management system is also crucial, as this can help analysts block attacks before they cause any damage, saving both money and invaluable time.
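The predefined response runbook mentioned above can be thought of as a simple lookup from incident category to an ordered list of response steps, with an escalation fallback when no runbook matches. The categories and steps below are hypothetical examples for illustration, not a standard incident taxonomy.

```python
# Minimal sketch: dispatch an incident to a predefined response runbook.
# Categories, steps and the escalation fallback are illustrative only.

RUNBOOKS = {
    "phishing": ["quarantine message", "reset credentials", "notify user"],
    "malware":  ["isolate host", "collect forensics", "reimage host"],
}

def respond(incident):
    """Return the ordered response steps for an incident."""
    steps = RUNBOOKS.get(incident.get("category"))
    if steps is None:
        # No predefined runbook: hand off for manual triage.
        return ["escalate to tier-2 analyst"]
    return steps

print(respond({"category": "malware", "host": "ws-042"}))
```

Encoding runbooks as data rather than tribal knowledge is what lets the SOC respond consistently and measure how long each step takes.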
While a security administrator can analyze offenses, manage security incidents and install patches, these tasks are particularly time-intensive. During the time it takes to examine a security event, attackers can generate new threats and infiltrate other areas of the network. For this reason, a CSIRT is more capable of managing threats to the entire system. Some individuals on the team might have multiple responsibilities, but it’s important to clearly define those roles.
It’s equally important for service providers to understand their clients. Thus, the SOC platform should support multitenancy to guarantee segregation of data. As a general requirement, the SIEM should fully integrate with other security controls and CSIRT processes.
The fusion SOC — a kind of mega-SOC used to manage multiple security environments — is becoming increasingly popular. In some cases, the fusion SOC is used to manage security controls within individual organizations. In other cases, it manages different types of SOCs altogether, such as traditional IT, operational technology and more.
Security leaders must also consider the Internet of Things (IoT) when designing a SOC. When a new connected device is introduced into the environment, analysts must ensure that users and manufacturers are held accountable for its security.
Defending the Perimeter
Finally, one of the primary directives of a SOC team is to identify and defend the perimeter. Let’s imagine that a SOC team implemented physical segmentation, which usually focuses on prevention, as opposed to logical segmentation, which focuses on detection. What information do the analysts need to collect? Where is the information located?
The SOC team should consider:
Network information, such as hashes, URLs, connection details, etc.
Vulnerability information reported by vulnerability scanners
Alerts from intrusion prevention systems (IPS) and intrusion detection systems (IDS)
Operating system (OS) logs
The more data and context the SOC collects, the more events per second and flows per interval analysts must manage. This impacts the costs associated with the SIEM and its administration. In general, the security administrator can focus on the most critical incidents by optimizing and tuning SIEM rules.
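One common form of the tuning described above is a scoring rule that ranks events so analysts work the riskiest ones first. The weighting scheme below — a severity scale plus a fixed boost for critical assets — is an illustrative assumption, not the rule language of any particular SIEM.

```python
# Minimal sketch: score SIEM events so the riskiest surface first.
# Severity scale (1-10) and the +5 critical-asset boost are assumptions.

CRITICAL_ASSETS = {"db-primary", "domain-controller"}

def score(event):
    s = event["severity"]
    if event["asset"] in CRITICAL_ASSETS:
        s += 5  # events touching critical assets jump the queue
    return s

events = [
    {"id": 1, "severity": 4, "asset": "laptop-17"},
    {"id": 2, "severity": 3, "asset": "db-primary"},
    {"id": 3, "severity": 9, "asset": "laptop-22"},
]

triage_order = sorted(events, key=score, reverse=True)
print([e["id"] for e in triage_order])  # [3, 2, 1]
```

Note how the low-severity event on the database host outranks the mid-severity laptop event: context, not raw severity, drives the queue.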
It goes without saying that reducing the amount of data collected negatively impacts analysts’ ability to detect incidents and minimize false positives. Furthermore, more sophisticated attacks usually require more context to successfully detect. This is why it’s crucial to implement both physical and logical segmentation. The same goes for configuration management — if not properly optimized, some data sources might induce management difficulties. While using fewer sources can simplify the management of this data, it also reduces the SOC’s detection capabilities.
First Line of Defense: The Security Operations Center
Designing a SOC is not as simple as installing a SIEM and watching the gears turn. In addition to investing in the right technology, security leaders must ensure that their strategy aligns with human factors and business needs. They must also make sure their analysts are focusing on collecting the right data.
In today’s volatile cybersecurity landscape, the SOC team is the first line of defense against rapidly evolving threats. The better-equipped analysts are to efficiently manage these threats — and the more security leaders are able to demonstrate the value of the SOC to business leaders — the safer corporate data will be from sophisticated cybercriminals looking to exploit it.
Privileged account management (PAM) is emerging as one of the hottest topics in cybersecurity — and it’s easy to understand why. Cybercriminals are relentless when it comes to finding and compromising their targets’ privileged credentials to gain unfettered access to critical assets.
An attacker with access to these credentials appears as a trusted user and can go undetected for months. Insider attacks can also inflict far more damage when the threat actors have access to privileged accounts.
Manage Privileged Accounts: What’s the Incentive?
The global average cost of a data breach is $3.62 million, so chief information security officers (CISOs) have plenty of incentive to manage access to privileged accounts robustly and comprehensively. However, market drivers for PAM solutions go beyond the risk of financial consequences due to a breach. Other factors include mandates from auditors and regulators, as well as the desire to increase operational efficiencies by leveraging cloud environments — which adds a layer of complexity when it comes to managing third-party access.
Given all this incentive to effectively manage privileged access, where do enterprises stand today? Shockingly, 54 percent of companies today still use paper or Excel to manage privileged credentials. With no shortage of commercially available solutions on the market, why are so many businesses continuing to use manual processes?
Two answers come to mind: Many vendors offer point solutions, such as password managers and session recorders, that only accomplish a portion of what is needed in (yet another) technology silo. Plus, more robust PAM solutions are often hard to deploy, unintuitive and not integrated with related critical technologies that enable security teams to manage privileged accounts holistically. Businesses looking to move beyond spreadsheets should consider new solutions to mitigate risks and gain a rapid return on investment.
Take Privileged Account Management to the Next Level
Best-in-class PAM solutions offer a comprehensive set of functionalities, integrate into the existing security ecosystem and are simple to deploy and use.
As a baseline, these tools help security teams:
Discover all instances of privileged user and application accounts across the enterprise.
Establish custom workflows for obtaining privileged access.
Securely store privileged credentials in a vault with check-in and check-out functionality.
Automatically rotate passwords when needed — either after every use, at regular intervals or when employees leave the company.
Record and monitor privileged session activity for audit and forensics.
Receive out-of-the-box and custom reports on privileged activity.
Enforce least privilege policies on endpoints.
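The vaulting, check-in/check-out and rotate-after-use capabilities in the list above can be sketched in a few lines. This is a toy model for illustration only — a real PAM product adds encryption at rest, approval workflows, session recording and audit logging, none of which appear here.

```python
import secrets

# Minimal sketch of a credential vault with check-out/check-in and
# rotate-on-check-in. Account names and secrets are made-up examples.

class Vault:
    def __init__(self):
        self._creds = {}          # account -> current password
        self._checked_out = set()

    def store(self, account, password):
        self._creds[account] = password

    def check_out(self, account):
        if account in self._checked_out:
            raise RuntimeError("credential already checked out")
        self._checked_out.add(account)
        return self._creds[account]

    def check_in(self, account, rotate=True):
        self._checked_out.discard(account)
        if rotate:
            # Rotate after every use so a leaked password goes stale.
            self._creds[account] = secrets.token_urlsafe(16)

vault = Vault()
vault.store("svc-backup", "initial-secret")
pw = vault.check_out("svc-backup")
vault.check_in("svc-backup")
print(pw != vault._creds["svc-backup"])  # True: old password is now invalid
```

The exclusive check-out also gives each privileged session a single accountable owner, which is what makes the audit trail in the list above meaningful.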
By integrating a PAM solution with identity governance and administration (IGA) tools, security teams can unify processes for privileged and nonprivileged users. They can also ensure privileged users are granted appropriate access permissions based on similar users’ attributes (e.g., job role, department, etc.) and in accordance with the organization’s access policy. Events related to privileged access are sent to a security information and event management (SIEM) platform to correlate alerts with other real-time threats, which helps analysts prioritize the riskiest incidents. Integration with user behavioral analytics (UBA) solutions, meanwhile, helps security teams identify behavioral anomalies, such as the use of a rarely exercised privilege.
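The "rarely used privilege" anomaly can be approximated by comparing new privilege events against a historical baseline of how often each user has exercised each privilege. The frequency threshold below is an arbitrary assumption for illustration; real UBA tools use far richer behavioral models.

```python
from collections import Counter

# Minimal sketch: flag privilege use that is rare for that user, given a
# historical baseline. min_count=3 is an illustrative threshold.

def rare_privilege_events(history, new_events, min_count=3):
    """Return (user, privilege) events seen fewer than min_count times
    in the historical baseline."""
    baseline = Counter((user, priv) for user, priv in history)
    return [(u, p) for u, p in new_events if baseline[(u, p)] < min_count]

history = [("alice", "db-admin")] * 10 + [("bob", "db-admin")]
new = [("alice", "db-admin"), ("bob", "db-admin")]

# alice uses db-admin routinely; bob almost never does, so bob is flagged
print(rare_privilege_events(history, new))  # [('bob', 'db-admin')]
```

Feeding flags like this into the SIEM is what turns raw privileged-access logs into prioritized, context-rich alerts.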
Embracing a Holistic Approach to PAM
IBM Security Secret Server is a next-generation privileged account management offering that protects privileged accounts from cybercriminals and insider threats, helps ensure compliance with evolving regulations and gives authorized employees access to the tools and information they need to drive productivity. The solution protects privileged accounts from abuse and misuse — and enables organizations to enforce least privilege policies and control applications to reduce the attack surface.
By investing in PAM tools that integrate seamlessly into the existing environment, organizations can put the full power of the security immune system behind the ongoing effort to protect sensitive access credentials from increasingly sophisticated threat actors. This enables security teams to move beyond inefficient, manual processes and embrace a holistic approach to privileged account management.