We are sometimes asked to compare our threat detection and response solutions to those custom-assembled by security experts from various open source products. With a wide array of quality point solutions available, it’s natural to consider whether a combination of best-of-breed open source tools might serve a particular organization better than an integrated commercial solution.

To start with, RSA is a big fan of open source software and open source threat intelligence, and we participate actively in the security community’s intelligence-sharing process. This collaborative tradition is strong in the security space, as we all battle the same adversaries to protect our organizations and to keep the internet as safe as possible for everyone.

In practical terms, this is a classic “build vs. buy” choice, and boils down to an organization’s preferences, available skills, and risk tolerance. While strong solutions are possible with either choice, the differences are important to understand.

  • Preferences: Some organizations are very comfortable with open source software. They’ve typically built up skills specific to the open source software model, particularly community- and self-support, along with a full understanding of the various licenses, including the GPL. Other organizations are more comfortable with commercial software, or even actively prefer that approach. For these organizations, the availability of support, predictable upgrades, and lifecycle guarantees offsets potential license savings. Many have explicit rules about this in their governance, risk, and compliance (GRC) playbooks.
  • Available Skills: The availability of deep security and integration skills – and the ability to retain them – is an important factor in choosing between custom integration and a commercial platform. If your organization’s skill set is strong and stable, you may feel comfortable integrating different technologies for logs, packets, endpoints, and netflow, and possibly separate analysis and remediation tools. Remember that this is not a one-time event, but a continuous process of maintaining integrations and adding capabilities as they become available. With a commercial threat detection and response platform, the integration is managed by the vendor, freeing up resources to focus on threat hunting. Furthermore, in the case of RSA offerings, the threat hunting activity can easily be split between analysts of differing skill levels, making everyone much more productive. Lastly, interoperability with various SIEMs, IPSs, firewalls, etc., is maintained by the vendors, so customers don’t need to worry about it.
  • Risk Tolerance: For organizations that integrate security strategy with business strategy, IT risk is an important category. Breaches have a potentially huge negative impact and are appropriately weighted in most risk programs. For an open source solution, there are additional risks to evaluate. Among these are the continued availability of the high-level skills required to manage and maintain the solution. You’ll also want to consider the stability of the projects underlying the components used, and the availability of suitable alternative components – as well as the effort required to replace and integrate a component. For a commercial platform, the stability and maturity of the vendor, from both technology and business perspectives, defines the risk in adopting it. Commercial support systems lower the risk of a catastrophic outage, as do support SLAs and the existence of professional services, including incident response support.

So the choice ultimately depends on the organization making the decision. Done really well, a custom-integrated solution can be effective. However, with that choice you have to possess (and retain) the skills to do it. You also make yourself dependent on multiple projects and vendors, increasing the risk that one may cease to maintain a component, or fail altogether.

Our approach is to integrate across our RSA offerings so customers don’t need to worry about that part, and to interoperate with any component a customer chooses to keep in place. A common example is a customer adding RSA threat detection and response components to its existing SIEM solution. In this instance the analysis and detection take place in the RSA framework, so you still get all the benefits of integration.

One good piece of advice for anyone considering a threat detection and response solution – really for any IT decision – is to look out five years into the future, and consider changes that may impact your organization. Certainly internal considerations, such as maintenance of employee skills and organizational risk tolerance, will be important. It’s also critical to evaluate the probability that technology partners will continue to support your activities at a predictable and professional level. Remember that security is a process, not an event. When you choose something as critically important as a threat detection and response solution, you need to treat it as an ongoing commitment. It’s important to choose wisely.

Learn more about our threat detection and response capabilities in RSA NetWitness® Suite, as well as our participation in the security sharing process through RSA® Live and Live Connect, RSA® Link and RSA® Research threat intelligence sharing.

The post A Security Decision – Build or Buy appeared first on Speaking of Security - The RSA Blog.

This week, we continue our journey through the seven steps you can follow to build a risk management framework for information. We’ve already looked at how to identify important information that may be at risk in your organization, where to find the information and how to assess the risk it presents within its business context. If you’ve followed these steps, you know where the risks lie and how big they may be; the next step is to evaluate the risk treatments available to you.

In step 3, we assessed the potential damage that could result from information being lost, stolen, altered or made inaccessible. To evaluate risk treatments, you’ll need to examine that potential damage—i.e., the inherent risk—in relation to your organization’s appetite and tolerance for information risk. All organizations have to be willing to take some risks in order to move forward; “information risk appetite” defines the maximum amount of risk your organization is willing to assume. One way to think about it is to ask yourself at what point the potential impact of lost, stolen, altered or inaccessible information becomes unacceptable to your organization. (Keep in mind that the cost can be expressed in a variety of ways: the type or number of information records; the financial cost to compensate customers, make up for lost revenue or remediate the problem; or something more qualitative, like bad publicity.) At the point where the inherent risk exceeds the organization’s risk tolerance, appropriate technical and organizational risk treatments must be implemented to bring the inherent risk down to an acceptable level.
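To make that comparison concrete, here is a minimal sketch of the inherent-risk-versus-tolerance test described above. The scale, the tolerance value, and the asset scores are assumptions for illustration, not part of RSA’s methodology.

```python
# Toy model of comparing inherent risk to risk tolerance.
# The scale, tolerance, and scores below are illustrative assumptions.

RISK_TOLERANCE = 3.0  # assumed maximum acceptable risk on a 1-5 scale

# Hypothetical inherent-risk scores produced by the Step 3 assessment
inherent_risk = {
    "customer_database": 4.5,
    "hr_portal": 2.0,
    "design_documents": 3.5,
}

for asset, risk in inherent_risk.items():
    if risk > RISK_TOLERANCE:
        print(f"{asset}: risk {risk} exceeds tolerance {RISK_TOLERANCE} "
              "-> apply technical/organizational treatments")
    else:
        print(f"{asset}: within tolerance; monitor")
```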

In evaluating risk treatments, as in the previous steps, documentation is key. Only by documenting what your organization is currently doing to control risk can you assess the effectiveness and shortcomings of those controls, and make informed decisions about how you will treat risk going forward. You’ll be documenting current technical risk treatments (such as authentication methods, encryption, application and device patching, and compliance with password policies), as well as organizational controls (such as employee security awareness and training, physical controls, employee hiring practices, third-party risk management, and security-related policies and procedures). To gather all this information, you’ll need to create questionnaires for the responsible parties of the IT infrastructure—i.e., the owners associated with applications, databases, data stores, web sites, devices and third parties—as well as for those individuals responsible for organizational controls. For example, you might ask application owners questions such as:

  • Is the application constructed in accordance with security policies?
  • Are authentication standards being followed?
  • Are code injection and cross-site scripting blocked?

If it sounds like documenting controls is going to mean a lot of time and effort (both for you and for the people responding to your questionnaires), I’ll admit it can be quite a bit of work. But it doesn’t have to be—and it won’t be, if you make automation part of your process. Automated assessment feeds and other automated tools for collecting and recording relevant information can dramatically reduce the time and effort required to document and evaluate current risk treatments. (Also, because control information is constantly changing, you’ll have to refresh it going forward. Automation allows you to do that without having to start from scratch with each assessment cycle.)
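One way to make that automation practical is to capture questionnaires as structured data from the start, so responses can be collected and scored programmatically. Here is a minimal sketch; it reuses the application-owner questions above, and the yes/no scoring rule is an assumption for illustration.

```python
# Encode a control questionnaire as data so collection and scoring can be
# automated. The yes/no scoring rule is an illustrative assumption.

QUESTIONS = {
    "sec_policy": "Is the application constructed in accordance with security policies?",
    "authn_std": "Are authentication standards being followed?",
    "inj_xss": "Are code injection and cross-site scripting blocked?",
}

def control_coverage(responses):
    """Fraction of controls confirmed in place -- a crude baseline metric."""
    confirmed = sum(1 for q in QUESTIONS if responses.get(q) == "yes")
    return confirmed / len(QUESTIONS)

# Example responses from one application owner
owner_responses = {"sec_policy": "yes", "authn_std": "yes", "inj_xss": "no"}
print(f"{control_coverage(owner_responses):.0%}")  # 67%
```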

Ultimately, the goal of collecting all this information about existing controls is to establish what your organization is doing to control information risk—and how well it’s working. You’ll want to look not just at where the organization has controls in place, but also where controls are missing or not working properly, who’s responsible for them and what’s being done to address any problems. This information is your baseline for evaluating controls.

For a deeper dive into the process of evaluating risk treatments, download the white paper 7 Steps to Build a GRC Framework. Information for Step 4 includes questions to ask to collect the information you need, a diagram of areas to evaluate and tools for collecting information about them, and a summary of what you should expect to know by the end of the process.

The post 7 STEPS TO A GRC RISK MANAGEMENT FRAMEWORK—4: EVALUATE RISK TREATMENTS appeared first on Speaking of Security - The RSA Blog.

In RSA’s quest to build out a deeper pool of future Defenders of the Digital Universe, I had the pleasure of having Meghan O’Connor as a summer intern on my team. During her exit interview, I asked her what she hadn’t realized about cybersecurity and fraud prevention prior to her internship and what advice she would now give.

Didn’t realize…

  • How common phishing attacks are, especially those designed to gain access to the wider university network using stolen student credentials.
  • Universities are targeted by cybercriminals because of all the rich personal data they hold on students – think of all the information you gave during enrollment: SSN, medical records, transcripts, insurance info, etc.
  • How lazy people can be with regard to cybersecurity. I’m definitely less annoyed now when I have to take extra steps to verify my identity and protect my information.
  • The extent of cybercriminal networks – once your information is stolen by one person, it can be traded over and over again.

I thought…

  • Biometrics were a “shortcut” and were less secure than passwords, which in hindsight makes no sense, because my fingerprint and eye print are always with me.
  • I’d never be a victim of identity theft. No one would want to steal my information because there’s not much money in my bank account and I don’t have access to any desirable information.
  • Getting prompted to change my password after a certain amount of time or after a suspicious login was annoying.

Things you should know…

  • Authentication is important – if a site you use offers multi-factor authentication (MFA), use it!
  • IoT is real – if you are buying new gadgets for school, they are likely IoT-enabled, so ensure you set up security from the start – yes, that even includes your gaming consoles. And if you are buying a used device, ensure you wipe it before using it, because it may be infected with malware.
  • Ransomware can target anyone – so when I hear “WannaCry” it has a whole new meaning! We all need to be aware of it and prepare for it. It will lock down your computer and take everything hostage … yes, that includes your pictures, term papers and other personal information. Even worse, you could potentially infect the university’s system. Back up your data so that if you become a victim you won’t feel pressured to pay the ransom.
  • Resist the click – think twice about clicking on that link or video in email, text or social media, as it might be a phishing attack. The Anti-Fraud Command Center I worked with while at RSA identifies a new phishing attack every 30 seconds.
  • Online is forever – anything you post online will be accessible forever, even if you delete it. That includes all your social media posts, discussions on gaming consoles, texts, emails, etc. Remember that a future employer, spouse or cybercriminal could gain access to these.

As summer wraps up and RSA’s interns go back to school, it is great to see our future Defenders of the Digital Universe grow more knowledgeable about cybersecurity and why it is a shared responsibility. To learn more about fraud trends, follow us on Twitter @RSAFraud.

The post My Summer Defending the Digital Universe appeared first on Speaking of Security - The RSA Blog.

By Tim Norris

Mobile and Cloud have raised the stakes for security in general and for identity-related security challenges in particular. But while identity-related risk has grown tremendously, in many ways, the risks themselves are ones we’ve long recognized – such as orphaned accounts, segregation of duties (SoD) violations and privileges following users to new roles, among others.

What’s different in this new environment, where there are more users and kinds of user identities than ever, is that those familiar risks have evolved to become much bigger and more difficult to manage. To address them successfully, we need different strategies for the new ways in which they impact organizations.

Here’s a look at some of the main areas where identity risk is causing organizations pain and the strategies to address the risk.

More Users, More Places, More Problems
Today’s identity landscape has more users in more places – on-premises, on mobile devices and in the Cloud – and they’re constantly joining, moving or leaving accounts, applications, file shares and portals. They can generate millions of entitlements, making it a real struggle to manage their identity and access information. Identity risk factors such as orphaned accounts, shared accounts, unauthorized changes, identity movement, separation-of-duty issues, out-of-role access and overprovisioning pose potential problems on a scale that was once unimaginable.

Fear of Audit Failure
Given that 81% of data breaches today involve compromised identities, it should come as no surprise that auditors are scrutinizing access certifications more zealously than ever. And more organizations are failing audits under this deep scrutiny – which can lead to even more scrutiny. Governance processes that have proved adequate in the past may no longer be sufficient to cover the increasing scope of audits.

So Many Entitlements, So Little Time
Shaken by growing numbers of orphaned accounts, overprovisioned users and other problems, many organizations are putting a renewed focus on identity governance – and that’s putting a heavy burden on business managers. They’re increasingly being asked to fill out complex reports to verify appropriate application access, without being given much context for access decisions. Sometimes, doing the best with what they’ve got means rubber-stamping verifications – which can make security vulnerabilities even worse.

Strategies to Address Identity Risk
The situation may sound bleak, but we’ve identified several strategies to help take the anxiety out of mitigating identity risk today:

  • Enable risk-aware, context-driven governance by integrating risk management and access management in identity governance and lifecycle processes – instead of managing them as separate issues.
  • Surface meaningful information for decisions by organizing activities by risk, priority and context, which can help reduce certification fatigue for business managers.
  • Discover outliers and inappropriate access by using a risk-based approach to quickly identify outlying access requests, flag them and prioritize them for remediation (see the sketch after this list).
  • Automate processes so that in addition to providing secure access, you can fulfill it efficiently and effectively.
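To make the outlier-discovery strategy concrete, here is a minimal sketch of peer-group analysis: entitlements that are rare within a user’s peer group are flagged for review. The data shapes and the rarity threshold are assumptions for illustration, not how RSA SecurID Identity Governance and Lifecycle works internally.

```python
# Flag entitlements held by fewer than half of a user's departmental peers.
# The data model and 50% rarity threshold are illustrative assumptions.

from collections import Counter, defaultdict

def outlier_entitlements(users, threshold=0.5):
    """Return (user, entitlement) pairs that are rare in the peer group."""
    by_dept = defaultdict(list)
    for user in users:
        by_dept[user["dept"]].append(user)

    flagged = []
    for peers in by_dept.values():
        counts = Counter(e for u in peers for e in u["entitlements"])
        for u in peers:
            for e in u["entitlements"]:
                if counts[e] / len(peers) < threshold:
                    flagged.append((u["name"], e))
    return flagged

users = [
    {"name": "ana", "dept": "finance", "entitlements": {"gl_read", "gl_post"}},
    {"name": "ben", "dept": "finance", "entitlements": {"gl_read", "gl_post"}},
    {"name": "carl", "dept": "finance", "entitlements": {"gl_read", "prod_db_admin"}},
]
print(outlier_entitlements(users))  # [('carl', 'prod_db_admin')]
```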

Take a deeper dive into these strategies and the identity risk factors they address in our new eBook and learn more about how RSA SecurID® Identity Governance and Lifecycle can help.

The post Addressing Identity Risk Factors appeared first on Speaking of Security - The RSA Blog.

In the last couple of weeks, we’ve been talking in this space about the seven steps to building a risk management framework for information. We started with the first step, identifying information that needs to be protected, and moved on to the second step: determining where that information exists inside your organization and its extended ecosystem, and how much of it there is. Once you know those two things, you can move on to step three, which we’ll discuss in this post. In this step, you assess the inherent risk associated with the information you’ve identified, so that you have a meaningful measure of information risk around which to build your framework for risk management.

It’s important to understand that any information you categorize as “important information” carries with it some inherent level of business risk. However, the degree of risk will vary depending on the nature of the information, especially on how much damage to the business would result from the information being lost, stolen, altered or made inaccessible. Here’s one example of a simple formula you can apply to calculate the inherent risk associated with a business process, IT asset, or a third party handling or storing information that needs to be protected:

Inherent Risk = (Criticality of Information × Number of Records) × Impact per Record Associated with Each Type of Threat
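As a minimal worked example (the criticality scale and per-record cost below are assumptions for illustration, not prescribed values):

```python
# Toy application of the inherent-risk formula above.

def inherent_risk(criticality, num_records, impact_per_record):
    """(Criticality of Information x Number of Records) x Impact per Record."""
    return (criticality * num_records) * impact_per_record

# e.g., regulated customer data: criticality 5 on an assumed 1-5 scale,
# 100,000 records, and an assumed $150 breach cost per record
print(f"${inherent_risk(5, 100_000, 150):,}")  # $75,000,000
```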

To determine the impact per record, you can look to public sources such as survey results that posit a cost per record associated with a breach of various record types. For some types of records, such as intellectual property, there won’t be public information. Then you have to come up with the value on your own. What would the damage be to your organization if its “secret sauce” was lost, stolen, altered, destroyed, or made inaccessible? Determining that value is a good place to engage senior management and your organization’s board; they’ll begin to understand your technical problem in terms they worry about every day. When assessing risk, it’s fine to use qualitative measures of value, as long as everyone agrees on them. For example, if your organization has zero tolerance for ending up on the front page of the Wall Street Journal or the local newspaper, that is a critical level of risk regardless of the monetary cost associated with it.

In the end, you’ll have developed a rating scale that the whole organization can use: the business, technology and information security, and internal audit. This way, when anyone raises an issue, you can depict it on the scale and prioritize it the same way, in terms everyone understands. Here are a few examples of types of information that can influence information criticality:

  • Employee information
  • Regulated and contractually protected customer information
  • Intellectual property and trade secrets
  • Market strategies, merger and acquisition information, new product information
  • Industrial control systems

In calculating inherent risk, it’s important to take into account the larger picture of inherent risk of the processes, third parties and IT infrastructure that are related to important information. It’s also important to be able to:

  • Incorporate monetary values into the formula for calculating inherent risk (if they are available, and if they are applicable to the type of information)
  • Balance monetary and qualitative considerations in your assessment of inherent risk
  • Create a scale that ranks risk from low to critical based on the potential for financial loss and agreed-upon qualitative measures

Successfully calculating inherent risk enables you to understand where the greatest information risk is located within your organization and your extended ecosystem, where it makes the most sense to invest human and capital resources to address information risk, where technical and organizational risk treatments need to be applied on a prioritized basis, and what the worst-case impact to the organization would be in the event of lost, stolen, altered or inaccessible information. You can also get an understanding of whether non-public business information is more important than customer information or intellectual property, and where your greatest inherent exposure resides. All of this detail is useful in planning for and responding to information security incidents. If an incident occurs, you can quickly see the connection points within your infrastructure, empowering you to make quicker and better-informed decisions in your response.

To put this discussion of inherent risk in broader context, download Build a GRC Framework for Business Risk Management. This summary of RSA’s seven-step methodology for creating a risk management framework for information provides an overview of the steps in the framework, from identifying and locating important information all the way through documenting controls and reporting on risk.

The post 7 STEPS TO A GRC RISK MANAGEMENT FRAMEWORK—3: ASSESS RISK appeared first on Speaking of Security - The RSA Blog.

Nowadays, it is common to use machine learning to detect online fraud. In fact, machine learning is everywhere. Due to its independent nature and human-like intelligence qualities, machine learning does, at times, seem like an inexplicable “black box.” But truth be told, machine learning doesn’t have to be like that.

Here is what you should know if you decide to give “computers the ability to learn without being explicitly programmed.”

Before choosing fraud detection technologies that leverage machine learning, consider the advantages and disadvantages of the different algorithms, together with the demand for transparency, prediction accuracy and the ability to adjust to a rapidly changing landscape (fraud or otherwise). The specific algorithm you use, combined with your domain knowledge, can make all the difference to your success. Here are a few algorithms and factors you should consider:

The Artificial Neural Network (ANN; or its more advanced counterpart, the Deep Neural Net) is considered a “universal approximator,” as it fits almost any scenario and field. But is it the best choice in every field?

While Deep Neural Nets are superior to other algorithms when image or speech recognition is considered (in other words, when working with huge data sets), there are many examples where an ANN produces inferior results compared to other classification techniques, especially if the training sample is limited in size. The ANN requires a large set of training data and is prone to over-fitting. It is sometimes referred to as “the second best way to solve any problem,” the best way being to actually understand the different parameters of the problem you are trying to solve and then implement a model that closely resembles reality.

Another statistical method, the naïve Bayes classifier – a probabilistic supervised classification tool – has been proven mathematically to have a high degree of efficiency and reliability. The Naive Bayes algorithm – leveraged in risk-based authentication technologies – affords fast, highly scalable model building and scoring. Bayesian classifiers are usually faster to learn new fraud patterns on smaller datasets (e.g., when less fraud/genuine feedback is available). They are flexible about the addition of new predictors, which is crucial in the ever-changing fraud reality, and their simplicity prevents them from fitting their training data too closely.

With the Bayesian approach, the parameters that contribute to the final result can be made visible (not a “black box”). This means that Bayesian classifiers are free from the intrinsic disadvantage of methods like the ANN that cannot provide information about the relative significance of the various parameters – those are the real black-box models. To that end, users of risk-based authentication can see the top parameters that contributed most to the risk assessment, and these factors are visualized through a Case Management application.
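As a small illustration of that transparency, consider a naïve Bayes risk score: each observed feature contributes an additive log-likelihood-ratio term that can be read off directly. The probabilities below are invented toy estimates, not RSA’s actual model.

```python
# Each feature's contribution to a naive Bayes fraud score is an additive
# log-likelihood-ratio term, so the top risk drivers are directly visible.

import math

# (P(feature observed | fraud), P(feature observed | genuine)) -- toy values
likelihoods = {
    "new_device": (0.60, 0.05),
    "foreign_ip": (0.40, 0.02),
    "usual_hour": (0.30, 0.70),
}

def contributions(observed):
    """Per-feature additive contributions to the overall fraud score."""
    return {f: math.log(p_fraud / p_genuine)
            for f, (p_fraud, p_genuine) in likelihoods.items()
            if f in observed}

# A transaction from a new device at a foreign IP: show top risk drivers
for feature, weight in sorted(contributions({"new_device", "foreign_ip"}).items(),
                              key=lambda kv: -kv[1]):
    print(f"{feature}: {weight:+.2f}")
# foreign_ip: +3.00
# new_device: +2.48
```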

Relying on artificial intelligence does not necessarily mean living in the dark or losing control over what your system is doing. When you have a robust machine learning algorithm that adapts to changes and partial data, accommodates constant additions and produces predictions that are easy to interpret, then leading fraud detection results can coexist with transparency and a clear understanding of machine-provided risk assessments. Learn more about RSA’s use of machine learning for fraud detection here.

The content in this blog was contributed by Maya Herskovic, Senior Product Manager for RSA Adaptive Authentication.

The post Demystifying the Black Box of Machine Learning appeared first on Speaking of Security - The RSA Blog.

Many organizations struggle to staff and maintain security operations teams due to a serious shortage of skilled security analysts. The struggle isn’t just about filling open roles; it is equally hard to make the resources already in house productive enough that the alert that matters doesn’t go unnoticed.

New and existing security personnel alike can’t keep up with the exploding number of alerts, and they struggle to correlate information generated by disparate tools to understand the full scope of an attack. As a result, time is wasted on manual correlation and analysis, and the more experienced analysts on the team have less time to focus on investigating and responding to the advanced incidents that put the organization most at risk.

In fact, 93% of security operations center (SOC) managers are unable to triage all potential threats, and they are unable to sufficiently investigate 25% of their security alerts.[i] It is no surprise that we continue to see breaches and their damaging business impact rise year over year. If we don’t enable the security analysts protecting our organizations with the right technology, we will never get out of this arms race.

With the massive expansion of the attack surface, the shortage of security teams, and the exponential increase in threats, security technology must enable our human resources to work more efficiently and effectively. Technology must become a force multiplier, so that no matter how many people are on the security operations team – one part-time security analyst or 20 FTEs in a follow-the-sun model – they are empowered to find the threats that matter most, before those threats damage the organization they are defending.

So, how can organizations dramatically increase the productivity of the people they have? How can they deploy technology to help turn junior analysts into senior analysts, and senior analysts into true “threat hunters”? Making any analyst – from novice to hunter – more impactful and efficient at their jobs is imperative to help close the human skills gap.

Organizations can achieve this through the intelligent application of force multipliers – strategies that make analysts more effective and efficient.

  1. Automate the threat detection process with advanced analytics, comprehensive threat intelligence, and optimized incident workflows. This ensures that security analysts focus on the real threats lurking in the sea of an organization’s data, and respond swiftly and efficiently.
  2. Broaden the visibility across an organization’s IT infrastructure to include all meaningful sources, including logs, packets, netflow, and endpoints. This allows analysts to correlate multiple indicators of compromise (IOCs) to view the full scope of an attack, and to reconstitute full sessions to see what really took place (see the sketch after this list).
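As a minimal sketch of that correlation idea, the event records below are invented for illustration and do not reflect the RSA NetWitness data model.

```python
# Group indicators from disparate sources by host; hosts with IOCs from
# multiple sources are escalated as a single, fuller-scoped incident.

from collections import defaultdict

events = [
    {"source": "log", "host": "ws-042", "time": "09:01", "ioc": "failed-login spike"},
    {"source": "endpoint", "host": "ws-042", "time": "09:02", "ioc": "unsigned binary executed"},
    {"source": "netflow", "host": "ws-042", "time": "09:03", "ioc": "beacon to 203.0.113.9"},
    {"source": "log", "host": "ws-077", "time": "10:15", "ioc": "password reset"},
]

by_host = defaultdict(list)
for event in events:
    by_host[event["host"]].append(event)

for host, host_events in by_host.items():
    if len({e["source"] for e in host_events}) >= 2:  # corroborated across sources
        timeline = sorted(host_events, key=lambda e: e["time"])
        print(f"ESCALATE {host}:", [(e["time"], e["ioc"]) for e in timeline])
```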

At RSA, we’ve developed an integrated tool set designed to make security analysts more efficient and effective, with much faster incident response. It’s like multiplying existing security staff.

To learn more, visit: RSA NetWitness® Suite

[i] Source: “Information Security Strategies in the Age of Zero-Day Threats,” Gatepoint Research PulseReport commissioned by RSA, April 2017

The post Skills Shortage: The Intelligent Application of Force Multipliers appeared first on Speaking of Security - The RSA Blog.

Standing up a complete enterprise Network Operations Center (NOC) in two days is no small feat, but doing so for one of the biggest security conferences – Black Hat 2017 – is truly daunting. And it’s not just setup; it’s also running the NOC and giving tours. Providing unified log management, network capture and dashboarding for the many tours and media events is an involved process that puts analysts’ skills to the test. Creativity is required … appliances but no rack? No problem! Moving carts work just fine in a pinch.

One of the most critical aspects of the NOC analysts’ role is the ability to see across and into the network. The RSA NetWitness® Suite is perfectly suited to providing the combined visibility of network packet capture with centralized logging for switches, firewalls, wireless controllers, RSA SecurID® Access and wireless management, as well as malware analysis.

Working with new workflows for log management and an updated version of our ESI log parsing tool enabled custom parsers (Figure 1) to be quickly developed and deployed to accommodate subtleties in the hardware, giving the NOC staff complete visibility into the Black Hat 2017 conference network traffic (Figure 2).

Figure 1. Custom parsing

Figure 2. Network traffic event summary

There is no inline decryption at Black Hat, resulting in limited visibility into SSL traffic. The task then becomes: what other metadata do we have on a session to make investigation easier? Do we have packet data? Odd certificates, threat data or traffic patterns? Do we have logs from the Palo Alto Networks firewalls, or any indications from RGNet’s controllers, that anything unusual is occurring? Do we know anything about the source or destination subnet? Is it a classroom, public Wi-Fi, or management network? The analysts leverage as many potential indicators as possible to gain a complete picture of an event before making a determination. It sounds like any other day in an enterprise NOC, except this one is stood up and torn down inside seven days.

Working in a NOC where timelines are crunched and operational problems get addressed in real time challenges even the best analysts. Need a mapping of classroom networks to assigned CIDR blocks? Feed creation time. Need to map VLANs to classroom course names? Custom log parser and feed time. New code from another vendor that updated the logging format? ESI Tool time. All fun challenges, with the end goal of providing as much detail to the Black Hat NOC management as possible to ensure a secure and stable network for all attendees.
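For a flavor of what “feed creation time” can look like, here is a minimal sketch of building a CIDR-to-classroom lookup feed. The CIDR blocks, course names, and CSV layout are invented for the example, not the actual Black Hat network plan.

```python
# Build a lookup feed mapping classroom CIDR blocks to course names so
# network events can be enriched during investigation. Values are invented.

import csv
import ipaddress

CLASSROOMS = {
    "10.101.0.0/24": "Advanced Malware Analysis",
    "10.102.0.0/24": "Wi-Fi Exploitation",
}

# Write the feed as CSV for import into the analytics platform
with open("classroom_feed.csv", "w", newline="") as feed:
    writer = csv.writer(feed)
    writer.writerow(["cidr", "course"])
    writer.writerows(CLASSROOMS.items())

def course_for(ip):
    """Return the classroom a source IP belongs to, if any."""
    addr = ipaddress.ip_address(ip)
    for cidr, course in CLASSROOMS.items():
        if addr in ipaddress.ip_network(cidr):
            return course
    return "unassigned"

print(course_for("10.101.0.37"))  # Advanced Malware Analysis
```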

The post Enterprise Network Security at the Black Hat 2017 NOC appeared first on Speaking of Security - The RSA Blog.

In our first post on the seven steps to building a GRC-based risk management framework for information, we talked about step 1: identifying information that is important enough to warrant protection. Once you’ve identified information important enough to be protected, within its business context, you can move on to determining whether you actually have any of the information, where it lives in your organization and among your third parties, and just how much of it there is in each place it resides.

You may wonder what exactly the difference is between identifying information to be protected and identifying whether you have it, where and in what amounts. Wouldn’t it be more efficient to just do all of that at once? Actually, the separate effort is necessary to reach the overriding goal of protecting important information, because before you can identify the information you need to protect in your organization, you have to know what you’re looking for. Only after you’ve defined what you’re looking for, in the first step, can you establish whether your organization has any of that information, where it’s processed and stored, and how much there is—so you can appropriately direct your efforts at protecting the information to manage risk.

Another reason you need to methodically identify the type of information and then its location and amount is that it’s entirely possible that establishing the precise location of the information will be the only way to ensure you’ve identified all of it. You also need to consider that at a later point in the methodology you’re going to be assessing and evaluating the risk associated with this information, as well as the treatments that can be applied to help manage the risk. You can’t do any of that effectively unless you have a clear understanding of how the information is currently being handled.

RSA has published an in-depth paper on the methodology for building a GRC-based risk management framework for information; in it, we provide detailed practical guidance for identifying, locating and quantifying the information you need to protect. After all, it’s one thing to tell you that you have to find the information; it’s another to show you how to do that—and that’s what the paper does at length. It describes:

  • Identifying and documenting the business processes that involve handling important information
  • Documenting how IT supports those business processes
  • Documenting the third-party relationships that support the processes (a sketch of the kind of record this produces follows this list)
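To give a flavor of the output of this step, here is a minimal sketch of one way the resulting documentation could be captured as structured records. All field names and values are illustrative assumptions, not the paper’s prescribed format.

```python
# One record per place an important information type resides, tying data
# type, business process, system, owner, and volume together. Illustrative.

from dataclasses import dataclass

@dataclass
class InformationLocation:
    info_type: str         # category identified in Step 1
    business_process: str  # business process that handles the information
    system: str            # IT asset or third party where it resides
    owner: str             # responsible party for later questionnaires
    record_count: int      # how much of it there is

inventory = [
    InformationLocation("regulated customer data", "order fulfillment",
                        "crm_db (on-premises)", "app-owner@example.com", 250_000),
    InformationLocation("intellectual property", "product design",
                        "third-party CAD service", "vendor-mgr@example.com", 1_200),
]

total = sum(r.record_count for r in inventory
            if r.info_type == "regulated customer data")
print(total)  # 250000
```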

Understanding the business processes surrounding the use of information, including the relationships that develop around those processes, is essential to understanding business context, which—as we touched on in the first post in this series—is what tells us which information is most important to the organization and therefore most in need of protection. Download the paper to learn more about this important step in creating a GRC-based framework for managing information-related risk in your organization.

The post 7 STEPS TO A GRC RISK MANAGEMENT FRAMEWORK—2: LOCATE DATA appeared first on Speaking of Security - The RSA Blog.

In 1860, Belgian inventor Jean Joseph Etienne Lenoir created a gas-fired internal combustion engine; it was the first internal combustion engine to be produced in large numbers. The design wasn’t perfect by any means, but it was a large step forward, and countless engineers have continued to iterate on the concept to this day. One such engineer was Alfred Büchi, who focused his efforts on improving the power and efficiency of the engine. In 1905, this Swiss engineer and inventor was granted patent #204630 by the German patent office for a “highly supercharged compound engine,” or what would later be known as a turbocharger. The idea was simple enough: compress more air into the cylinder, thereby increasing the efficiency and output of the engine. Interestingly, it was another twenty years before the technology was available for him to put this idea into action.

Much like that first internal combustion engine, the SIEM is in need of a turbo boost. While the first log-based SIEMs came to market roughly twenty years ago, in the mid-’90s, a focus on logs remains a fundamental constant even today. Logs are important – no doubt – but, unfortunately, this log-centric approach tends to ignore other technologies and data sources that can act as force multipliers to add powerful business context, provide important data correlation, and help reduce inefficiencies.

So, how can we take a page out of Büchi’s book and turbocharge our SIEMs? One way is by leveraging endpoint telemetry. Endpoints are both the first and most vulnerable line of defense for an organization and the proverbial “last mile” of an incident investigation. Thus, it’s vitally important for security teams to understand how a threat attacked an endpoint, what was running on that endpoint at the moment of attack, and what happened after the attack.

In an attempt to tackle this problem, many organizations implement different types of endpoint security – from NGAV and vulnerability management to endpoint detection and response. While this sounds just fine on paper, the outcome is often less than optimal, because the approach poses inherent challenges for modern security operations. First, organizations are unable to get the right data and depth of visibility from their endpoint security to help them effectively detect, understand, and respond to endpoint threats. Second, when they do get some endpoint visibility, the data is housed in a disparate platform that isn’t fully integrated into their SIEM for proper data correlation.

Like Büchi, many security professionals are recognizing these challenges and are looking at ways to build ONTO (i.e., integrate with) their SIEM “engine.” They have recognized that they need expanded visibility to see threats wherever they reside in a modern IT infrastructure – especially on endpoints. But equally important is the capability to seamlessly correlate all the data points that comprise an incident (from logs to NetFlow to endpoint data) in order to more accurately identify the real threats to an organization and respond more rapidly. Additionally, security and risk professionals are looking to vendors who can provide real, meaningful integrations between endpoint security and SIEM, whether through tight-knit third-party partnerships or first-party, complementary products. The desired end state reduces manual processes and elevates the highest-risk incidents to the forefront from an ocean of alerts. Ideally, for example, a security analyst could pinpoint exactly when an endpoint was compromised, reconstruct the phishing email that delivered the payload, understand the effect the attack had on the endpoint, determine where the threat spread in the organization … and then contain the compromised endpoints and take action to quarantine the entire threat. How’s that for turbocharged output?
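To sketch what “elevating the highest-risk incidents from an ocean of alerts” can mean in practice, here is a toy prioritization scheme. The weights, fields, and example alerts are illustrative assumptions, not RSA NetWitness internals.

```python
# Rank alerts by combining severity with source confidence and asset
# criticality; endpoint-corroborated alerts on critical assets rise first.

SOURCE_WEIGHT = {"endpoint": 1.5, "netflow": 1.2, "log": 1.0}
ASSET_CRITICALITY = {"domain_controller": 5, "laptop": 2}

alerts = [
    {"id": 1, "source": "log", "asset": "laptop", "severity": 3},
    {"id": 2, "source": "endpoint", "asset": "domain_controller", "severity": 4},
    {"id": 3, "source": "netflow", "asset": "laptop", "severity": 5},
]

def risk_score(alert):
    """Severity scaled by source confidence and the value of the asset."""
    return (alert["severity"]
            * SOURCE_WEIGHT[alert["source"]]
            * ASSET_CRITICALITY[alert["asset"]])

for alert in sorted(alerts, key=risk_score, reverse=True):
    print(f"alert {alert['id']}: score {risk_score(alert):.1f}")
# alert 2: score 30.0
# alert 3: score 12.0
# alert 1: score 6.0
```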

Security teams have an extremely difficult job in protecting their organizations. Ultimately, if, together, we can find ways to interweave business context and risk with advanced cybersecurity capabilities (i.e., SIEM, endpoint, cloud) into one finely tuned engine, that job will be easier and less taxing. Additionally, this turbocharged security engine will better enable the entire organization – from the CEO to the SOC – to make stronger decisions to protect themselves from threats, minimize attacker dwell time and mitigate negative business consequences.

If your organization’s security operations are in need of a turbo boost, I encourage you to learn more about how our RSA NetWitness® Suite can SEE more and DO more.

The post Turbocharge your Threat Detection and Response with Endpoint Data appeared first on Speaking of Security - The RSA Blog.
