Follow HP Security Voltage | Security, Encryption, Cry.. on Feedspot



Deputy Attorney General Rod Rosenstein touched on encryption when speaking at the U.S. Naval Academy recently:

Encryption is a foundational element of data security and authentication … But the advent of “warrant-proof” encryption is a serious problem … Our society has never had a system where evidence of criminal wrongdoing was totally impervious to detection, especially when officers obtain a court-authorized warrant.

While the first sentence is hardly controversial, the last is problematic, to put it mildly. First, our society has always had myriad ways to hide secrets: simply not sharing them; whispering in private; and, yes, encryption—from Caesar ciphers to one-time pads. All the warrants in the world cannot force someone to share information they do not wish to share. And it’s fairly well accepted that torture, while often eliciting information, cannot reliably elicit accurate information. In other words, when tortured, people still lie. So even ignoring the legal issues surrounding “enhanced interrogation”, secrets can endure.
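To make that point concrete, here is a toy Python sketch (my illustration, not part of the original post) of the Caesar cipher mentioned above. Even this trivially breakable scheme shows how encoding has always let people keep information from anyone who lacks the key:

```python
def caesar(text: str, shift: int) -> str:
    """Shift each letter by a fixed amount; everything else passes through."""
    result = []
    for ch in text:
        if ch.isalpha():
            base = ord('A') if ch.isupper() else ord('a')
            result.append(chr((ord(ch) - base + shift) % 26 + base))
        else:
            result.append(ch)  # leave spaces and punctuation alone
    return "".join(result)

secret = caesar("attack at dawn", 3)
print(secret)               # "dwwdfn dw gdzq"
print(caesar(secret, -3))   # shifting back recovers the message
```

A warrant can compel someone to hand over the scrambled text, but without the shift value (or, for modern ciphers, the key), the text alone reveals nothing.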

It’s also worth noting that it’s not clear that encryption is really causing law enforcement that many problems. Sure, we all heard about the San Bernardino iPhone, but a quick look at the U.S. Courts Annual Wiretap Reports shows that the number of wiretaps is not changing much, year over year, and that the number foiled by encryption is minuscule and also quite stable.

Rosenstein went on to suggest that the solution to this “going dark”, to use his phrase, includes some sort of centrally controlled key management. This of course opens up a whole new set of problems, starting with Quis custodiet ipsos custodes?

Of course law enforcement officers don’t like encryption; they don’t like anything that makes their jobs harder. This includes ensuring probable cause, getting warrants, honoring suspects’ Miranda rights, and doing paperwork. This doesn’t make them evil; to the contrary, it shows that they are responsible—doing their best to get the job done. But “it makes the job harder” is no reason to waste time and effort trying to achieve the fundamentally impossible.

The problem is that encryption is mathematics, which is knowledge. All the laws in the world cannot stamp out knowledge: that has been proven repeatedly. So even if the U.S. did make a law saying that strong encryption was illegal—which would also be somewhat ironic, given that the most commonly used strong encryption is the U.S. government-sanctioned Advanced Encryption Standard (AES)—programmers from other countries would continue to produce applications employing strong encryption.

Rosenstein concluded with:

There is no constitutional right to sell warrant-proof encryption.

Constitutional, no. Natural, yes.

About the Author
Phil Smith III is a distinguished technologist and Senior Architect & Product Manager, Mainframe & Enterprise, at Micro Focus, formerly HPE Software. He is the author of the popular blog series, Cryptography for Mere Mortals. Learn more about our data encryption technologies.

The post Matter of fact, it’s all dark appeared first on Voltage.


As previously discussed in my blog post, Do Health Care Providers Need Your SSN?, your PII (Personally Identifiable Information—please, never “PII data”, which is redundant) can be monetized by evildoers. Given sufficient data and effort, identity theft fraudsters can use your health insurance to fraudulently obtain treatment, exploit your credit rating to take out loans or credit cards in your name, and even conceivably talk their way into your bank account. The Powers That Be recognize this, and the U.S. Senate recently discussed replacing the Social Security Number (SSN) with something higher-tech, presumably some sort of electronic identity using multi-factor authentication.

While initially appealing, such an approach has its own risks, particularly for those trailblazers who will be the first to deal with the inevitable theft of their new credentials: how difficult will it be to convince officials and businesses that they really, really are victims despite the new, “unbeatable” system?

Phased rollout will also be difficult, with two systems necessarily coexisting for some period. We will likely see delays forced by cost considerations, as software and possibly even hardware is upgraded to support the new system. Recall the issues with EMV, a global standard for cards equipped with computer chips and the technology used to authenticate chip-card transactions. One of the reasons EMV took so long to gain U.S. adoption was the cost of replacing billions of credit cards (although banks could have simply started issuing them years ago as part of normal card replacement, and thus avoided the self-inflicted mass rollout they recently endured).

Other countries are ahead of the U.S. in instituting such measures for real or de facto national IDs. Estonia, for example, has a chip-based national ID card that uses digital certificates. Unfortunately, a security flaw was recently discovered in the technology used, and thus many Estonians must update their personal certificates. Of course, nobody planned for such a massive update, and the online service has been crashing, leaving folks in limbo.

None of this should be used to argue against improving on the humble nine-digit SSN. Just as EMV does not magically prevent credit card fraud, an SSN replacement will not abolish identity theft, but it should help. Meanwhile, if you are a victim of identity theft, you actually can get a new SSN. The linked page is forthright about the fact that your old SSN will persist in various systems, and you can expect a certain amount of hassle updating it with banks, credit agencies, et al. But it is possible. However, wouldn’t it be even better if the organizations that hold your SSN protected it from theft in the first place?

About the Author
Phil Smith III is a distinguished technologist and Senior Architect & Product Manager, Mainframe & Enterprise, at Micro Focus, formerly HPE Software. He is the author of the popular blog series, Cryptography for Mere Mortals.

Learn how Voltage can protect social security numbers and other sensitive information with SecureData.

The post Replacing SSNs to combat Identity Theft appeared first on Voltage.


It was recently discovered that more than one business was surreptitiously using the computing power of visitors to their web sites to mine bitcoins. Maybe they did this as an alternative to advertising to cover their costs. Maybe they did this for other reasons. But this should not be too surprising. The cost of electric power is the single biggest cost in solving hard cryptographic problems these days, and that is true whether you are trying to crack a key or just to mine bitcoins. And that means that there is a strong incentive to get someone else to pay for that power. But exactly how much power does it take to do cryptographic calculations?

Back in 2012, at DARPA’s “The Impending End of RSA” workshop, Dan Bernstein gave a talk in which he described how much electric power it would take to crack various RSA keys. He assumed that an attacker would spend a fairly modest amount on hardware, say just a few million dollars or so, and would then use that hardware to crack a key, with the goal being to crack a key within one year.

Dan claimed (but I have never checked his calculations) that for a 1,024-bit RSA key, it would take about the entire output of a typical power plant to do this. He also claimed that to do this with a 2,048-bit RSA key, it would take roughly the amount of energy that the Earth receives from the sun in that year. He then suggested that DARPA really should have called their event “The Impending End of RSA-1,024”, because the energy requirements for cracking an RSA-2,048 key put doing it pretty much out of the question. Dan’s scenario for cracking a 1,024-bit key is right on the outer edges of plausibility. Doing it for a 2,048-bit key is well into the realm of science fiction.

But the idea of measuring the cost of cryptographic attacks in terms of energy instead of other factors like time or money is an interesting one. A typical power plant might put out about 1 gigawatt, which ends up being about 30 petaJoules (3 x 10^16 J) over a year if it is operated at full capacity. The massive Three Gorges Dam in China has a maximum capacity of about 22.5 gigawatts, or about 675 petaJoules (6.75 x 10^17 J) if it is operated at full capacity. The Itaipu Dam on the border of Brazil and Paraguay has a maximum capacity of about 14 gigawatts, or about 420 petaJoules (4.2 x 10^17 J) if it is operated at full capacity, but has actually produced more electric energy in a year than the larger Three Gorges Dam – 370 petaJoules versus 340 petaJoules.
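For readers who like to check the arithmetic, here is a short sketch of my own (not from the original post) converting generating capacity into yearly energy:

```python
SECONDS_PER_YEAR = 365 * 24 * 3600   # about 3.15e7 seconds
GW = 1e9                             # watts in a gigawatt
PJ = 1e15                            # joules in a petajoule

def gw_year_to_pj(gigawatts: float) -> float:
    """Energy, in petajoules, of running at full capacity for one year."""
    return gigawatts * GW * SECONDS_PER_YEAR / PJ

print(round(gw_year_to_pj(1)))      # typical plant: ~32 PJ (the post rounds to 30)
print(round(gw_year_to_pj(22.5)))   # Three Gorges: ~710 PJ (post: 675)
print(round(gw_year_to_pj(14)))     # Itaipu: ~442 PJ (post: 420)
```

The exact figure for a gigawatt-year is about 31.5 petajoules; the post rounds it to 30, which is why its dam totals come out a bit lower than the exact calculation.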

Those are unwieldy numbers to deal with. Fortunately, there is a handy yardstick to use for measuring energies that are roughly that big, and that is the megaton (MT).

A megaton is how much energy a million tons of TNT releases when it explodes, and is equal to about 4 petaJoules (4 x 10^15 J). The energy outputs of the Itaipu Dam and the Three Gorges Dam come to 92.5 MT and 85 MT respectively.

The very first nuclear explosion, the Trinity test, had a yield of about 20 kilotons (KT), or 0.02 MT. The W87 warhead, ten of which were carried by each American Peacekeeper missile, had a yield of about 300 KT, or 0.3 MT. The American B83, another typical Cold War strategic nuclear weapon, had a yield of about 1.2 megatons. The biggest nuclear bomb ever, the USSR’s Tsar Bomba device, had a yield of about 50 MT. By comparison, cracking an RSA-1,024 key as Dan proposed would use about 7.5 megatons of energy, or more energy than several Cold War era strategic nuclear weapons.

That is a lot of energy.

Is the amount of energy needed to mine bitcoins more than that or less than that?

It looks like bitcoin miners use about 18 terawatt-hours of energy, or about 65 petaJoules (6.5 x 10^16 J), per year mining bitcoins. That is roughly the energy from two power plants. Or it is roughly enough energy to crack two RSA-1,024 keys. Or it is about 16 megatons of energy. Or it is about the energy released by the nuclear weapons from five Peacekeeper missiles. Or it is about the energy of a couple of young programmers at Silicon Valley start-ups.
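Those comparisons are easy to verify. A quick sketch (mine, using the post’s round numbers: one megaton is about 4 PJ, one power plant about 30 PJ per year, and a Peacekeeper missile carries ten 0.3 MT warheads):

```python
PJ_PER_MT = 4.0     # one megaton of TNT, in petajoules (rounded)
PJ_PER_TWH = 3.6    # one terawatt-hour is exactly 3.6e15 J

btc_pj = 18 * PJ_PER_TWH                          # yearly bitcoin mining energy
print(round(btc_pj))                              # ~65 PJ
print(round(btc_pj / 30, 1))                      # ~2.2 typical power plants
print(round(btc_pj / PJ_PER_MT, 1))               # ~16.2 MT
print(round(btc_pj / (10 * 0.3 * PJ_PER_MT), 1))  # ~5.4 Peacekeeper missile loads
print(30 / PJ_PER_MT)                             # 7.5 MT: Dan's RSA-1,024 crack
```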

No matter how you measure it, that is still a lot of energy.

About the Author
Luther Martin, Micro Focus Distinguished Technologist, is a frequent contributor to articles and blogs. Recent articles include The Security of Cryptography and the Wisdom of Crowds, in the ISSA Journal, The dangers of implementing blockchain technology in Information Age, as well as Are you accidentally paying for BitCoins? and The Real Value of Bitcoin in the voltage.com blog.

The post Now That’s Cryptographic Computing Power appeared first on Voltage.


The General Data Protection Regulation (GDPR) that takes effect next May will require businesses to protect the privacy of any personal data that they manage. There are many ways to do this, but the GDPR strongly encourages the use of pseudonymization, which, depending on how the business currently manages the personal data it acquires, may or may not be easy to accomplish. In particular, the implementation of pseudonymization is likely to be harder in legacy IT environments. But there are ways to do it, even in these challenging cases.

Before looking at the ways to do this, it is useful to address exactly what pseudonymization is. One way to understand it is as though it adds shades of gray between what used to be just black and white. Consider that what we know about the identity of a particular person can range from enough to uniquely identify them to absolutely nothing about them. If we know nothing at all about a person’s identity, they have perfect anonymity but absolutely no accountability. If we know everything about their identity, then we have perfect accountability but absolutely no anonymity. Pseudonymity is the range of possibilities between these two cases (including both of the extreme ones), so it may be useful to think of it as implementing a trade-off between anonymity and accountability. Many examples of the available data about a particular person fall in between the two extremes, even if the data is strongly protected.

While it may be generally considered to be non-identifying information, the very fact that a person is a citizen of France, for example, represents significant identity information because only about 9 percent of Europeans are French. The set of potential identities has been reduced by approximately 91 percent. Similarly, the simple fact that a person has an account at a particular bank is identifying information. Of the roughly 750 million people in Europe, only a small fraction are likely to have an account with any particular bank. Perfect anonymity is very uncommon, perhaps even impossible. Most cases of what we think of as being anonymity are more appropriately considered to be a form of pseudonymity, and many forms of anonymization of personal information are more appropriately considered to be forms of pseudonymization.
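One common way to make this concrete is to measure how much each fact shrinks the anonymity set, in bits. The sketch below is my own illustration using the post’s rough population figure; the 5-million-customer bank is a hypothetical number:

```python
import math

population = 750_000_000    # the post's rough figure
french = 0.09 * population  # about 9 percent are French

def bits_revealed(before: float, after: float) -> float:
    """How much a fact shrinks the anonymity set, measured in bits."""
    return math.log2(before / after)

print(f"'is French' reveals about {bits_revealed(population, french):.1f} bits")
# A bank with, say, 5 million customers (hypothetical) reveals more:
print(f"'banks at bank X' reveals about {bits_revealed(population, 5_000_000):.1f} bits")
```

Each additional fact adds its bits, which is why a handful of individually “non-identifying” attributes can pinpoint a single person.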

To keep business information useful, the ability to reverse any form of pseudonymization used to protect personal information is often necessary. If we accept this limitation, then there are essentially two ways to implement useful pseudonymization: encryption and tokenization. Encryption provides a way to compute the transformation from unprotected personal information to a pseudonymized version of it while tokenization uses a data store to record the transformation instead of computing it directly. The two techniques are more similar than some technology vendors would like you to think. Tokenization is equivalent to a form of encryption known as the “one-time pad” that was invented by Frank Miller in 1882. Because the two are so similar, we will use the single term “encryption” to refer to both.
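As an illustration of the difference, here is a toy token vault in Python (my sketch, not any vendor’s implementation): tokenization stores the mapping in a data store rather than computing it.

```python
import secrets

class ToyTokenVault:
    """Toy tokenization (illustration only): the mapping is stored, not computed."""
    def __init__(self):
        self._forward = {}   # real value -> token
        self._reverse = {}   # token -> real value

    def tokenize(self, value: str) -> str:
        if value not in self._forward:
            # Pick a random same-length digit string; retry on the (rare) collision.
            token = "".join(secrets.choice("0123456789") for _ in range(len(value)))
            while token in self._reverse:
                token = "".join(secrets.choice("0123456789") for _ in range(len(value)))
            self._forward[value] = token
            self._reverse[token] = value
        return self._forward[value]

    def detokenize(self, token: str) -> str:
        return self._reverse[token]

vault = ToyTokenVault()
token = vault.tokenize("5610591081018250")
print(len(token), token.isdigit())                    # same format: 16 True
print(vault.detokenize(token) == "5610591081018250")  # reversible: True
```

Encryption would instead derive the token from the value and a key, with no vault to store or protect; that is the essential trade-off between the two approaches.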

One problem with encryption is that it can change the format of data. In IT environments that have many older components, this can cause lots of unforeseen problems. Some systems expect a credit card number to be 16 digits, for example, and will fail in undesirable ways if they try to process a credit card number that is not a 16-digit value. Suppose that you encrypt the value “5610591081018250” (one of the public values that PayPal uses for testing credit card processing). If you encrypt this value using many forms of encryption, you get an output that looks like random zeroes and ones. In general, this output will not even correspond to printable characters, so we would have to use some additional form of encoding (such as Base64 encoding) to represent the encrypted value which might end up looking something like “DuMRdTVZdd2J05D9ns6WWg==” for example.
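To see the format change without depending on any particular crypto library, the sketch below (mine) simulates the AES ciphertext with 16 random bytes—block-cipher output is effectively indistinguishable from random—and then Base64-encodes it:

```python
import base64
import secrets

card = "5610591081018250"              # 16 decimal digits
ciphertext = secrets.token_bytes(16)   # stand-in for an AES-encrypted block
encoded = base64.b64encode(ciphertext).decode()

print(len(card), card.isdigit())        # 16 True
print(len(encoded), encoded.isdigit())  # 24 False: wrong length, wrong alphabet
```

A legacy system expecting exactly 16 digits will choke on a 24-character string containing letters, symbols, and padding.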

Note how the encryption and subsequent Base64 encoding have changed the format of the input data, which was 16 decimal digits. The output is no longer just decimal digits. It is also longer than 16 characters. As a consequence, many legacy applications will fail very ungracefully if they need to process such an output as a credit card number, because the source code and data structures they use were designed to expect only 16 decimal digits. And even if it is feasible to do so, it can be a very costly and time-consuming exercise to change those applications and data stores to accept the encrypted values. The good news, however, is that it is possible to encrypt data in a way that keeps the format of the data unchanged.

This approach is called “format-preserving encryption” (FPE), and it has been around since 1981, when a US government guide (FIPS 74) described a use of the venerable DES encryption algorithm to encrypt a digit at a time in a way that kept the encrypted value as a digit. This early version of FPE would not be considered a good approach with today’s understanding of what makes encryption secure, but it shows that there has been interest in the general problem of preserving the format of data through encryption for many years.
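In the spirit of that 1981 digit-at-a-time idea, here is a deliberately insecure toy sketch (my illustration—definitely not NIST’s FF1 nor any product’s algorithm) showing how digits can stay digits under encryption:

```python
import hashlib

def toy_fpe(card: str, key: bytes, decrypt: bool = False) -> str:
    """Toy digit-wise 'FPE' (illustration only; NOT cryptographically secure)."""
    out = []
    for i, ch in enumerate(card):
        # Derive a per-position keystream digit from the key.
        ks = hashlib.sha256(key + i.to_bytes(4, "big")).digest()[0] % 10
        d = (int(ch) - ks) % 10 if decrypt else (int(ch) + ks) % 10
        out.append(str(d))
    return "".join(out)

key = b"example key"                    # hypothetical key, for illustration
ct = toy_fpe("5610591081018250", key)
print(len(ct) == 16 and ct.isdigit())   # True: format preserved, still 16 digits
print(toy_fpe(ct, key, decrypt=True))   # decrypting recovers the original number
```

Real FPE modes such as NIST’s FF1 achieve the same format-preservation property with actual security guarantees; the point here is only that ciphertext need not change the data’s shape.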

FPE is an elegant solution to the problem caused by encryption changing the format of data. It avoids the expensive problems of modifying legacy systems to give them the ability to process the encrypted data. Instead of adapting the environment to the data, this approach adapts the data to the environment. And by doing this, you dramatically reduce the need to make any modifications to the legacy environment. Using FPE makes it possible to protect your customers’ data using pseudonymization while still keeping costs as low as possible.

So when you investigate a way to pseudonymize data to ensure that you are complying with the GDPR, FPE is a technology worth considering. It has been vetted and approved by the U.S. National Institute of Standards and Technology (NIST), and it is possible to use this approach in ways that comply with standards like FIPS 140-2 or ISO/IEC 19790, which means that you should have an easy job convincing your auditors that this approach is sound.

Complying with the GDPR can be hard and expensive. Do not make it harder and more expensive than it needs to be. Protecting your customers’ personal data is good. Protecting it in a way that minimizes the changes required to your current operating environment is even better, and using FPE to implement pseudonymization will help you do just that.

About the Author
Luther Martin, Micro Focus Distinguished Technologist, is a frequent contributor to articles and blogs. Recent articles include The Security of Cryptography and the Wisdom of Crowds, in the ISSA Journal, The dangers of implementing blockchain technology in Information Age, and The Real Value of Bitcoin in the voltage.com blog.

This article originally appeared on the IT Portal Website.

The post GDPR compliance in legacy environments appeared first on Voltage.


The U.S. Social Security Number (SSN) was introduced in the 1930s as an identifier for the (then new) Social Security program, whose official name is actually the “Old-Age, Survivors, and Disability Insurance program” (OASDI). As you’ve no doubt read, the SSN was never intended to be a globally unique identifier (GUID), but has de facto sort of become one, even though it is not necessarily unique and certainly is not global.

SSNs are treated as GUIDs when it comes to credit-related actions, since the “big three-and-a-half” credit agencies (Equifax, Experian, TransUnion, and sometimes Innovis) all use the SSN as a primary identifier, as does Chex Systems, which provides similar services for deposit accounts (for writing checks, etc.). From their perspective, this makes sense: an SSN is a nice, short, invariable value, unlike names (Robert? Rob? Bob? Bobby? Bobbie? etc.).

As privacy threats have evolved, we have become more protective of our SSNs. Most of us can remember when we used to be told to list our SSNs on our checks, to make them easier to use in stores. We don’t do that anymore, nor do most folks write checks for retail purchases, so that usage quietly faded away.

Going back further, I recently came across a box of old papers that included some of my wife’s college papers from the 1970s. There at the top of each was her name and SSN! Universities switched to student ID numbers long ago, so this is also an obsolete usage, as are most other non-government uses outside of credit reporting.

However, one area still uses SSNs far more than it should: health care. Most health insurance companies long ago figured out that using them as GUIDs was a bad idea, both for security reasons and because they have “repeat customers”—folks who have their insurance, leave, and return: the insurance company does not want to intermingle old and new records! But both hospitals and doctors’ offices routinely ask for SSNs. Having spent far more time than I care to think about dealing with health care providers in recent years, it’s been my experience that hospitals are typically fine if you tell them “We do not share that”.

Some doctors, however, are obdurate—forcing a choice between providing the number or leaving. When pressed to justify their position, answers vary. Some of them relate to collecting money they are owed for service. This dates back to a 2007 regulation called the Red Flags Rule, which attempts to reduce identity theft by requiring creditors to collect SSNs and other PII to prove a borrower’s identity. Failure to comply can result in fines.

However, after protests from health care providers, who do not consider themselves creditors in the traditional sense, Congress exempted them from the Rule in 2010. Some doctors (or their office managers) still like the idea of having this information “just in case”; others don’t much care, but are using practice management software that requires an SSN to create a patient record. (And in case you’re tempted to avoid the problem by providing a false SSN, don’t do that: it’s a crime.)

Once you turn 65, your Medicare number currently contains your SSN, cleverly disguised by adding a trailing character after the last digit. Medicare says this is changing soon, which should be interesting. Since you don’t really have any choice about providing your Medicare number, it at least resolves the dilemma—albeit not happily.

The fundamental problem is that medical practices are not IT shops, and typically either do not have proper protection for sensitive data, or at a minimum cannot describe the protection convincingly. As such, it is entirely reasonable for a security-minded person to be chary of providing an SSN, even to a doctor you would trust with your life. Identity theft is no fun, and medical providers are particularly rich targets, because thieves can also monetize the information to enable health care fraud—folks pretending to be you so they can receive medical treatment and bill it to your insurance. While that’s unlikely to cause you direct problems (it’s typically pretty easy to prove that you did not, indeed, have a brain transplant 1,000 miles from home last week), it does require some effort to straighten out, and of course costs everyone indirectly.

About the Author
Phil Smith III is a distinguished technologist and Senior Architect & Product Manager, Mainframe & Enterprise, at Micro Focus, formerly HPE Software. He is the author of the popular blog series, Cryptography for Mere Mortals.

Learn how Voltage can protect social security numbers and other sensitive information with SecureData.

The post Do Health Care Providers Need Your SSN? appeared first on Voltage.


We hear many vendors and companies starting to talk about it. But is all “data-centric” security the same? Nope! Frankly, it means different things to different companies.

Data-centric security is an approach to security that emphasizes the security of the data itself rather than the security of networks, servers, or applications. True data-centric security also helps reduce risk by protecting the data, rather than just the device endpoints.

To be truly secure, companies need to become more data-centric in their attitude toward overall cybersecurity. Knowing where sensitive and high-value data comes from and is stored is key. Knowing which other endpoints have and need access to that information is also key. Protecting the data with a format-preserving encryption (FPE) method therefore helps ensure that data can be leveraged and used by applications with little to no change, while rendering it useless if breached, stolen, or hijacked.

The real reason that data-centric security is becoming popular is that it provides a way to extend the security perimeter to where it needs to be. Sensitive data is extremely difficult to keep control of. It’s carried outside the security perimeter on a routine basis by people who need to use it. Laptops are routinely lost or stolen. CDs containing sensitive data are lost in the mail, as are USB drives. So keeping sensitive data inside a protected perimeter is virtually impossible. People need access to sensitive data to do their jobs, and not letting it leave a protected network isn’t practical.

Moving to the cloud is also changing the need for data-centric security. One of the best ways to leverage the cost and efficiency benefits of the cloud and virtualization while keeping sensitive information secure is to protect the data using a security solution that delivers data-centric, file-level encryption that is portable across all computing platforms and operating systems and works within a private, public or hybrid cloud computing environment.

Bottom line: the big problem with protecting sensitive data isn’t that hackers get in, it’s that unprotected data gets out. Data-centric security has the potential to eliminate the problems that unprotected data getting out can cause. Without data-centric protection that secures your sensitive information throughout its entire lifecycle, you’re at risk.

Organizations paying attention are reaping huge security and information-sharing rewards, resulting in greater collaboration and confidence. Whether a company or agency is providing information to the public, collecting data from the public or sharing information with other departments or agencies, data-centric security gives companies the opportunity to know their data is protected and is being leveraged successfully and securely.

The post So what really is “data-centric” security all about? appeared first on Voltage.


I have a fun question for you: Which of these is the faster of the two cars? Take your time, think about it from different perspectives. I’ll wait… And let me give you a hint—it’s a trick question, so double-check your assumptions.

If you picked the T-Bucket hot rod on the left, you may be technically correct. However, if you believe the mid-1990s Toyota Previa minivan is faster, you might be, practically speaking, correct.

So let me explain with a quick story. About 15 years ago I was traveling on the German Autobahn somewhere rural outside Hamburg with my colleagues in the Previa. With no speed limit worries and needing to make good time, we took that minivan up to around 90-100 mph. Now, that said, it should be obvious the T-Bucket can exceed 110, 120, possibly 130 mph, given the amount of muscle you see up front. But let me ask you this—in which car would you feel safer?

For background, a 1994 Previa minivan has a 4-star crash test safety rating, with airbags and seatbelts included for the driver. The T-Bucket? Well, it’s an open-air, no-seatbelt, certainly no-airbag machine that is all power, with no concern for survivability in the event of an accident. So let me ask again: which do you feel safer in doing, say, 60 mph? Or 90 mph, the practical limit of the Previa?

This is all just an analogy for data security as an enabler for fast, agile, business. Let me explain…

We see more of our customers today wanting to leverage their customer data for meaningful insights on how to build better product loyalty or optimize business operations. They want to act more quickly, respond with more agility to market change, and move faster than the competition. You may see where I’m going here. Speed of business is not unlike getting where you need to go on the Autobahn to make your destination. But there is a risk of moving quickly without control.

How do businesses, much like cars, move with high performance at top speed, without the risk? When moving fast on the road, you feel more comfortable pushing the pedal down knowing that anti-lock brakes, seatbelts, airbags, laminated glass, rollbars, traction control, and so on will keep you relatively safe in the event of a crash or, better yet, help you avoid one. But what protects business data the same way? For our customers, Voltage SecureData data-centric encryption is that invisible airbag that travels with every bit of data at a field level. A data breach, much like a crash, may be inevitable, but with SecureData, applications can operate on sensitive information in its format-preserving encryption (FPE)-protected form, so a breach does not compromise the original data: while it is FPE-protected, the data is unusable to a thief.

It works like this: sensitive data is encrypted using FPE at the source when data is created. No matter which information highway it travels down, the protection remains persistent. Even if it goes from the source application to a new application, the data looks the same, no changes required to the app. FPE allows businesses to expose data to more risks—such as new applications and users—without worrying about exposing the original clear text. And when needed, when the environment is known to be safe, the FPE-protected data could be restored to its original state. But in most cases, applications such as data analytics can operate on the “proxy” (FPE-protected) data surrogates. Now, businesses can simply run their most sensitive data in a protected state perpetually and open up far more potential business value! Wouldn’t it be great to move at the speed of business faster than the competition at lower risk?

So, much like a fast-moving car, businesses want high performance from their data. We’ve evolved car safety to a level now where cars can move faster than ever, but with less risk of injury. With data breaches becoming ever-increasing in the news, do you have “airbags” on your data? If not, it’s time to determine how Voltage SecureData can unleash your data’s full potential without unnecessary risk exposure.

Contact us today to learn more about Data Security!

The post The need for speed – Your data unleashed! appeared first on Voltage.


With every big data breach, news reports highlight consumers swearing they’ll never use their credit cards again. Of course they’re soon back at it: for most folks, paying for everything using cash or check is tedious in today’s marketplaces.

In fact, there’s opposing pressure, toward a cashless society. Sweden has mostly achieved this already, with 80% of all retail transactions there made with credit or debit cards, and predictions of the country being “all-cashless” by 2030.

This seems less likely in the U.S., but that doesn’t stop it from being a discussion topic. A recent Wall Street Journal article broached the idea by asking, “Should We Move to a Mostly Cashless Society?”, although it largely focused on the use of large ($100) bills. Proponents note that crime depends heavily on cash, and claim that its elimination would thus reduce crime. Detractors argue that the non-criminal underclass also depends on cash, and that crime would continue, simply shifting to non-cash ways of moving money around.

Perhaps a more interesting question is how going cashless might be implemented, and the side-effects that might cause. Beyond traditional physical credit and debit cards, most consumer-level cash replacements—from Apple Pay and Android Pay to Venmo, Dwolla, iZettle, and even just PayPal in app form—are smartphone-based. It seems quite likely that our smartphones will become even more critical to everyday life as these tools proliferate.

Of course there’s a potential downside, even if you’re not a criminal or otherwise dependent on cash as your monetary instrument of choice. The United States has long resisted the lure of a national identity card, due to citizen concerns about freedom and privacy. This is so ingrained that it seems farfetched for Americans to accept the idea any time soon.

However, if smartphone apps become the most common basis for payments, your phone eventually becomes your de facto identity. This is already true for many businesses, whose apps are all but required.

And government (intelligence agencies and law enforcement) will explore using that identity to track “persons of interest,” which will increase pressure on courts to allow warrantless searches of at least phone metadata, and perhaps more.

So while going cashless may superficially be appealing, it may imply a tradeoff that some of us would rather not make!

About the Author
Phil Smith III is a distinguished technologist and Senior Architect & Product Manager, Mainframe & Enterprise, at Micro Focus, formerly HPE Software. He is the author of the popular blog series, Cryptography for Mere Mortals. Learn more about how Voltage secures payments data with SecureData for Payments.

The post Whither Cash? appeared first on Voltage.


If you’re interested in safely extracting value from your data to grow the business, you should see TechRadar™: Data Security and Privacy, Q4 2017 (Forrester Research, Inc., October 4, 2017). You can read all about it on Forbes.com, in “Top 10 Hot Data Security and Privacy Technologies”. Or if you have a Forrester subscription, download the report now!

Why pay attention? Data volumes are exploding as multiple trends converge on businesses: big data analytics for real-time decisions, the elasticity of cloud/hybrid IT, and the high bar for data privacy set by the GDPR (General Data Protection Regulation).

These trends converge on a central point: digital businesses no longer have walls, and Forrester in its report says, “security and privacy pros must take a data-centric approach to make certain that security travels with the data itself–not only to protect it from cyber-criminals but also to ensure that privacy policies remain in effect.”

Navigating vendor offerings can be confusing, but this Forrester “Road map”, as it’s aptly named, offers a great guide to 20 key traditional and emerging data security and privacy technologies that business leaders can use to deliver a holistic strategy.

Encryption is entering a golden age

A holistic strategy should enable extracting business value from data assets while reducing data breach risk and supporting privacy compliance. Voltage of Micro Focus is listed in seven technology categories in the Forrester report: Application-level encryption, Big Data encryption, Enterprise key management, Tokenization, Email encryption, File-level encryption, and Archiving.

The guidance from Voltage: don’t get lost in the marketplace; look for the roadmap to end-to-end, data-centric security and privacy protection. One place to start: follow the strong growth curve of big data encryption to a high-value position for your business.

For more information, contact us to start your Data Security and Privacy journey.

The post Your Roadmap for Data Security and Privacy appeared first on Voltage.


The Atalla Hardware Security Module (HSM) is the de facto industry standard, with many years of payments HSM leadership. This state-of-the-art appliance guards the most sensitive financial data and meets industry security and compliance requirements. So I am very pleased to announce the availability of our new release, the Atalla HSM AT1000 v8.0.

With this release, Atalla extends support for protecting end-to-end payment transactions with its next-generation HSM. The hardened FIPS 140-2 Level 3 appliance now supports the payment industry’s accelerating adoption of the Advanced Encryption Standard (AES) algorithm, with higher transaction throughput to drive today’s demanding applications.

What’s new in this release?

  • Support for new payment use cases using AES
    • The Advanced Encryption Standard is the next encryption standard for new payment use cases defined by the Payment Card Industry Security Standards Council (PCI SSC), which requires organizations to phase out older encryption technologies in the payment network. The inclusion of AES enables financial institutions to expand protection to meet today’s mobile payments/eWallet business demands. Apple Pay, Samsung Pay, Amazon Payments, and Google Wallet are used increasingly in everyday payment transactions. The Atalla HSM AT1000 now incorporates AES to enable protection for mobile and eWallet transactions. 
  • Highest performance HSM
    • As transactions increase in volume with expanded eCommerce adoption, there is a greater throughput requirement to support performance and achieve higher ROI with less hardware. This release greatly expands throughput to 10,000 Visa PIN Translations per second to offer superior price-performance over previous generations.
  • Integration with Voltage SecureData addresses PCI P2PE Requirements
    • Regulations continue to grow and evolve for the payments industry due to the risk of data breaches. P2PE (point-to-point encryption) v2 requires solution providers to use an HSM to protect payment data to achieve P2PE validation. With an integrated solution of the Voltage SecureData encryption software and the Atalla HSM, solution providers and merchants can address PCI P2PE requirements more easily. The integration of Voltage SecureData and the Atalla HSM offers a cohesive end-to-end data security approach for storing and managing keys. The integrated solution enables easier installation, configuration, policy enforcement, and connection between the solutions for quicker customer deployments that accelerate time-to-value.
  • Traditional IT-Managed device with expanded logging
    • As organizations streamline IT, the Atalla HSM AT1000 is now manageable by IT operation centers, maintaining proper segmentation of duties with checks and balances, preventing potentially rogue activities. The Atalla AT1000 segments duties by separating key generation from configuration. In addition, SNMP and syslog capabilities can be shared with an event management tool, providing efficient monitoring and insight to reduce data leak response time as the HSM is managed and monitored within existing IT security infrastructure.
  • New 1U field upgradable for higher performance
    • The Atalla HSM is now streamlined to a 1U form factor enabling customers to purchase a smaller unit with field-upgradeable performance to deliver more of the high scalability capabilities you’ve come to know from Atalla.

Availability
For those of you who haven’t yet discovered the Atalla HSM, or would like to license the products mentioned in this announcement, please contact us right away.

The post New Hardware Security Module to meet Payment Card Industry Data Security Standard and Interbank Network appeared first on Voltage.
