Troy Hunt's blog showcases many of the security issues he deals with first-hand. He is a Microsoft MVP and Pluralsight author whose credentials also include working at Pfizer. His posts focus on consumer-facing security and the everyday user's experience of it. Written in an approachable tone, this blog is a great one for the non-technical C-suite reader.
Well it's all quietened down here with Scott gone so it's back to business as usual, which means, well, it's not very quiet at all! I've been in Sydney this week talking at one of our big banks and as I say in this week's update, getting out there amongst companies dealing with their unique cyber challenges is always interesting:
In other news, Pwned Passwords is going nuts, there's some awesome cyber comments from The Daily Mail (yep, that's right), I'm doing a bunch of re-engineering work on HIBP, there's the ViewFines data breach and a brand new Pluralsight course on bug bounties. I'll still be home next week so will do it all again from here then.
Try publishing something to the internet - anything - and see how long it takes before something nasty is probing away at it. Brand new website, new domain and it's mere hours (if not minutes) before requests for wp-admin are in the logs. Yes, I know it's not a WordPress site but that doesn't matter, the bots don't care. But that's just indiscriminate scanning, nothing personal; how about deliberate and concerted attacks more specifically designed to get into your things? As the value of what you have increases, so do the attacks and there's absolutely nothing you can do about it. There's a lot you can do in terms of defences, but nothing you can do to stop randoms on the internet having a red hot go at breaking into your things.
This is why the discussion around bug bounties confounds me because every time I raise them as being a rather good thing for your security posture, I inevitably get a response along these lines:
But doesn't a bug bounty mean hackers will try and break into our things?
I hate to break it to you, but that's business as usual whether you have a bounty program or not, the only difference is going to be what they do if they successfully get in. That's oversimplifying things, of course, but to me that's always been a cornerstone of why bug bounties make so much sense: they change the ROI of bugs such that it incentivises people of all ethical positions to disclose them to the organisation involved rather than run amok with them. Plus, it draws people to the site and encourages them to seek out vulnerabilities, thus improving the overall security posture. Which brings me to this:
I'm sitting with Casey Ellis in a studio in San Francisco recording a Pluralsight course per the title of this blog post. Casey is the founder of Bugcrowd and they help companies ranging from MasterCard to NETGEAR to Western Union run managed bug bounty programs. Casey and I have been mates for about 5 years now, in fact I went back and checked my email and it was Jan 2013 when we first caught up over beers in Sydney and he shared his vision for Bugcrowd. That vision ultimately got him funded and led him to Silicon Valley. When we caught up to do this course, Bugcrowd had just received another $26M, and that's on top of the $23M that had already been invested in the company. Whilst this course isn't specifically about Bugcrowd, I wanted to share that background because I couldn't think of a better person in the world to have recorded this course with.
So, getting onto the course, I really wanted to tackle the barriers organisations typically see to implementing a bug bounty program; does it really put them at greater risk? How do you price bugs? How do you decide on the scope? When's the right time to run one? Understandably, people have many questions about running a bounty and I reckon we've done a great job of addressing them here. It's a 48 minute "Play by Play" course so it's just Casey and I sitting around chatting (along with visuals from his screen), so it's easily consumable material. This is also the first of 2 courses we recorded with "Bug Bounties for Researchers" still to go live. That one will focus on individuals who want to get involved in bug hunting so it comes at things from quite a different angle.
Incidentally, my favourite password manager 1Password has a $100k top reward on Bugcrowd. They're so confident in the security of their solution and the value of a successful exploit is so high that they'll reward you very handsomely if you manage to break into a 1Password vault. Given the value of my own things I put in their care, I'm pretty happy to see that!
We're on a beach! It's the day after 3 pretty intense days of NDC conference and the day before Scott heads back to the UK so beach was an easy decision. The conference went fantastically well and, in all honesty, was the most enjoyable workshop I think I've done out of ~50 of them these last few years. NDC will be back on the Gold Coast next year, plus of course it will be in Oslo in a few weeks' time then Sydney in September where we'll both do it all again.
This week, we talk a lot about EV certs. As I say in the video, neither of us have anything against commercial CAs or even EV certs per se, but the bullshit that surrounds them is totally out of control. Unsubstantiated claims, unexpected revocations made without warning and a foundation of "people and browsers should work differently to make EV useful" is just polluting the airwaves with FUD. It will become less of an issue if (when) Chrome proceeds with deprecating the visual EV indicator altogether, but until then Scott and I will certainly keep calling it out when crazy claims are made. And this is really the crux of the issue; claims either for or against EV (and indeed visual indicators in general) need to be substantiated. This just simply isn't happening in the pro-EV camp, but it is happening on the Google front by virtue of the ongoing user testing we mention. Let's be pro-evidence and push for solid research and facts.
Also this week, 2 new Pluralsight courses! In fact, there's still 2 more with Casey Ellis on bug bounties yet to come too but I'll write about those next week. I'll be off traveling again next week (albeit only for a day domestically), and I'll do my best to get some more blogging done, but time has been absolutely flying by lately and I need to start thinking about preparation for the Europe trip in June too. Regardless, let me see what I can pump out and at the very least, there'll always be something crazy and new to talk about next week anyway.
It's a new Pluralsight course! Yes, I know I said that yesterday too, but this is a new new Pluralsight course and it's the second part in our series on Creating a Security-centric Culture. As I wrote there back in Jan, we're doing this course on a quarterly basis and putting it out in front of the paywall so in other words, it's free! It's also a combination of video and screencast which means you see a lot of this:
As for the topic in the title, shadow IT has always been an interesting one and certainly something I spent a great deal of time dealing with in the corporate environment. A quick definition for those who may not be familiar with the term:
Shadow IT is information-technology systems and solutions built and used inside organisations without explicit organisational approval
Frequently, this practice reared its head in discussions like these:
Bob from accounting: Hey Troy, we need help with our Access database
Troy: Uh, ok, first I've heard but what's the issue Bob?
Bob: Well, the system we use to track all our marketing spend on the new campaign has started running really slow. I mean it was fine when just I was using it, but then Jane and the team needed access too so I put it on a file share.
Troy: So hang on - you've got a bunch of people using an Access database on a file share, this doesn't do anything important, right?!
Bob: Oh yeah, it's critical, we can't be without it. The whole team needs it which is why it's on that file share, it's got a heap of really important sensitive info in it they all need to use.
I no longer work there. This sort of thing happens all the time in organisations of all sizes. But it was one thing to have an exposed Access database within an organisation a few years ago, it's quite another thing when today the equivalent is an exposed S3 bucket facing the world! And that's one of the things that's really increased the risk of shadow IT; the easy access to cloud services that by design, allow anyone to publish data to the world. And often they do.
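To make that risk concrete, here's a minimal sketch of how easy it is for anyone to test whether a bucket is exposed. The bucket name below is hypothetical; the check simply relies on S3 answering unauthenticated listing requests with a 200 and an XML listing when a bucket is public, and an error when it's not:

```python
import urllib.error
import urllib.request

def listing_url(bucket: str) -> str:
    # S3 serves bucket listings at a predictable virtual-hosted-style URL
    return f"https://{bucket}.s3.amazonaws.com/?list-type=2"

def bucket_is_public(bucket: str) -> bool:
    """True if an unauthenticated request can list the bucket's contents."""
    try:
        with urllib.request.urlopen(listing_url(bucket), timeout=10) as resp:
            return resp.status == 200  # public listing succeeded
    except (urllib.error.HTTPError, urllib.error.URLError):
        return False  # AccessDenied (403), NoSuchBucket (404) or network error
```

Obviously this only tells you about one bucket whose name you already know; the point is how little it takes for anyone on the internet to do exactly the same against yours.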
This course looks at how shadow IT is changing, what it means in a cloud era and what practices we can apply to address it. Importantly though, it recognises that shadow IT exists for a reason! For example, I talk a lot about the incident from back in December with British politicians openly admitting to sharing their passwords. This was clearly the wrong thing for them to do but equally, it wasn't done out of malice but rather because they clearly hadn't been given the support they needed to use the delegated access controls built into Office 365.
This is a very pragmatic, practical course and it's also only 39 minutes long so it's easily consumable (we're targeting about 45 mins for these, the first one went over then this one obviously went under). I hope it helps people think differently about shadow IT, not just in terms of the risks it presents, but in terms of the role we as technology professionals have to play by ensuring there are better ways available to the businesses we support.
Just a tad over 5 years ago, I released my first ever Pluralsight course - OWASP Top 10 Web Application Security Risks for ASP.NET. More than 32k people have listened to more than 78k hours of content in this course making it not just the most popular course I've ever released, but also keeping it as my most popular in the library even today by a long way. Developers have a huge appetite for OWASP content and I'm very happy to now give them even more Top 10 goodness in the course I'm announcing here - Play by Play: OWASP Top 10 2017.
This time, I've teamed up with Andrew van der Stock who was an integral part of the team involved in putting the 2017 edition of the Top 10 together.
I can't think of anyone who understands this resource better than him and frankly, it's a bit of a coup for us to have convinced Andrew to do this course. He's added awesome insight including why XSS is now so much further down the list, why CSRF has dropped off entirely and why we now have XXE and insecure deserialisation in the Top 10 for the first time. Plus, he's got some general insights into the changing infosec landscape, for example how the emergence of microservices has meant internal apps that had never previously seen the light of day are now being exposed to risks they'd never seen before.
Because this is a "Play by Play" course, it's only an hour and 12 minutes of easy listening. It's a conversation between Andrew and myself and, of course, we do get into some technical detail but it's designed to be the sort of thing you can watch over lunch, on the daily commute or even just listen to without the video. I've done a heap of these in the past and they've all been well-received so I hope this one goes down equally well.
Oh - and just to save you saying it - yes, I sound terrible. We recorded this in San Francisco in March and I'd just come from a week in Seattle followed by a keynote in Vegas and just got myself run down. But regardless, I battled through and I hope you enjoy the fruits of the labour in this latest course. Play by Play: OWASP Top 10 2017 is now live!
This week, Scott Helme is getting bitten by Aussie critters whilst working from a desert island. He's here on the Gold Coast for the NDC Security event next week so I thought we'd record the update together so we grabbed a couple of cold ones, wandered down to the backyard and recorded there.
We cover off a bunch of bits and pieces related to things we're working on together (workshops and Report URI) as well as some (mostly) commonly held views about HTTPS, EV certs and visual indicators. Oh - and I forgot to mention killing off the non-anonymous endpoints for Pwned Passwords last week so that's in here this week too. Hope you enjoy the banter with Scott, he's still here next Friday so we'll do it all again then too.
Remember when web security was all about looking for padlocks? I mean in terms of the advice we gave your everyday people, that's what it boiled down to - "look for the padlock before entering passwords or credit card info into a website". Back in the day, this was pretty solid advice too as it gave you confidence not just in the usual confidentiality, integrity and authenticity of the web traffic, but in the legitimacy of the site as well. If it had a padlock, you could trust it and there weren't a lot of exceptions to that.
SUPERCON: The Must-Have Toy This Christmas - YouTube
See the problem? It's this quote:
Look, in there, you need a padlock when you pay for stuff. If there isn't one, the website could be fake.
The implication here is that this site could be fake:
Whilst this site is legitimate:
This led to the Advertising Standards Association in the UK (ASA) classifying the ad as "misleading" and nuking it off the air. Barclays responded by saying they'd merely intended to advise consumers to look for the padlock before paying (which is reasonable) and to "always check the seller's genuine" (which is much harder). But clearly, when you watch that ad and consider what everyday people will take away from it, it is indeed misleading. ASA upheld their assessment and countered with the following very reasonable statement:
consumers were generally unlikely to have a detailed understanding of the website padlock symbol and the general steps required to ensure a website was safe
And let's be fair to Barclays - it's not just them offering outdated and inaccurate advice about the true meaning of the padlock:
Taking a mandatory Cyber Awareness Course. The correct answer to this question is: The traffic between the browser and the webshop is encrypted. But the option does not exist. cc @troyhuntpic.twitter.com/bRM5BnVC6l
Now I'm going to work on the assumption that readers here generally have a good grasp of why this no longer makes any sense, but just as a really quick recap: HTTPS (and consequently the padlock in the browser), is rapidly becoming ubiquitous as the barriers to adoption fall. Most notably, they're now free through services like Let's Encrypt and Cloudflare and they're dead easy to setup so there goes another barrier too. As of a few months ago, 38% of the world's top 1 million websites were served over HTTPS and that figure was up by a third in only 6 months, a trend that's continued for some time now. But the presence of HTTPS is in no way a judgement call on the trustworthiness of the site:
HTTPS & SSL doesn't mean "trust this." It means "this is private." You may be having a private conversation with Satan.
As with other forms of encryption, HTTPS is morally neutral; it could be a good site, it could be a bad site, who knows, the padlock icon doesn't have anything to do with that. Which brings me to the title of this blog post - the positive visual indicators we've become so accustomed to are increasingly useless and instead, we need to be focusing more on the negative ones. Let me explain:
And George is totally right - a lot of people would fall for this, particularly if they were following Barclays' advice. Clearly, the mere presence of a positive visual indicator is an insufficient means of making a call on the legitimacy of the website. And yes, we all know that the padlock never meant the site wasn't going to be nasty, but we also know the history with the way the masses have been educated about it and the assumptions they consequently draw. Which brings me to the next point - let's talk about negative visual indicators.
The Value of Negative Indicators
The Spotify example above is going to serve multiple purposes in this blog post and the first one is that it shows how misleading the padlock icon can be. But the second purpose it serves is to show what a negative visual indicator looks like, which is exactly what you'll see if you go to membership[.]spotifyuser[.]validationsupport[.]userbilling-secure.com today:
Not real subtle, right? Whilst George was spot on about people falling for the site due to the presence of the positive indicator, nobody is falling for it with the negative indicator! There's a simple and obvious reason why:
Positive security indicators are readily obtainable or spoofable, but nobody ever wants to show a negative indicator on a legitimate site!
Now, I also said "spoofable" because we have situations like this:
That's from my post of many years ago on why I'm the world's greatest lover, a time well before ubiquitous HTTPS but that didn't stop websites proclaiming their security prowess by way of images on the page.
Back to George's Spotify phishing site for a moment, his tweet came through during the night for me and it was already flagged as a deceptive site by the time I woke up and saw it. The certificate transparency logs suggest the cert was only obtained 10 hours before George's tweet; based on the time I saw Chrome's warning, there was a maximum of about an 18 hour window between the phisher getting the cert and users of Google's browser seeing a massive negative visual indicator. So, the other purpose this example serves is to illustrate that even in the presence of HTTPS, we have very effective controls available to mitigate phishing attacks. For all the commercial CA people decrying Let's Encrypt issuing certs to phishing sites, let's not forget this control which, especially in light of revocation being broken anyway, is enormously powerful.
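Checking those CT timestamps yourself is easy too. One well-known CT search service, crt.sh, exposes a simple JSON interface over the aggregated logs; a quick sketch (the field names in the trailing comment are crt.sh's, and the domain is purely illustrative):

```python
import json
import urllib.parse
import urllib.request

def crtsh_query_url(domain: str) -> str:
    # crt.sh searches the aggregated certificate transparency logs
    return "https://crt.sh/?q=" + urllib.parse.quote(domain) + "&output=json"

def logged_certs(domain: str) -> list:
    """Return every certificate the CT logs record for the given domain."""
    with urllib.request.urlopen(crtsh_query_url(domain), timeout=30) as resp:
        return json.loads(resp.read())

# Each entry carries timestamps and the issuer, so you can see roughly
# when a cert for a suspicious domain was obtained, e.g.:
# for cert in logged_certs("userbilling-secure.com"):
#     print(cert["not_before"], cert["issuer_name"])
```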
As a more general theme beyond just phishing, negative visual indicators can be enormously effective in other scenarios too:
Want to go to the Daily Mail over the secure scheme? You're going to get a great big warning before you need to drill down into the advanced section and proceed to what's clearly then marked as an unsafe link. And again, we come back to the point about training people to look for negative indicators and act on those rather than to simply assume everything is fine in the presence of a positive one.
The Futility of Neutral, User-Interpreted Indicators (URLs)
Last month, I hopped over to Hawaii for the inaugural Loco Moco Sec conference. One of the most highly anticipated talks for me was Emily Schechter's; she's a product manager on the Chrome team. Emily did a talk titled "The Trouble with URLs, and how Humans (Don't) Understand Site Identity" and if you have any interest in the topic at all, it's essential viewing:
Emily Schechter: The Trouble with URLs, and how Humans (Don't) Understand Site Identity - YouTube
When it comes to the topic of how humans interact with browsers and how they make trust decisions, few people are better equipped to comment than Emily, not least because Google invests a heap of effort into focus groups and other means of measuring how people actually behave. As Emily spoke, I snapped pics from the front row and tweeted a few of them:
Chrome will hide the “https” scheme prefix in the future as it’s redundant with the padlock and “secure” text pic.twitter.com/lYw84gNIYx
Emily talks about why Google is intending to hide the HTTPS scheme at about the 5-minute mark in that video and it's worth a watch. It makes sense to me, but my tweet did result in some rather "enthusiastic" feedback from the Twitters, for example:
Here’s a slide @emschec showed - which one is the correct site? How does a user know? People making security decisions based on the URL alone is fraught with problems. pic.twitter.com/oUDTYi7b9V
I've included my reply in there because Emily's subsequent slide explains the problem perfectly. Humans are lousy at interpreting the URL which is precisely why the aforementioned Netflix phishing site works and same again for the Spotify one. Here's another great example:
Email from Twitter: How do I know an email is from Twitter? Links in this email will start with “https://” and contain “https://t.co/FjKaCAmjKd.”
So https://twitter.com.evil-example.com is safe?
This tweet is bang on and again, it illustrates why the URL alone - which is frequently only partially displayed anyway - is an absolutely lousy indicator of trustworthiness. Still don't believe me? How about this site:
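To see just how easy the string-prefix version of that Twitter advice is to game, here's a tiny Python sketch contrasting it with an actual hostname comparison:

```python
from urllib.parse import urlparse

def naive_check(url: str) -> bool:
    # The flawed advice: trust anything that *starts with* the real address
    return url.startswith("https://twitter.com")

def hostname_check(url: str) -> bool:
    # Safer: parse the URL and compare the actual hostname exactly
    # (or allow genuine subdomains of it)
    host = urlparse(url).hostname or ""
    return host == "twitter.com" or host.endswith(".twitter.com")

phish = "https://twitter.com.evil-example.com/login"
print(naive_check(phish))     # True  - the prefix check is fooled
print(hostname_check(phish))  # False - the real hostname belongs to evil-example.com
```

The naive check passes because the registrable domain is whatever comes last, not first - exactly the property phishers exploit.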
Because this is precisely what EV is - it only works if people change behaviour when they don't see it! The commercial CAs will tell you that you need EV to increase confidence and differentiate yourself from the phishing sites, but it just simply doesn't work that way:
Frankly, the CAs are struggling to find any meaningful role to play as it relates to phishing. Phishing sites have certs, revocation is broken and EV is useless not only due to the points mentioned above, but because even the commercial CAs aren't sure who should have EV certs! For example, from that talk of mine:
That's stripe.ian.sh with an EV cert that shows the name of the company he registered (Stripe Inc) in Safari on iOS but just the domain name itself in Chrome on iOS (it's entirely up to the client how they choose to display the presence of EV, if they display it at all). However, as of today, every browser just displays the domain name because Comodo revoked his EV cert. So he went and got one from GoDaddy and... that one was revoked too! Why? Well apparently, there were a couple of risk factors and whilst they were never clearly defined, it's pretty obvious what they were:
There were risk factors for the EV business model.
When I first saw that Barclays ad appear back in November, I went and registered totally-trustworthy-site-because-it-has-a-padlock.com, fully intending to make a bit of a thing out of how misleading the whole "look for the padlock" message was. That was until ASA beat me to it! But the passage of time has also provided so many of the other examples mentioned above just since last year, not least of which is Google's proposed changes to visual indicators in the browser.
On those changes, there will likely be a time where the positive visual indicator that is the padlock can be removed entirely. Think about it - when (almost) every site is HTTPS anyway, why have it? You could instead fall back to ever more negative visual indicators when sites aren't served over HTTPS and we're only a couple of months out from seeing the beginning of that. Wouldn't it be great if we could kill the padlock and the indication of the HTTPS scheme off altogether and just flag the exceptions? We're getting there.
So what can we conclude from all of this? Pretty much per the title, the education needs to change from looking for those positive visual indicators as a sign of trustworthiness to looking at the negative ones as a sign of a site to approach with caution. The browsers are increasingly helping us to do this and indeed Chrome in particular has led the charge putting warnings on insecure login pages then insecure input pages of any kind and in the very near future, warnings on all insecure pages regardless of what they do. So let's focus on those; drive awareness of that and accept that padlocks icons are rapidly becoming a sign of a bygone era.
When processing requests to establish and change memorized secrets, verifiers SHALL compare the prospective secrets against a list that contains values known to be commonly-used, expected, or compromised.
In other words, once a password has appeared in a data breach and it ends up floating around the web for all sorts of nefarious parties to use, don't let your customers use that password! Now, as I say in the aforementioned blog post (and in the post launching V1 before it), it's not always that black and white and indeed outright blocking every pwned password has all sorts of usability ramifications as well. But certainly, many organisations have taken precisely this approach and have used the service to keep known bad passwords out of their systems.
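For anyone wanting to implement that check, the Pwned Passwords API supports it via the k-anonymity range endpoint: hash the password with SHA-1, send only the first 5 characters of the hash, then match the returned suffixes locally so the password itself never leaves your system. A minimal Python sketch:

```python
import hashlib
import urllib.request

def hash_split(password: str):
    """Uppercase SHA-1 hex digest, split into the 5-char prefix and the rest."""
    digest = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    return digest[:5], digest[5:]

def pwned_count(password: str) -> int:
    """How many times this password appears in the Pwned Passwords corpus."""
    prefix, suffix = hash_split(password)
    url = f"https://api.pwnedpasswords.com/range/{prefix}"
    with urllib.request.urlopen(url, timeout=10) as resp:
        # The API returns every suffix sharing the prefix, one per line,
        # in the form SUFFIX:COUNT - we match the rest of the hash locally
        for line in resp.read().decode().splitlines():
            candidate, _, count = line.partition(":")
            if candidate == suffix:
                return int(count)
    return 0  # never seen in a known breach
```

A registration or password-change form can then reject (or at least warn on) any password where the count comes back non-zero.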
But I always wondered - what sort of percentage of passwords would this actually block? I mean if you had 1 million people in your system, is it a quarter of them using previously breached passwords? A half? More? What I needed to test this theory was a data breach that contained plain text passwords, had a significant volume of them and it had to be one I hadn't seen before and didn't form part of the sources I used to create the Pwned Passwords list in the first place. (Strictly speaking, I could have used a breach with hashed passwords and used the source Pwned Passwords as a dictionary in a hash cracking exercise, but plain text was always going to be much easier, much faster and would allow me to quickly see which passwords weren't already in my list.)
And then CashCrate came along:
New breach: CashCrate had 6.8M records breached in November 2016. The data included names, physical and email addresses and a combination of both plain text passwords and MD5 hashes. 71% were already in @haveibeenpwned. Read more: https://t.co/NYUgAiAcdg
Of those 6.8M records, 2,232,284 of the passwords were in plain text. The remainder were MD5 hashes, assumedly because they were in the process of rolling over to this hashing algorithm when the breach occurred (although when you have all the source passwords in plain text to begin with, it's kinda weird they didn't just hash all those in one go). So to the big question raised earlier, how many of these were already in Pwned Passwords? Or in other words, how many CashCrate subscribers were using terrible passwords already known to have been breached?
In total, there were 1,910,144 passwords out of 2,232,284 already in the Pwned Passwords set. In other words, 86% of subscribers were using passwords already leaked in other data breaches and available to attackers in plain text.
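The check itself is nothing fancy; given the plain text passwords from a breach and a set of known pwned passwords, it's just an overlap percentage. A toy sketch with made-up sample data (not the real CashCrate figures):

```python
def breached_overlap(breach_passwords, known_pwned) -> float:
    """Percentage of a breach's plain text passwords already known to be pwned."""
    passwords = list(breach_passwords)
    hits = sum(1 for p in passwords if p in known_pwned)
    return 100.0 * hits / len(passwords)

# Toy illustration only - in practice known_pwned is the full
# Pwned Passwords set loaded into something with fast membership tests:
known = {"123456", "password", "qwerty"}
sample = ["123456", "password", "hunter2", "correct horse battery staple"]
print(f"{breached_overlap(sample, known):.0f}%")  # 50%
```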
So, what sort of passwords are we talking about here? All the usual terrible ones you'd expect people to choose which, by order of prevalence in the Pwned Password data set, means passwords like these:
These are terrible and granted, who knows how far back they date, but as of today you can still sign up with a password of "123456" if you'd like:
You can't use "12345" - that's not long enough - and its appearance in position 10 above likely indicates an even weaker password policy in the past. Obviously, the password criteria are terrible, but I appreciate some people may suggest the nature of the site predisposes people to making terrible password choices (it's a "cash-for-surveys" site).
But I was also interested in some of the more obscure CashCrate passwords that were already in my data set and found ones like these that I've only ever seen once before (I'll substitute several characters in each to protect the source password but still illustrate the point):
nikki i love u
i like to have sex
I didn't substitute any characters in the last 3 because I wanted to illustrate that even pass phrases can be useless once exposed. Having a good password isn't enough, uniqueness still matters enormously.
So which passwords weren't in Pwned Passwords already? Predictably, some of the most popular ones were named after the site itself:
And so on and so forth (the last one makes sense once you think about it). Many of the other most common ones were just outright terrible in other ways, for example number combinations or a person's name followed by a number (some quite unique variants appeared many times over suggesting possible bulk account creation). All of those will go into the next release of Pwned Passwords which will go out once there's a sufficiently large volume of new passwords.
Getting back to the whole point of the service for a moment, traditional password complexity rules are awful and they must die a fiery death:
Getting back to the issue of how terrible passwords are and the impact this then has on individuals and organisations alike, one of the big problems I've seen really accelerate over the last year is credential stuffing. In other words, bad guys grabbing huge stashes of username and password pairs from other data breaches and seeing which ones work on totally unrelated sites. I have a much more comprehensive blog post on this in the works and it's a non-trivial challenge I want to devote more time to, but imagine this:
If you're responsible for running a website, how are you going to be resilient against attackers who come to your site with legitimate usernames and passwords of your members?
And just to make things even harder, the site being attacked isn't necessarily viewed as the victim either. Earlier this year, the FTC had this to say:
The FTC's message is loud and clear: If customer data was put at risk by credential stuffing, then being the innocent corporate victim is no defence to an enforcement case. Rather, in the FTC's view companies holding sensitive customer information should be taking affirmative action to reduce the risk of credential stuffing.
That's a hard challenge and the solution is non-trivial too. Again, I've got something more comprehensive in draft and I'll definitely come back to that but for now, this is a great start:
I like this because it is trivial! It's not the whole picture in terms of defences, but it's a great start. I don't know if EVE Online would have 86% of members using known breached passwords (it's not exactly "cash-for-surveys", but then again, it's also used by a lot of kids), but I do know that it would still be a statistically significant number. (Incidentally, this should go live on EVE Online about the same time I plan to publish this blog post.)
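Pwned password checks aside, the question I posed earlier about being resilient to attackers armed with legitimate credentials also calls for basic abuse controls. As one illustrative (and deliberately simplified) example, here's a sliding-window throttle on failed logins per account; a real deployment would also key on source IP, add CAPTCHAs or step-up auth, and persist the counters somewhere shared rather than in one process:

```python
import time
from collections import defaultdict, deque

class LoginThrottle:
    """Toy sliding-window throttle on failed logins per account."""

    def __init__(self, max_failures: int = 5, window_seconds: int = 300):
        self.max_failures = max_failures
        self.window = window_seconds
        self.failures = defaultdict(deque)  # username -> failure timestamps

    def _trim(self, q, now):
        # Drop failures that have aged out of the window
        while q and q[0] < now - self.window:
            q.popleft()

    def record_failure(self, username: str, now: float = None):
        now = time.time() if now is None else now
        q = self.failures[username]
        q.append(now)
        self._trim(q, now)

    def is_blocked(self, username: str, now: float = None) -> bool:
        now = time.time() if now is None else now
        q = self.failures[username]
        self._trim(q, now)
        return len(q) >= self.max_failures
```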
As I come across more plain text data breaches (which is inevitable), I'll do the same sanity check again. For now, I've taken the 322,140 passwords not already in Pwned Passwords, distilled it down to 307,016 unique ones and queued those up for version 3 of the password list. While you're waiting for that one, it might be worth thinking about how many subscribers of your own service are using a previously seen password because if it's even a fraction of the CashCrate number, that's rather worrying.