Your mobile phone is giving away your approximate location all day long. This isn’t exactly a secret: It has to share this data with your mobile provider constantly to provide better call quality and to route any emergency 911 calls straight to your location. But now, the major mobile providers in the United States — AT&T, Sprint, T-Mobile and Verizon — are selling this location information to third party companies — in real time — without your consent or a court order, and with apparently zero accountability for how this data will be used, stored, shared or protected.

Think about what’s at stake in a world where anyone can track your location at any time and in real time. Right now, the only way to be free of constant tracking is to remove the SIM card from your mobile device and never put it back in unless you want people to know where you are.

It may be tough to put a price on one’s location privacy, but here’s something of which you can be sure: The mobile carriers are selling data about where you are at any time, without your consent, to third parties, for probably far less than you might be willing to pay to secure it.

The problem is that as long as anyone but the phone companies and law enforcement agencies with a valid court order can access this data, it is always going to be at extremely high risk of being hacked, stolen and misused.

Consider just two recent examples. Earlier this month The New York Times reported that a little-known data broker named Securus was selling local police forces around the country the ability to look up the precise location of any cell phone across all of the major U.S. mobile networks. Then it emerged that Securus had been hacked, and its database of hundreds of law enforcement officer usernames and passwords plundered. We also found out that Securus ultimately obtained its data from a California-based location-tracking firm called LocationSmart.

On May 17, KrebsOnSecurity broke the news of research by Carnegie Mellon University PhD student Robert Xiao, who discovered that a LocationSmart try-before-you-buy opt-in demo of the company’s technology was wide open — allowing real-time lookups from anyone on anyone’s mobile device — without any sort of authentication, consent or authorization.

Xiao said it took him all of about 15 minutes to discover that LocationSmart’s lookup tool could be used to track the location of virtually any mobile phone user in the United States.

Securus seems equally clueless about protecting the priceless data entrusted to it by LocationSmart. Over the weekend KrebsOnSecurity discovered that someone — almost certainly a security professional employed by Securus — has been uploading dozens of emails, PDFs, password lists and other files to Virustotal.com — a service owned by Google that can be used to scan any submitted file against dozens of commercial antivirus tools.

Antivirus companies willingly participate in Virustotal because it gives them early access to new, potentially malicious files being spewed by cybercriminals online. Virustotal users can submit suspicious files of all kinds; in return they’ll see whether any of the 60+ antivirus tools think the file is bad or benign.

One basic rule that all Virustotal users need to understand is that any file submitted to Virustotal is also available to customers who purchase access to the service’s file repository. Nevertheless, for the past two years someone at Securus has been submitting a great deal of information about the company’s operations to Virustotal, including copies of internal emails and PDFs about visitation policies at a number of local and state prisons and jails that made up much of Securus’ business.
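Programmatic lookups work the same way. Here is a minimal sketch using VirusTotal’s public v3 REST API, which lets anyone holding an API key check whether a report already exists for a file’s SHA-256 hash (the key string below is a placeholder you would replace with your own):

```python
import hashlib
import urllib.request

def sha256_of(data: bytes) -> str:
    # VirusTotal indexes reports by hash, so a file can often be
    # looked up without uploading it at all.
    return hashlib.sha256(data).hexdigest()

def build_report_request(file_hash: str, api_key: str) -> urllib.request.Request:
    # v3 file-report endpoint; the API key is passed in the
    # "x-apikey" request header.
    return urllib.request.Request(
        f"https://www.virustotal.com/api/v3/files/{file_hash}",
        headers={"x-apikey": api_key},
    )

digest = sha256_of(b"suspicious sample bytes")
req = build_report_request(digest, "YOUR_API_KEY")
print(req.full_url)
```

Sending that request returns a JSON report with per-engine verdicts. Crucially, anything you upload (rather than merely look up by hash) enters the shared corpus that paying customers can mine, which is exactly the trap the Securus uploader fell into.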

Some of the many, many files uploaded to Virustotal.com over the years by someone at Securus Technologies.

One of the files, submitted on April 27, 2018, is titled “38k user pass microsemi.com – joomla_production.mic_users_blockedData.txt”.  This file includes the names and what appear to be hashed/scrambled passwords of some 38,000 accounts — supposedly taken from Microsemi, a company that’s been called the largest U.S. commercial supplier of military and aerospace semiconductor equipment.

Many of the usernames in that file do map back to names of current and former employees at Microsemi. KrebsOnSecurity shared a copy of the database with Microsemi, but has not yet received a reply. Securus also has not responded to requests for comment.
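Why are even hashed passwords a serious leak? Because hashing is deterministic, anyone holding the file can test candidate passwords offline at machine speed, with no lockouts or rate limits. A hypothetical sketch (the hash scheme, usernames and passwords here are all invented; the real file’s format is unknown):

```python
import hashlib

def h(pw: str) -> str:
    # Stand-in for whatever unsalted hash the leaked file uses.
    return hashlib.sha256(pw.encode()).hexdigest()

# Invented entries in the style of a leaked user/hash dump.
leaked = {"jdoe": h("letmein"), "asmith": h("hunter2")}

# An attacker simply hashes a wordlist and compares the results.
wordlist = ["password", "123456", "letmein", "qwerty", "hunter2"]
cracked = {
    user: candidate
    for user, digest in leaked.items()
    for candidate in wordlist
    if h(candidate) == digest
}
print(cracked)
```

Salted, deliberately slow hashes (bcrypt, scrypt, Argon2) raise the per-guess cost enormously; fast unsalted hashes offer little protection once a file like this escapes.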

These files that someone at Securus apparently submitted regularly to Virustotal also provide something of an internal roadmap of Securus’ business dealings, revealing the names and login pages for several police departments and jails across the country, such as the Travis County Jail site’s Web page to access Securus’ data.

Check out the screen shot below. Notice that forgot password link there? Clicking that prompts the visitor to enter their username and to select a “security question” to answer. There are but three questions: “What is your pet’s name? What is your favorite color? And what town were you born in?” There don’t appear to be any limits on the number of times one can attempt to answer a secret question.

Choose wisely and you, too, could gain the ability to look up anyone’s precise mobile location.

Given such robust, state-of-the-art security, how long do you think it would take for someone to figure out how to reset the password for any authorized user at Securus’ Travis County Jail portal?
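Not long at all. With three questions drawn from tiny answer pools and no apparent limit on attempts, the entire guess space can be enumerated by a short script. A hypothetical illustration (the portal check is simulated locally here, and the answer lists are invented):

```python
import itertools

# Each "security question" draws from a tiny pool of common answers,
# so the total guess space is just the product of the pool sizes.
common_pets = ["max", "bella", "charlie", "lucy", "buddy"]
common_colors = ["blue", "red", "green", "purple", "black"]
common_towns = ["springfield", "franklin", "clinton", "salem"]

# Stand-in for the portal's check; we pick a "correct" answer so the
# loop terminates. A real attacker would hit the live form instead.
secret = ("buddy", "green", "salem")
def portal_accepts(pet, color, town):
    return (pet, color, town) == secret

attempts = 0
for guess in itertools.product(common_pets, common_colors, common_towns):
    attempts += 1
    if portal_accepts(*guess):
        break

print(f"recovered in {attempts} of {5 * 5 * 4} possible guesses")
```

With no rate limiting, even a guess space thousands of times larger than this toy one falls in minutes.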

Yes, companies like Securus and LocationSmart have been careless with securing our prized location data, but why should they care if their paying customers are happy and the real-time data feeds from the mobile industry keep flowing?

No, the real blame for this sorry state of affairs comes down to AT&T, Sprint, T-Mobile and Verizon. T-Mobile was the only one of the four major providers that admitted providing Securus and LocationSmart with the ability to perform real-time location lookups on their customers. The other three carriers declined to confirm or deny that they did business with either company.

As noted in my story last Thursday, LocationSmart included the logos of the four carriers on its home page — in addition to those of several other major firms (that information is no longer available on the company’s site, but it can still be viewed by visiting this historical record of it over at the Internet Archive).

Now, don’t think for a second that these two tiny companies are the only ones with permission from the mobile giants to look up such sensitive information on demand. At a minimum, each one of these companies can in theory resell (or leak) this information and access to others. On May 15, ZDNet reported that Securus was getting its data from the carriers by going through an intermediary: 3Cinteractive, which was getting it from LocationSmart.

However, it is interesting that the first insight we got that the mobile firms were being so promiscuous with our private location data came in the Times story about law enforcement officials seeking the ability to access any mobile device’s location data in real time.

All technologies are double-edged swords, which means that each can be used both for good and malicious ends. As much as police officers may wish to avoid the hassle and time constraints of having to get a warrant to determine the precise location of anyone they please whenever they wish, those same law enforcement officers should remember that this technology works both ways: It also can just as easily be abused by criminals to track the real-time movements of police and their families, informants, jurors, witnesses and even judges.

Consider the damage that organized crime syndicates — human traffickers, drug smugglers and money launderers — could inflict armed with an app that displays the precise location of every uniformed officer, whether 300 feet away or across the country. All because they happen to know the cell phone number tied to each law enforcement official.

Maybe you have children or grandchildren who — like many of their peers these days — carry a mobile device at all times for safety and for quick communication with parents or guardians. Now imagine that anyone in the world has the instant capability to track where your kid is at any time of day. All they’d need is your kid’s digits.

Maybe you’re the current or former target of a stalker, jilted ex-spouse, or vengeful co-worker. Perhaps you perform sensitive work for the government. All of the above-mentioned parties and many more are put at heightened personal risk by having their real-time location data exposed to commercial third parties.

Some people might never sell their location data for any price: I suspect most of us would like this information always to be private unless and until we change the defaults (either in a binary “on/off” way or app-specific). On the other end of the spectrum there are probably plenty of people who don’t care one way or another provided that sharing their location information brings them some real or perceived financial or commercial benefit.

The point is, for many of us location privacy is priceless because, without it, almost everything else we’re doing to safeguard our privacy goes out the window.

And this sad reality will persist until the mobile providers state unequivocally that they will no longer sell or share customer location data without having received and validated some kind of legal obligation — such as a court-ordered subpoena.

But even that won’t be enough, because companies can and do change their policies all the time without warning or recourse (witness the current reality). It won’t be enough until lawmakers in this Congress step up and do their jobs — to prevent the mobile providers from selling our last remaining bastion of privacy in the free world to third party companies who simply can’t or won’t keep it secure.

The next post in this series will examine how we got here, and what Congress and federal regulators have done and might do to rectify the situation.

T-Mobile is investigating a retail store employee who allegedly made unauthorized changes to a subscriber’s account in an elaborate scheme to steal the customer’s three-letter Instagram username. The modifications, which could have let the rogue employee empty bank accounts associated with the targeted T-Mobile subscriber, were made even though the victim customer already had taken steps recommended by the mobile carrier to help minimize the risks of account takeover. Here’s what happened, and some tips on how you can protect yourself from a similar fate.

Earlier this month, KrebsOnSecurity heard from Paul Rosenzweig, a 27-year-old T-Mobile customer from Boston who had his wireless account briefly hijacked. Rosenzweig had previously adopted T-Mobile’s advice to customers about blocking mobile number port-out scams, an increasingly common scheme in which identity thieves armed with a fake ID in the name of a targeted customer show up at a retail store run by a different wireless provider and ask that the number be transferred to the competing mobile company’s network.

So-called “port out” scams allow crooks to intercept your calls and messages while your phone goes dark. Porting a number to a new provider shuts off the phone of the original user, and forwards all calls to the new device. Once in control of the mobile number, thieves who have already stolen a target’s password(s) can request any second factor that is sent to the newly activated device, such as a one-time code sent via text message or an automated call that reads the one-time code aloud.

In this case, however, the perpetrator didn’t try to port Rosenzweig’s phone number: Instead, the attacker called multiple T-Mobile retail stores within an hour’s drive of Rosenzweig’s home address until he succeeded in convincing a store employee to conduct what’s known as a “SIM swap.”

A SIM swap is a legitimate process by which a customer can request that a new SIM card (the tiny, removable chip in a mobile device that allows it to connect to the provider’s network) be added to the account. Customers can request a SIM swap when their existing SIM card has been damaged, or when they are switching to a different phone that requires a SIM card of another size.

However, thieves and other ne’er-do-wells can abuse this process by posing as a targeted mobile customer or technician and tricking employees at the mobile provider into swapping in a new SIM card for that customer on a device that they control. If successful, the SIM swap accomplishes more or less the same result as a number port out (at least in the short term) — effectively giving the attackers access to any text messages or phone calls that are sent to the target’s mobile account.

Rosenzweig said the first inkling he had that something wasn’t right with his phone was on the evening of May 2, 2018, when he spotted an automated email from Instagram. The message said the email address tied to the three-letter account he’d had on the social media platform for seven years — instagram.com/par — had been changed. He quickly logged in to his Instagram account, changed his password and then reverted the email on the account back to his original address.

By this time, the SIM swap conducted by the attacker had already been carried out, although Rosenzweig said he didn’t notice his phone displaying zero bars and no connection to T-Mobile at the time because he was at home and happily surfing the Web on his device using his own wireless network.

The following morning, Rosenzweig received another notice — this one from Snapchat — stating that the password for his account there (“p9r”) had been changed. He subsequently reset that password and then enabled two-factor authentication on his Snapchat account.

“That was when I realized my phone had no bars,” he recalled. “My phone was dead. I couldn’t even call 611” [the mobile short number that all major wireless providers make available to reach their customer service departments].

It appears that the perpetrator of the SIM swap abused not only internal knowledge of T-Mobile’s systems, but also a lax password reset process at Instagram. The social network allows users to enable notifications on their mobile phone when password resets or other changes are requested on the account.

But this isn’t exactly two-factor authentication, because it also lets users reset their passwords via their mobile account, by requesting a password reset link sent to their mobile device. Thus, if someone is in control of your mobile phone account, they can reset your Instagram password (and probably the passwords for a bunch of other types of accounts).

Rosenzweig said even though he was able to reset his Instagram password and restore the old email address tied to the account, the damage was already done: All of the images and other content he’d shared on Instagram over the years were still tied to his account, but the attacker had succeeded in stealing his “par” username, leaving him with the slightly less sexy “par54384321” (apparently chosen at random by either Instagram or the attacker).

As I wrote in November 2015, short usernames are something of a prestige or status symbol for many youngsters, and some are willing to pay surprising sums of money for them. Known as “OG” (short for “original” and also “original gangster”) in certain circles online, these can be usernames for virtually any service, from email accounts at Webmail providers to social media services like Instagram, Snapchat, Twitter and YouTube.

People who traffic in OG accounts prize them because they can make the account holder appear to have been a savvy, early adopter of the service before it became popular and before all of the short usernames were taken.

Rosenzweig said a friend helped him work with T-Mobile to regain control over his account and deactivate the rogue SIM card. He said he’s grateful the attackers who hijacked his phone for a few hours didn’t try to drain bank accounts that also rely on his mobile device for authentication.

“It definitely could have been a lot worse given the access they had,” he said.

But throughout all of this ordeal, it struck Rosenzweig as odd that he never once received an email from T-Mobile stating that his SIM card had been swapped.

“I’m a software engineer and I thought I had pretty good security habits to begin with,” he said. “I never re-use passwords, and it’s hard to see what I could have done differently here. The flaw here was with T-Mobile mostly, but also with Instagram. It seems like by having the ability to change one’s [Instagram] password by email or by mobile alone negates the second factor and it becomes either/or from the attackers point of view.”

Sources close to the investigation say T-Mobile is investigating a current or former employee as the likely culprit. The mobile company also acknowledged that it does not currently send customers an email to the email address on file when SIM swaps take place. A T-Mobile spokesperson said the company was considering changing the current policy, which sends the customer a text message to alert them about the SIM swap.

“We take our customers privacy and security very seriously and we regret that this happened,” the company said in a written statement. “We notify our customers immediately when SIM changes occur, but currently we do not send those notifications via email. We are actively looking at ways to improve our processes in this area.”

In summary, when a SIM swap happens on a T-Mobile account, T-Mobile will send a text message to the phone equipped with the new SIM card. But obviously that does not help someone who is the target of a SIM swap scam.

As we can see, just taking T-Mobile’s advice to place a personal identification number (PIN) on your account to block number port-out scams does nothing to make it harder for thieves to pull off SIM swap scams.

Rather, T-Mobile says customers need to call in to the company’s customer support line and place a separate “SIM lock” on their account, which can only be removed if the customer shows up at a retail store with ID (or, presumably, anyone with a fake ID who also knows the target’s Social Security Number and date of birth).

I checked with the other carriers to see if they support locking the customer’s current SIM to the account on file. I suspect they do, and will update this piece when/if I hear back from them. In the meantime, it might be best just to phone up your carrier and ask.

Please note that a SIM lock on your mobile account is separate from a SIM PIN that you can set via your mobile phone’s operating system. A SIM PIN is essentially an additional layer of physical security that locks the current SIM to your device, requiring you to input a special PIN when the device is powered on in order to call, text or access your data plan on your phone. This feature can help block thieves from using your phone or accessing your data if you lose your phone, but it won’t stop thieves from physically swapping in their own SIM card.

iPhone users can follow these instructions to set or change a device’s SIM PIN. Android users can see this page. You may need to enter a carrier-specific default PIN before being able to change it. By default, the SIM PIN for all Verizon and AT&T phones is “1111;” for T-Mobile and Sprint it should default to “1234.”

Be advised, however, that if you forget your SIM PIN and enter the wrong PIN too many times, you may end up having to contact your wireless carrier to obtain a special “personal unlocking key” (PUK).

At the very least, if you haven’t already done so please take a moment to place a port block PIN on your account. This story explains exactly how to do that.

Also, consider reviewing twofactorauth.org to see whether you are taking full advantage of any multi-factor authentication offerings so that your various accounts can’t be trivially hijacked if an attacker happens to guess, steal, phish or otherwise know your password.

One-time login codes produced by mobile apps such as Authy, Duo or Google Authenticator are more secure than one-time codes sent via automated phone call or text — mainly because crooks can’t steal these codes if they succeed in porting your mobile number to another service or by executing a SIM swap on your mobile account [full disclosure: Duo is an advertiser on this blog].
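The reason app-generated codes resist these attacks is that they are derived on the device itself from a shared secret and the current time, per the TOTP standard (RFC 6238), so nothing useful ever crosses the carrier’s network. A minimal sketch of the standard algorithm:

```python
import hashlib
import hmac
import struct
import time

def totp(secret: bytes, for_time: int, step: int = 30, digits: int = 6) -> str:
    # RFC 6238: HMAC the number of 30-second intervals since the
    # Unix epoch, then dynamically truncate to a short decimal code.
    counter = struct.pack(">Q", for_time // step)
    mac = hmac.new(secret, counter, hashlib.sha1).digest()
    offset = mac[-1] & 0x0F
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)

# The code rotates every 30 seconds; an attacker who hijacks your
# phone number learns nothing about the secret stored in the app.
print(totp(b"12345678901234567890", int(time.time())))
```

A SIM swap or port-out moves your phone number, not the secret seeded into the authenticator app, which is exactly why app-based codes survive attacks that SMS codes do not.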

LocationSmart, a U.S.-based company that acts as an aggregator of real-time data about the precise location of mobile phone devices, has been leaking this information to anyone via a buggy component of its Web site — without the need for any password or other form of authentication or authorization — KrebsOnSecurity has learned. The company took the vulnerable service offline early this afternoon after being contacted by KrebsOnSecurity, which verified that it could be used to reveal the location of any AT&T, Sprint, T-Mobile or Verizon phone in the United States to an accuracy of within a few hundred yards.

On May 10, The New York Times broke the news that a different cell phone location tracking company called Securus Technologies had been selling or giving away location data on customers of virtually any major mobile network provider to a sheriff’s office in Mississippi County, Mo.

On May 15, ZDnet.com ran a piece saying that Securus was getting its data through an intermediary — Carlsbad, CA-based LocationSmart.

Wednesday afternoon Motherboard published another bombshell: A hacker had broken into the servers of Securus and stolen 2,800 usernames, email addresses, phone numbers and hashed passwords of authorized Securus users. Most of the stolen credentials reportedly belonged to law enforcement officers across the country — stretching from 2011 up to this year.

Several hours before the Motherboard story went live, KrebsOnSecurity heard from an independent security researcher who’d read the coverage of Securus and LocationSmart and had been poking around a demo tool that LocationSmart makes available on its Web site for potential customers to try out its mobile location technology.

LocationSmart’s demo is a free service that allows anyone to see the approximate location of their own mobile phone, just by entering their name, email address and phone number into a form on the site. LocationSmart then texts the phone number supplied by the user and requests permission to ping that device’s nearest cellular network tower.

Once that consent is obtained, LocationSmart texts the subscriber their approximate longitude and latitude, plotting the coordinates on a Google Street View map. [It also potentially collects and stores a great deal of technical data about your mobile device. For example, according to their privacy policy that information “may include, but is not limited to, device latitude/longitude, accuracy, heading, speed, and altitude, cell tower, Wi-Fi access point, or IP address information”].

But according to Robert Xiao, a PhD candidate at Carnegie Mellon University‘s Human-Computer Interaction Institute, this same service failed to perform basic checks to prevent anonymous and unauthorized queries. Translation: Anyone with a modicum of knowledge about how Web sites work could abuse the LocationSmart demo site to figure out how to conduct mobile number location lookups at will, all without ever having to supply a password or other credentials.

“I stumbled upon this almost by accident, and it wasn’t terribly hard to do,” Xiao said. “This is something anyone could discover with minimal effort. And the gist of it is I can track most people’s cell phones without their consent.”

Xiao said his tests showed he could reliably query LocationSmart’s service to ping the cell phone tower closest to a subscriber’s mobile device. By pinging a friend’s mobile network several times over a few minutes while the friend was moving, he was able to plug the returned coordinates into Google Maps and track the friend’s directional movement.

“This is really creepy stuff,” Xiao said, adding that he’d also successfully tested the vulnerable service against one Telus Mobility mobile customer in Canada who volunteered to be found.

Before LocationSmart’s demo was taken offline today, KrebsOnSecurity pinged five different trusted sources, all of whom gave consent to have Xiao determine the whereabouts of their cell phones. Within a few seconds of querying the public LocationSmart service, Xiao was able to determine the near-exact locations of the mobile phones belonging to all five of my sources.

LocationSmart’s demo page.

One of those sources said the longitude and latitude returned by Xiao’s queries came within 100 yards of their then-current location. Another source said the location found by the researcher was 1.5 miles away from his actual location. The remaining three sources said the locations returned for their phones were off by between approximately 1/5 and 1/3 of a mile at the time.

Reached for comment via phone, LocationSmart Founder and CEO Mario Proietti said the company was investigating.

“We don’t give away data,” Proietti said. “We make it available for legitimate and authorized purposes. It’s based on legitimate and authorized use of location data that only takes place on consent. We take privacy seriously and we’ll review all facts and look into them.”

LocationSmart’s home page features the corporate logos of all four of the major wireless providers, as well as companies like Google, Neustar, ThreatMetrix, and U.S. Cellular. The company says its technologies help businesses keep track of remote employees and corporate assets, and that it helps mobile advertisers and marketers serve consumers with “geo-relevant promotions.”

LocationSmart’s home page lists many partners.

It’s not clear exactly how long LocationSmart has offered its demo service or for how long the service has been so permissive; this link from archive.org suggests it dates back to at least January 2017.

LocationSmart’s privacy policy says the company has security measures in place “to protect our site from the loss or misuse of information that we have collected. Our servers are protected by firewalls and are physically located in secure data facilities to further increase security. While no computer is 100% safe from outside attacks, we believe that the steps we have taken to protect your personal information drastically reduce the likelihood of security problems to a level appropriate to the type of information involved.”

But these assurances may ring hollow to anyone with a cell phone who’s concerned about having their physical location revealed at any time. The component of LocationSmart’s Web site that can be abused to look up mobile location data at will is an insecure “application programming interface” or API — an interactive feature designed to display data in response to specific queries by Web site visitors. Although LocationSmart’s demo page required users to consent to having their phone located by the service, LocationSmart apparently did nothing to prevent or authenticate direct interaction with the API itself.
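What the missing control might look like on the server side: any request to the data-returning endpoint gets checked for a valid, account-bound credential before any subscriber data is touched. A hypothetical sketch (every name here is invented; this is not LocationSmart’s actual API):

```python
import hmac

# Invented key store: each paying, authorized customer account maps to
# an API key issued out-of-band.
VALID_KEYS = {"demo-customer": "k3y-demo-customer-0001"}

def handle_lookup(params: dict) -> dict:
    expected = VALID_KEYS.get(params.get("account", ""))
    supplied = params.get("api_key", "")
    # Reject unknown accounts and bad keys; compare_digest resists
    # timing attacks on the key comparison itself.
    if expected is None or not hmac.compare_digest(supplied, expected):
        return {"status": 401, "error": "authentication required"}
    # Only an authenticated, authorized customer reaches this point;
    # the actual location query would happen here.
    return {"status": 200, "result": "location query authorized"}

print(handle_lookup({"account": "demo-customer", "api_key": "wrong"}))
```

The demo page’s consent form gated only the browser UI; because the API behind it performed no check like the one above, anyone who inspected the page’s network traffic could query the endpoint directly.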

API authentication weaknesses are not uncommon, but they can lead to the exposure of sensitive data on a great many people in a short period of time. In April 2018, KrebsOnSecurity broke the story of an API at the Web site of fast-casual bakery chain PaneraBread.com that exposed the names, email and physical addresses, birthdays and last four digits of credit cards on file for tens of millions of customers who’d signed up for an account at PaneraBread to order food online.

In a May 9 letter sent to the top four wireless carriers and to the U.S. Federal Communications Commission in the wake of revelations about Securus’ alleged practices, Sen. Ron Wyden (D-Ore.) urged all parties to take “proactive steps to prevent the unrestricted disclosure and potential abuse of private customer data.”

“Securus informed my office that it purchases real-time location information on AT&T’s customers — through a third party location aggregator that has a commercial relationship with the major wireless carriers — and routinely shares that information with its government clients,” Wyden wrote. “This practice skirts wireless carrier’s legal obligation to be the sole conduit by which the government may conduct surveillance of Americans’ phone records, and needlessly exposes millions of Americans to potential abuse and unchecked surveillance by the government.”

Securus, which reportedly gets its cell phone location data from LocationSmart, told The New York Times that it requires customers to upload a legal document — such as a warrant or affidavit — and to certify that the activity was authorized. But in his letter, Wyden said “senior officials from Securus have confirmed to my office that it never checks the legitimacy of those uploaded documents to determine whether they are in fact court orders and has dismissed suggestions that it is obligated to do so.”

Securus did not respond to requests for comment.

THE CARRIERS RESPOND

It remains unclear what, if anything, AT&T, Sprint, T-Mobile and Verizon plan to do about any of this. A third-party firm leaking customer location information not only would almost certainly violate each mobile provider’s own stated privacy policies, but the real-time exposure of this data poses serious privacy and security risks for virtually all U.S. mobile customers (and perhaps beyond, although all my willing subjects were inside the United States).

None of the major carriers would confirm or deny a formal business relationship with LocationSmart, despite LocationSmart listing them each by corporate logo on its Web site.

AT&T spokesperson Jim Greer said AT&T does not permit the sharing of location information without customer consent or a demand from law enforcement.

“If we learn that a vendor does not adhere to our policy we will take appropriate action,” Greer said.

T-Mobile referred me to their privacy policy, which says T-Mobile follows the “best practices” document (PDF) for subscriber location data as laid out by the CTIA, the international association for the wireless telecommunications industry.

A T-Mobile spokesperson said that after receiving Sen. Wyden’s letter, the company quickly shut down any transaction of customer location data to Securus.

“We are continuing to investigate this matter,” a T-Mobile spokesperson wrote via email. T-Mobile has not yet responded to requests specifically about LocationSmart.

Verizon also referred me to their privacy policy.

Sprint officials shared the following statement:

“Protecting our customers’ privacy and security is a top priority, and we are transparent about our Privacy Policy. To be clear, we do not share or sell consumers’ sensitive information to third parties. We share personally identifiable geo-location information only with customer consent or in response to a lawful request such as a validated court order from law enforcement.”

“We will answer the questions raised in Sen. Wyden’s letter directly through appropriate channels. However, it is important to note that Sprint’s relationship with Securus does not include data sharing, and is limited to supporting efforts to curb unlawful use of contraband cellphones in correctional facilities.”

WHAT NOW?

Stephanie Lacambra, a staff attorney with the nonprofit Electronic Frontier Foundation, said that wireless customers in the United States cannot opt out of location tracking by their own mobile providers. For starters, carriers constantly use this information to provide more reliable service to their customers. Also, by law wireless companies need to be able to ascertain at any time the approximate location of a customer’s phone in order to comply with emergency 911 regulations.

But unless and until Congress and federal regulators make it more clear how and whether customer location information can be shared with third-parties, mobile device customers may continue to have their location information potentially exposed by a host of third-party companies, Lacambra said.

“This is precisely why we have lobbied so hard for robust privacy protections for location information,” she said. “It really should be only that law enforcement is required to get a warrant for this stuff, and that’s the rule we’ve been trying to push for.”

Chris Calabrese is vice president of the Center for Democracy & Technology, a policy think tank in Washington, D.C. Calabrese said the current rules about mobile subscriber location information are governed by the Electronic Communications Privacy Act (ECPA), a law passed in 1986 that hasn’t been substantially updated since.

“The law here is really out of date,” Calabrese said. “But I think any processes that involve going to third parties who don’t verify that it’s a lawful or law enforcement request — and that don’t make sure the evidence behind that request is legitimate — are hugely problematic and they’re major privacy violations.”

“I would be very surprised if any mobile carrier doesn’t think location information should be treated sensitively, and I’m sure none of them want this information to be made public,” Calabrese continued. “My guess is the carriers are going to come down hard on this, because it’s sort of their worst nightmare come true. We all know that cell phones are portable tracking devices. There’s a sort of an implicit deal where we’re okay with it because we get lots of benefits from it, but we all also assume this information should be protected. But when it isn’t, that presents a major problem and I think these examples would be a spur for some sort of legislative intervention if they weren’t fixed very quickly.”

For his part, Xiao says we’re likely to see more leaks from location tracking companies like Securus and LocationSmart as long as the mobile carriers are providing third party companies any access to customer location information.

“We’re going to continue to see breaches like this happen until access to this data can be much more tightly controlled,” he said.

Much of the fraud involving counterfeit credit, ATM debit and retail gift cards relies on the ability of thieves to use cheap, widely available hardware to encode stolen data onto any card’s magnetic stripe. But new research suggests retailers and ATM operators could reliably detect counterfeit cards using a simple technology that flags cards which appear to have been altered by such tools.

A gift card purchased at retail with an unmasked PIN hidden behind a paper sleeve. Such PINs can be easily copied by an adversary, who waits until the card is purchased to steal the card’s funds. Image: University of Florida.

Researchers at the University of Florida found that account data encoded on legitimate cards is invariably written using quality-controlled, automated facilities that tend to imprint the information in uniform, consistent patterns.

Cloned cards, however, usually are created by hand with inexpensive encoding machines, and as a result feature far more variance or “jitter” in the placement of digital bits on the card’s stripe.
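
The jitter signal lends itself to a very simple detector. The sketch below is purely illustrative — the timing units, threshold and scoring function are my own stand-ins, not the University of Florida method — but it shows the core idea: score a stripe read by how unevenly its flux transitions are spaced.

```python
import random
import statistics

def jitter_score(transition_times):
    """Coefficient of variation of the gaps between flux transitions
    read off a card's magnetic stripe (higher = more jitter)."""
    gaps = [b - a for a, b in zip(transition_times, transition_times[1:])]
    return statistics.pstdev(gaps) / statistics.mean(gaps)

def looks_cloned(transition_times, threshold=0.05):
    # Illustrative cutoff, not a value from the paper.
    return jitter_score(transition_times) > threshold

# A factory-written stripe: near-perfectly uniform gaps.
legit = [i * 10.0 for i in range(50)]

# A hand-encoded clone: same data, but the cheap encoder places
# each transition with noticeable timing error.
random.seed(1)
clone, t = [], 0.0
for _ in range(50):
    clone.append(t)
    t += 10.0 + random.uniform(-2.0, 2.0)

print(looks_cloned(legit))  # False: jitter score is essentially zero
print(looks_cloned(clone))  # True: gap variance gives the clone away
```

A point-of-sale reader would compute the same kind of statistic from the analog waveform of a real swipe rather than from simulated timestamps.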

Gift cards can be extremely profitable and brand-building for retailers, but gift card fraud creates a very negative shopping experience for consumers and a costly conundrum for retailers. The FBI estimates that while gift card fraud makes up a small percentage of overall gift card sales and use, approximately $130 billion worth of gift cards are sold each year.

One of the most common forms of gift card fraud involves thieves tampering with cards inside the retailer’s store — before the cards are purchased by legitimate customers. Using a handheld card reader, crooks will swipe the stripe to record the card’s serial number and other data needed to duplicate the card.

If there is a PIN on the gift card packaging, the thieves record that as well. In many cases, the PIN is obscured by a scratch-off decal, but gift card thieves can easily scratch those off and then replace the material with identical or similar decals that are sold very cheaply by the roll online.

“They can buy big rolls of that online for almost nothing,” said Patrick Traynor, an associate professor of computer science at the University of Florida. “Retailers we’ve worked with have told us they’ve gone to their gift card racks and found tons of this scratch-off stuff on the ground near the racks.”

At this point the cards are still worthless because they haven’t yet been activated. But armed with the card’s serial number and PIN, thieves can simply monitor the gift card account at the retailer’s online portal and wait until the cards are paid for and activated at the checkout register by an unwitting shopper.

Once a card is activated, thieves can encode that card’s data onto any card with a magnetic stripe and use that counterfeit to purchase merchandise at the retailer. The stolen goods typically are then sold online or on the street. Meanwhile, the person who bought the card (or the person who received it as a gift) finds the card is drained of funds when they eventually get around to using it at a retail store.

The top two gift cards show signs that someone previously peeled back the protective sticker covering the redemption code. Image: Flint Gatwell.

Traynor and a team of five other University of Florida researchers partnered with retail giant WalMart to test their technology, which Traynor said can be easily and quite cheaply incorporated into point-of-sale systems at retail store cash registers. They said the WalMart trial demonstrated that the researchers’ technology distinguished legitimate gift cards from clones with up to 99.3 percent accuracy.

While impressive, that rate still means the technology could generate a “false positive” — erroneously flagging a legitimate customer as using a fraudulently obtained gift card — in as many as seven out of every 1,000 transactions, the flip side of 99.3 percent accuracy. But Traynor said the retailers they spoke with in testing their equipment all indicated they would welcome any additional tools to curb the incidence of gift card fraud.
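
The arithmetic behind that error rate is worth making explicit: a 99.3 percent accuracy ceiling leaves at most a 0.7 percent residual error, which — treating all of it as false positives, a worst-case simplification — works out to roughly one wrongly flagged swipe per 143 legitimate ones.

```python
accuracy = 0.993              # best accuracy reported in the WalMart trial
error_rate = 1 - accuracy     # at most 0.7% of swipes misjudged
print(round(1 / error_rate))  # roughly 143 legitimate swipes per false alarm, worst case
```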

“We’ve talked with quite a few retail loss prevention folks,” he said. “Most said even if they can simply flag the transaction and make a note of the person [presenting the cloned card] that this would be a win for them. Often, putting someone on notice that loss prevention is watching is enough to make them stop — at least at that store. From our discussions with a few big-box retailers, this kind of fraud is probably their newest big concern, although they don’t talk much about it publicly. If the attacker does any better than simply cloning the card to a blank white card, they’re pretty much powerless to stop the attack, and that’s a pretty consistent story behind closed doors.”

BEYOND GIFT CARDS

Traynor said the University of Florida team’s method works even more accurately in detecting counterfeit ATM and credit cards, thanks to the dramatic difference in jitter between bank-issued cards and those cloned by thieves.

The magnetic material on most gift cards bears a quality that’s known in the industry as “low coercivity.” The stripe on so-called “LoCo” cards is usually brown in color, and new data can be imprinted on them quite cheaply using a machine that emits a relatively low or weak magnetic field. Hotel room keys also rely on LoCo stripes, which is why they tend to so easily lose their charge (particularly when placed next to something else with a magnetic charge).

In contrast, “high coercivity” (HiCo) stripes like those found on bank-issued debit and credit cards are usually black in color, hold their charge much longer, and are far more durable than LoCo cards. The downside of HiCo cards is that they are more expensive to produce, often relying on complex machinery and sophisticated manufacturing processes that encode the account data in highly uniform patterns.

These graphics illustrate the difference between original and cloned cards. Source: University of Florida.

Traynor said tests indicate their technology can detect cloned bank cards with virtually zero false-positives. In fact, when the University of Florida team first began seeing positive results from their method, they originally pitched the technique as a way for banks to cut losses from ATM skimming and other forms of credit and debit card fraud.

Yet, Traynor said fellow academicians who reviewed their draft paper told them that banks probably wouldn’t invest in the technology because most financial institutions are counting on newer, more sophisticated chip-based (EMV) cards to eventually reduce counterfeit fraud losses.

“The original pitch on the paper was actually focused on credit cards, but academic reviewers were having trouble getting past EMV — as in, ‘EMV solves this and it’s universally deployed, so why is this necessary?’” Traynor said. “We just kept getting reviews back from other academics saying that credit and bank card fraud is a solved problem.”

The trouble is that virtually all chip cards still store account data in plain text on the magnetic stripe on the back of the card — mainly so that the cards can be used in ATM and retail locations that are not yet equipped to read chip-based cards. As a result, even European countries whose ATMs all require chip-based cards remain heavily targeted by skimming gangs because the data on the chip card’s magnetic stripe can still be copied by a skimmer and used by thieves in the United States.

The University of Florida researchers recently were featured in an Associated Press story about an anti-skimming technology they developed and dubbed the “Skim Reaper.” The device, which can be made cheaply using a 3D printer, fits into the mouth of an ATM’s card acceptance slot and can detect the presence of extra card reading devices that skimmer thieves may have fitted on top of or inside the cash machine.

The AP story quoted a New York Police Department financial crimes detective saying the Skim Reapers worked remarkably well in detecting the presence of ATM skimmers. But Traynor said many ATM operators and owners are simply uninterested in paying to upgrade their machines with their technology — in large part because the losses from ATM card counterfeiting are mostly assumed by consumers and financial institutions.

“We found this when we were talking around with the cops in New York City, that the incentive of an ATM bodega owner to upgrade an ATM is very low,” Traynor said. “Why should they go to that extent? Upgrades required to make these machines [chip-card compliant] are significant in cost, and the motivation is not necessarily there.”

Retailers also could choose to produce gift cards with embedded EMV chips that make the cards more expensive and difficult to counterfeit. But doing so likely would increase the cost of manufacturing by $2 to $3 per card, Traynor said.

“Putting a chip on the card dramatically increases the cost, so a $10 gift card might then have a $3 price added,” he said. “And you can imagine the reaction a customer might have when asked to pay $13 for a gift card that has a $10 face value.”

A copy of the University of Florida’s research paper is available here (PDF).

The FBI has compiled a list of recommendations for reducing the likelihood of being victimized by gift card fraud. For starters, when buying in-store don’t just pick cards right off the rack. Look for ones that are sealed in packaging or stored securely behind the checkout counter. Also check the scratch-off area on the back to look for any evidence of tampering.

Here are some other tips from the FBI:

-If possible, only buy cards online directly from the store or restaurant.
-If buying from a secondary gift card market website, check reviews and only buy from or sell to reputable dealers.
-Check the gift card balance before and after purchasing the card to verify the correct balance on the card.
-The re-seller of a gift card is responsible for ensuring the correct balance is on the gift card, not the merchant whose name is listed. If you are scammed, some merchants in some situations will replace the funds. Ask for, but don’t expect, help.
-When selling a gift card through an online marketplace, do not provide the buyer with the card’s PIN until the transaction is complete.
-When purchasing gift cards online, be leery of auction sites selling gift cards at a steep discount or in bulk.

I spent a few days last week speaking at and attending a conference on responding to identity theft. The forum was held in Florida, one of the major epicenters for identity fraud complaints in United States. One gripe I heard from several presenters was that identity thieves increasingly are finding ways to open new lines of credit for things like mobile phones on people who have already frozen their credit files with the big-three credit bureaus. Here’s a look at what may be going on, and how you can protect yourself.

Carrie Kerskie is director of the Identity Fraud Institute at Hodges University in Naples, and a big part of her job is helping local residents respond to identity theft and identity fraud complaints. Kerskie said she’s had multiple victims in her area recently complain of having cell phone accounts opened in their names even though they had already frozen their credit files at the big three credit bureaus — Equifax, Experian and Trans Union (as well as distant fourth bureau Innovis).

A credit freeze blocks potential creditors from being able to view or “pull” your credit file, making it far more difficult for identity thieves to apply for new lines of credit in your name. The process is designed so that a creditor should not be able to pull a copy of your report unless you first unfreeze the file.

A freeze works to protect one’s credit file only if a potential creditor (or ID thief) tries to open a new line of credit at a company that uses one of the big three bureaus or Innovis. But Kerskie’s investigation revealed that the mobile phone merchants weren’t checking with any of those four credit bureaus. Rather, the mobile providers that dinged the credit of Kerskie’s clients were making consumer credit queries with the National Consumer Telecommunications and Utilities Exchange (NCTUE), or nctue.com.

Source: nctue.com

“We’re finding that a lot of phone carriers — even some of the larger ones — are relying on NCTUE for credit checks,” Kerskie said. “Phone carriers, utilities, power, water, cable, any of those. They’re all starting to use this more.”

The NCTUE is a consumer reporting agency founded by mobile provider AT&T in 1997 that maintains data such as payment and account history, reported by telecommunication, pay TV and utility service providers that are members of NCTUE.

Who are the NCTUE’s members? If you call the 800-number that NCTUE makes available to get a free copy of your NCTUE credit report, the option for “more information” about the organization says there are four “exchanges” that feed into the NCTUE’s system: the NCTUE itself; something called “Centralized Credit Check Systems”; the New York Data Exchange; and the California Utility Exchange.

According to a partner solutions page at Verizon, the New York Data Exchange is a not-for-profit created in 1996 that provides participating local exchange carriers with access to local telecommunications service arrears (accounts that are unpaid) and final account information on residential end user accounts. The NYDE is operated by Equifax Credit Information Services Inc. (yes, that Equifax). Verizon is one of many telecom providers that use the NYDE (and recall that AT&T was the founder of NCTUE).

The California Utility Exchange collects customer payment data from dozens of local utilities in the state, and also is operated by Equifax (Equifax Information Services LLC).

Google has virtually no useful information available about an entity called Centralized Credit Check Systems. If anyone finds differently, please leave a note in the comments section.

When I did some more digging on the NCTUE, I discovered…wait for it…Equifax also is the sole contractor that manages the NCTUE database. The entity’s site is also hosted out of Equifax’s servers. Equifax’s current contract to provide this service expires in 2020, according to a press release posted in 2015 by Equifax.

RED LIGHT. GREEN LIGHT. RED LIGHT.

Fortunately, the NCTUE makes it fairly easy to obtain any records they may have on you. Simply phone them up at 1-866-349-5185 and provide your Social Security number and the numeric portion of your registered street address.

Assuming it can verify you with that information, the system then orders a credit report to be sent to the address on file. You can also request to be sent a free “risk score” assigned by the NCTUE for each credit file it maintains.

The NCTUE also offers a free online process for freezing one’s report. Perhaps unsurprisingly, however, the process for ordering a freeze through the NCTUE appears to be completely borked at the moment, thanks no doubt to Equifax’s well documented abysmal security practices.

Or it could all be part of a willful and negligent strategy to continue discouraging Americans from freezing their credit files.

On April 29, I had an occasion to visit Equifax’s credit freeze application page, and found that the site was being served with an expired SSL encryption certificate from Symantec. This happened because I browsed the site using Google Chrome, and Google announced a decision in September 2017 to no longer trust SSL certs issued by Symantec prior to June 1, 2016. Google said it would do this starting with Google Chrome version 66. It did not keep this plan a secret.
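
Chrome’s rule here boils down to a date check against the certificate chain. Here is a simplified model of that logic (the real implementation walks the full chain of trust against a list of known Symantec roots, not a substring of the issuer name):

```python
import datetime

# Chrome 66 stopped trusting Symantec-issued certificates dated before June 1, 2016.
SYMANTEC_DISTRUST_CUTOFF = datetime.datetime(2016, 6, 1)

def chrome66_trusts(issuer: str, not_before: datetime.datetime) -> bool:
    """Simplified sketch of Chrome 66's distrust rule: reject certificates
    that chain to Symantec and were issued before the cutoff date."""
    if "Symantec" in issuer and not_before < SYMANTEC_DISTRUST_CUTOFF:
        return False
    return True

# A pre-cutoff Symantec certificate: rejected with a full-page warning.
print(chrome66_trusts("Symantec Class 3 Secure Server CA", datetime.datetime(2015, 9, 1)))  # False
# A certificate from another CA is unaffected by this particular rule.
print(chrome66_trusts("DigiCert SHA2 Secure Server CA", datetime.datetime(2018, 1, 10)))    # True
```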

On April 18, Google pushed out Chrome 66. Despite all of the advance warnings, the security people at Equifax apparently missed the memo and in so doing probably scared most people away from its freeze page for several weeks (Equifax fixed the problem on its site sometime after I tweeted about the borked certificate on April 29). That’s because when one uses Chrome to visit a site whose encryption certificate is validated by one of these unsupported Symantec certs, Chrome puts up a dire security warning that would almost certainly discourage most casual users from continuing.

The insecurity around Equifax’s own freeze site likely discouraged people from requesting a freeze on their credit files.

On May 7, when I visited the NCTUE’s page for freezing my credit file with them I was presented with the very same connection SSL security alert from Chrome, warning that any data I share with the NCTUE’s freeze page might not be encrypted in transit.

The security alert generated by Chrome when visiting the freeze page for the NCTUE, whose database (and apparently web site) also is run by Equifax.

When I clicked through past the warnings and proceeded to the insecure NCTUE freeze form (which is worded and stylized almost exactly like Equifax’s credit freeze page and is hosted on Equifax’s own servers), I filled out the required information to freeze my NCTUE file. No dice. I was unceremoniously declined the opportunity to do that. “We are currently unable to service your request,” read the resulting Web page, without suggesting alternative means of obtaining its report. “Please try again later.”

The message I received after trying to freeze my file with the NCTUE.

This scenario will no doubt be familiar to many readers who tried (and failed in a similar fashion) to file freezes on their credit files with Equifax after the company divulged that hackers had relieved it of Social Security numbers, addresses, dates of birth and other sensitive data on nearly 150 million Americans last September. I attempted to file a freeze via the NCTUE’s site with no fewer than three different browsers, and each time the form reset itself upon submission or took me to a failure page.

So let’s review. Many people who have succeeded in freezing their credit files with Equifax have nonetheless had their identities stolen and new accounts opened in their names thanks to a lesser-known credit bureau that seems to rely entirely on credit checking entities operated by Equifax.

“This just reinforces the fact that we are no longer in control of our information,” said Kerskie, who is also a founding member of Griffon Force, a Florida-based identity theft restoration firm.

I find it difficult to disagree with Kerskie’s statement. What chaps me about this discovery is that countless Americans are in many cases plunking down $2-$10 per bureau to freeze their credit files, and yet a huge player in this market is able to continue to profit off of identity theft on those same Americans.

EQUIFAX RESPONDS

I reached out to Equifax to understand why the credit bureau operating the NCTUE’s data exchange (and those of at least two other contributing members) couldn’t detect when consumers had placed credit freezes with Equifax. As you might imagine, Equifax responded that NCTUE is a separate entity from Equifax. It also said NCTUE doesn’t include Equifax credit information.

Here is Equifax’s full statement on the matter:

-The National Consumer Telecom and Utilities Exchange, Inc. (NCTUE) is a nationwide, member-owned and operated, FCRA-compliant consumer reporting agency that houses both positive and negative consumer payment data reported by its members, such as new connect requests, payment history, and historical account status and/or fraudulent accounts. NCTUE members are providers of telecommunications and pay/satellite television services to consumers, as well as utilities providing gas, electrical and water services to consumers.

-This information is available to NCTUE members and, on a limited basis, to certain other customers of NCTUE’s contracted exchange operator, Equifax Information Services, LLC (Equifax) – typically financial institutions and insurance providers. NCTUE does not include Equifax credit information, and Equifax is not a member of NCTUE, nor does Equifax own any aspect of NCTUE. NCTUE does not provide telecommunications, pay/satellite television or utility services to consumers, and consumers do not apply for those services with NCTUE.

-As a consumer reporting agency, NCTUE places and lifts security freezes on consumer files in accordance with the state law applicable to the consumer. NCTUE also maintains a voluntary security freeze program for consumers who live in states which currently do not have a security freeze law.

-NCTUE is a separate consumer reporting agency from Equifax and therefore a consumer would need to independently place and lift a freeze with NCTUE.

-While state laws vary in the manner in which consumers can place or lift a security freeze (temporarily or permanently), if a consumer has a security freeze on his or her NCTUE file and has not temporarily lifted the freeze, a creditor or other service provider, such as a mobile phone provider, generally cannot access that consumer’s NCTUE report in connection with a new account opening. However, the creditor or provider may be able to access that consumer’s credit report from another consumer reporting agency in order to open a new account, or decide to open the account without accessing a credit report from any consumer reporting agency, such as NCTUE or Equifax.

PLACING THE FREEZE

I was finally able to successfully place a freeze on my NCTUE report by calling their 800-number — 1-866-349-5355. The message said the NCTUE might charge a fee for placing or lifting the freeze, in accordance with state freeze laws.

Depending on your state of residence, the cost of placing a freeze on your credit file at Equifax, Experian or Trans Union can run between $3 and $10 per credit bureau, and in many states the bureaus also can charge fees for temporarily “thawing” or removing a freeze (according to a list published by Consumers Union, residents of four states — Indiana, Maine, North Carolina and South Carolina — do not need to pay to place, thaw or lift a freeze).

While my home state of Virginia allows the bureaus to charge $10 to place a freeze, for whatever reason the NCTUE did not assess that fee when I placed my freeze request with them. When and if your freeze request gets approved through the NCTUE’s automated phone system, make sure you have pen and paper or a keyboard handy to jot down the freeze PIN, which you will need in the event you ever wish to lift the freeze. The system read my freeze PIN so quickly that I had to hit “*” on the dial pad several times to have the message repeated.

It’s frankly laughable that consumers should ever have to pay to freeze their credit files at all, and yet a recent study indicates that almost 20 percent of Americans chose to do so at one or more of the three major credit bureaus since Equifax announced its breach last fall. The total estimated cost to consumers in freeze fees? $1.4 billion. With a freeze on your files, the major credit bureaus stand to lose about one dollar for each time they might have been able to sell your credit report to a potential creditor, or potential identity thief.

But wishing Equifax and its ilk were finally and completely exposed for the digital dinosaurs that they are will not change the fact that if you care about your identity, you now may have another freeze to worry about. If you decide to take this step, please sound off about your experience in the comments below.

Microsoft today released a bundle of security updates to fix at least 67 holes in its various Windows operating systems and related software, including one dangerous flaw that Microsoft warns is actively being exploited. Meanwhile, as it usually does on Microsoft’s Patch Tuesday — the second Tuesday of each month — Adobe has a new Flash Player update that addresses a single but critical security weakness.

First, the Flash Tuesday update, which brings Flash Player to v. 29.0.0.171. Some (present company included) would argue that Flash Player is itself “a single but critical security weakness.” Nevertheless, Google Chrome and Internet Explorer/Edge ship with their own versions of Flash, which get updated automatically when new versions of these browsers are made available.

You can check if your browser has Flash installed/enabled and what version it’s at by pointing your browser at this link. Adobe is phasing out Flash entirely by 2020, but most of the major browsers already take steps to hobble Flash. And with good reason: It’s a major security liability.

Google Chrome blocks Flash from running on all but a handful of popular sites, and then only after user approval. Disabling Flash in Chrome is simple enough. Paste “chrome://settings/content” into a Chrome browser bar and then select “Flash” from the list of items. By default it should be set to “Ask first” before running Flash, although users also can disable Flash entirely here or whitelist/blacklist specific sites. If you spot an upward pointing arrow to the right of the address bar in Chrome, that means there’s an update to the browser available, and it’s time to restart Chrome.

For Windows users with Mozilla Firefox installed, the browser prompts users to enable Flash on a per-site basis.

Through the end of 2017 and into 2018, Microsoft Edge will continue to ask users for permission to run Flash on most sites the first time the site is visited, and will remember the user’s preference on subsequent visits. Microsoft users will need to install this month’s batch of patches to get the latest Flash version for IE/Edge, where most of the critical updates in this month’s patch batch reside.

According to security vendor Qualys, one Microsoft patch in particular deserves priority over others in organizations that are testing updates before deploying them: CVE-2018-8174 involves a problem with the way the Windows scripting engine handles certain objects, and Microsoft says this bug is already being exploited in active attacks.

Some other useful sources of information on today’s updates include the Zero Day Initiative and Bleeping Computer. And of course there is always the Microsoft Security Update Guide.

As always, please feel free to leave a comment below if you experience any issues applying any of these updates.

A monster distributed denial-of-service attack (DDoS) against KrebsOnSecurity.com in 2016 knocked this site offline for nearly four days. The attack was executed through a network of hacked “Internet of Things” (IoT) devices such as Internet routers, security cameras and digital video recorders. A new study that tries to measure the direct cost of that one attack for IoT device users whose machines were swept up in the assault found that it may have cost device owners a total of $323,973.75 in excess power and added bandwidth consumption.

My bad.

But really, none of it was my fault at all. It was mostly the fault of IoT makers for shipping cheap, poorly designed products (insecure by default), and the fault of customers who bought these IoT things and connected them to the Internet without changing the devices’ factory settings (passwords, at least).

The botnet that hit my site in Sept. 2016 was powered by the first version of Mirai, a malware strain that wriggles into dozens of IoT devices left exposed to the Internet and running with factory-default settings and passwords. Systems infected with Mirai are forced to scan the Internet for other vulnerable IoT devices, but they’re just as often used to help launch punishing DDoS attacks.
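
The weakness Mirai exploits is mundane: a short, hard-coded list of factory logins tried against every device it finds. A toy sketch of the idea (the credential list below is a generic illustration, not Mirai’s actual table, which held a few dozen such pairs):

```python
# Common factory-default logins of the sort Mirai tried over telnet.
DEFAULT_CREDS = {
    ("admin", "admin"),
    ("root", "root"),
    ("root", "12345"),
    ("admin", "password"),
}

def trivially_enslavable(username: str, password: str) -> bool:
    """A device whose login still matches a known factory default
    falls to a dictionary attack in a handful of guesses."""
    return (username, password) in DEFAULT_CREDS

print(trivially_enslavable("root", "12345"))      # True: factory setting never changed
print(trivially_enslavable("root", "x9!fT2#qL"))  # False: the owner set a real password
```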

By the time of the first Mirai attack on this site, the young masterminds behind Mirai had already enslaved more than 600,000 IoT devices for their DDoS armies. But according to an interview with one of the admitted and convicted co-authors of Mirai, the part of their botnet that pounded my site was a mere slice of firepower they’d sold for a few hundred bucks to a willing buyer. The attack army sold to this ne’er-do-well harnessed the power of just 24,000 Mirai-infected systems (mostly security cameras and DVRs, but some routers, too).

These 24,000 Mirai devices clobbered my site for several days with data blasts of up to 620 Gbps. The attack was so bad that my pro-bono DDoS protection provider at the time — Akamai — had to let me go because the data firehose pointed at my site was starting to cause real pain for their paying customers. Akamai later estimated that the cost of maintaining protection for my site in the face of that onslaught would have run into the millions of dollars.

We’re getting better at figuring out the financial costs of DDoS attacks to the victims (5-, 6- or 7-digit dollar losses) and to the perpetrators (zero to hundreds of dollars). According to a report released this year by DDoS mitigation giant NETSCOUT Arbor, fifty-six percent of organizations last year experienced a financial impact from DDoS attacks of between $10,000 and $100,000, almost double the proportion from 2016.

But what if there were also a way to work out the cost of these attacks to the users of the IoT devices which get snared by DDoS botnets like Mirai? That’s what researchers at the University of California, Berkeley School of Information sought to determine in their new paper, “rIoT: Quantifying Consumer Costs of Insecure Internet of Things Devices.”

If we accept the UC Berkeley team’s assumptions about costs borne by hacked IoT device users (more on that in a bit), the total cost of added bandwidth and energy consumption from the botnet that hit my site came to $323,973.95. This may sound like a lot of money, but remember that broken down among 24,000 attacking drones the per-device cost comes to just $13.50.
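The per-device figure is straightforward to check against the paper's headline number:

```python
# Back-of-the-envelope check of the UC Berkeley per-device figure,
# using the totals cited above for the 2016 attack on this site.
total_cost = 323_973.95   # estimated added bandwidth + energy cost, USD
devices = 24_000          # Mirai-infected devices used in the attack

per_device = round(total_cost / devices, 2)
print(f"Per-device cost: ${per_device}")  # Per-device cost: $13.5
```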

So let’s review: The attacker who wanted to clobber my site paid a few hundred dollars to rent a tiny portion of a much bigger Mirai crime machine. That attack would likely have cost millions of dollars to mitigate. The consumers in possession of the IoT devices that did the attacking probably realized a few dollars in losses each, if that. Perhaps forever unmeasured are the many Web sites and Internet users whose connection speeds are often collateral damage in DDoS attacks.

Image: UC Berkeley.

Anyone noticing a slight asymmetry here in either costs or incentives? IoT security is what’s known as an “externality,” a term used to describe “positive or negative consequences to third parties that result from an economic transaction. When one party does not bear the full costs of its actions, it has inadequate incentives to avoid actions that incur those costs.”

In many cases negative externalities are synonymous with problems that the free market has a hard time rewarding individuals or companies for fixing or ameliorating, much like environmental pollution. The common theme with externalities is that the pain points to fix the problem are so diffuse and the costs borne by the problem so distributed across international borders that doing something meaningful about it often takes a global effort with many stakeholders — who can hopefully settle upon concrete steps for action and metrics to measure success.

The paper’s authors explain the misaligned incentives on two sides of the IoT security problem:

-“On the manufacturer side, many devices run lightweight Linux-based operating systems that open doors for hackers. Some consumer IoT devices implement minimal security. For example, device manufacturers may use default username and password credentials to access the device. Such design decisions simplify device setup and troubleshooting, but they also leave the device open to exploitation by hackers with access to the publicly-available or guessable credentials.”

-“Consumers who expect IoT devices to act like user-friendly ‘plug-and-play’ conveniences may have sufficient intuition to use the device but insufficient technical knowledge to protect or update it. Externalities may arise out of information asymmetries caused by hidden information or misaligned incentives. Hidden information occurs when consumers cannot discern product characteristics and, thus, are unable to purchase products that reflect their preferences. When consumers are unable to observe the security qualities of software, they instead purchase products based solely on price, and the overall quality of software in the market suffers.”

The UC Berkeley researchers concede that their experiments — in which they measured the power output and bandwidth consumption of various IoT devices they’d infected with a sandboxed version of Mirai — suggested that the scanning and DDoS activity prompted by a Mirai infection added only a negligible amount to the infected devices’ power consumption.

Thus, most of the loss figures cited for the 2016 attack rely heavily on estimates of how much the excess bandwidth created by a Mirai infection might cost users directly, and as such I suspect the $13.50 per machine estimates are on the high side.

No doubt, some Internet users get online via an Internet service provider that imposes a daily “bandwidth cap,” such that exceeding the allotted daily amount can incur overage fees and/or relegate the customer to a slower, throttled connection for some period afterward.

But for a majority of high-speed Internet users, the added bandwidth use from a router or other IoT device on the network being infected with Mirai probably wouldn’t show up as an added line charge on their monthly bills. I asked the researchers about the considerable wiggle factor here:

“Regarding bandwidth consumption, the cost may not ever show up on a consumer’s bill, especially if the consumer has no bandwidth cap,” reads an email from the UC Berkeley researchers who wrote the report, including Kim Fong, Kurt Hepler, Rohit Raghavan and Peter Rowland.

“We debated a lot on how to best determine and present bandwidth costs, as it does vary widely among users and ISPs,” they continued. “Costs are more defined in cases where bots cause users to exceed their monthly cap. But even if a consumer doesn’t directly pay a few extra dollars at the end of the month, the infected device is consuming actual bandwidth that must be supplied/serviced by the ISP. And it’s not unreasonable to assume that ISPs will eventually pass their increased costs onto consumers as higher monthly fees, etc. It’s difficult to quantify the consumer-side costs of unauthorized use — which is likely why there’s not much existing work — and our stats are definitely an estimate, but we feel it’s helpful in starting the discussion on how to quantify these costs.”

Measuring bandwidth and energy consumption may turn out to be a useful and accepted tool to help more accurately measure the full costs of DDoS attacks. I’d love to see these tests run against a broader range of IoT devices in a much larger simulated environment.

If the Berkeley method is refined enough to become accepted as one of many ways to measure actual losses from a DDoS attack, the reporting of such figures could make these crimes more likely to be prosecuted.

Many DDoS attack investigations go nowhere because targets of these attacks fail to come forward or press charges, making it difficult for prosecutors to prove any real economic harm was done. Since many of these investigations die on the vine for a lack of financial damages reaching certain law enforcement thresholds to justify a federal prosecution (often $50,000 – $100,000), factoring in estimates of the cost to hacked machine owners involved in each attack could change that math.

But the biggest levers for throttling the DDoS problem are in the hands of the people running the world’s largest ISPs, hosting providers and bandwidth peering points on the Internet today. Some of those levers I detailed in the “Shaming the Spoofers” section of The Democratization of Censorship, the first post I wrote after the attack and after Google had brought this site back online under its Project Shield program.

By the way, we should probably stop referring to IoT devices as “smart” when they start misbehaving within three minutes of being plugged into an Internet connection. That’s about how long your average cheapo, factory-default security camera plugged into the Internet has before getting successfully taken over by Mirai. In short, dumb IoT devices are those that don’t make it easy for owners to use them safely without being a nuisance or harm to themselves or others.

Maybe what we need to fight this onslaught of dumb devices are more network operators turning to ideas like IDIoT, a network policy enforcement architecture for consumer IoT devices that was first proposed in December 2017.  The goal of IDIoT is to restrict the network capabilities of IoT devices to only what is essential for regular device operation. For example, it might be okay for network cameras to upload a video file somewhere, but it’s definitely not okay for that camera to then go scanning the Web for other cameras to infect and enlist in DDoS attacks.
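The camera example above can be sketched as an allowlist check. To be clear, the rules and device names below are hypothetical illustrations of the IDIoT idea, not code from the actual proposal:

```python
# Sketch of an IDIoT-style policy: each device class is permitted only
# the network flows essential to its function. Everything else — such as
# the telnet scanning (ports 23/2323) that Mirai relies on — is denied.
# Hostnames and device classes here are made up for illustration.

ALLOWED = {
    "camera": {("upload.example-cloud.com", 443)},    # video upload only
    "thermostat": {("api.example-vendor.com", 443)},
}

def permit(device_class: str, dest_host: str, dest_port: int) -> bool:
    """Return True only if this flow is on the device's allowlist."""
    return (dest_host, dest_port) in ALLOWED.get(device_class, set())

# A camera uploading video is fine; the same camera probing telnet is not.
print(permit("camera", "upload.example-cloud.com", 443))  # True
print(permit("camera", "203.0.113.7", 23))                # False
```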

So what does all this mean to you? That depends on how many IoT things you and your family and friends are plugging into the Internet and your/their level of knowledge about how to secure and maintain these devices. Here’s a primer on minimizing the chances that your orbit of IoT things become a security liability for you or for the Internet at large.


Twitter just asked all 300+ million users to reset their passwords, citing the exposure of user passwords via a bug that stored passwords in plain text — without protecting them with any sort of encryption technology that would mask a Twitter user’s true password. The social media giant says it has fixed the bug and that so far its investigation hasn’t turned up any signs of a breach or that anyone misused the information. But if you have a Twitter account, please change your account password now.

Or if you don’t trust links in blogs like this (I get it) go to Twitter.com and change it from there. And then come back and read the rest of this. We’ll wait.

In a post to its company blog this afternoon, Twitter CTO Parag Agrawal wrote:

“When you set a password for your Twitter account, we use technology that masks it so no one at the company can see it. We recently identified a bug that stored passwords unmasked in an internal log. We have fixed the bug, and our investigation shows no indication of breach or misuse by anyone.

A message posted this afternoon (and still present as a pop-up) warns all users to change their passwords.

“Out of an abundance of caution, we ask that you consider changing your password on all services where you’ve used this password. You can change your Twitter password anytime by going to the password settings page.”

Agrawal explains that Twitter normally masks user passwords with an industry-standard hashing function called “bcrypt,” which replaces the user’s password with a random-looking set of numbers and letters that is stored in Twitter’s system.

“This allows our systems to validate your account credentials without revealing your password,” said Agrawal, who says the technology they’re using to mask user passwords is the industry standard.

“Due to a bug, passwords were written to an internal log before completing the hashing process,” he continued. “We found this error ourselves, removed the passwords, and are implementing plans to prevent this bug from happening again.”
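The bug class Agrawal describes is easy to picture: the plaintext touches a log before the hashing step runs. Twitter uses bcrypt; the sketch below substitutes the standard library’s PBKDF2 purely so it runs without third-party packages — the point is what gets stored versus what must never be written anywhere:

```python
# Only the (salt, digest) pair should ever be stored or logged; the bug
# Twitter described wrote the raw password to an internal log *before*
# this hashing step. PBKDF2 stands in for bcrypt here for portability.
import hashlib
import os

def hash_password(password: str) -> tuple[bytes, bytes]:
    """Return (salt, digest) — the only values safe to persist."""
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)
    return salt, digest

def verify(password: str, salt: bytes, digest: bytes) -> bool:
    """Validate credentials without ever storing the plaintext."""
    return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000) == digest

salt, digest = hash_password("hunter2")
print(verify("hunter2", salt, digest))  # True
print(verify("wrong", salt, digest))    # False
```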

Agrawal wrote that while Twitter has no reason to believe password information ever left Twitter’s systems or was misused by anyone, the company is still urging all Twitter users to reset their passwords NOW.

A letter to all Twitter users posted by Twitter CTO Parag Agrawal

Twitter advises:
-Change your password on Twitter and on any other service where you may have used the same password.
-Use a strong password that you don’t reuse on other websites.
-Enable login verification, also known as two factor authentication. This is the single best action you can take to increase your account security.
-Use a password manager to make sure you’re using strong, unique passwords everywhere.

This may be much ado about nothing disclosed out of an abundance of caution, or further investigation may reveal different findings. It doesn’t matter for right now: If you’re a Twitter user and if you didn’t take my advice to go change your password yet, go do it now! That is, if you can.

Twitter.com seems responsive now, but for a while on Thursday afternoon Twitter had problems displaying many user profiles, and even its homepage. Just a few moments ago, I tried to visit the Twitter CTO’s profile page and got this (ditto for Twitter.com):

What KrebsOnSecurity and other Twitter users got when we tried to visit twitter.com and the Twitter CTO’s profile page late in the afternoon ET on May 3, 2018.

If for some reason you can’t reach Twitter.com, try again soon. Put it on your to-do list or calendar for an hour from now. Seriously, do it now or very soon.

And please don’t use a password that you have used for any other account you use online, either in the past or in the present. A non-comprehensive list (note to self) of some password tips is here.

I have some more specific questions about this incident sent in to Twitter already. More updates as available.


Storing passwords in plaintext online is never a good idea, but it’s remarkable how many companies have employees who are doing just that using online collaboration tools like Trello.com. Last week, KrebsOnSecurity notified a host of companies that employees were using Trello to share passwords for sensitive internal resources. Among those put at risk by such activity were an insurance firm, a state government agency and ride-hailing service Uber.com.

By default, Trello boards for both enterprise and personal use are set to either private (requires a password to view the content) or team-visible only (approved members of the collaboration team can view).

But that doesn’t stop individual Trello users from manually sharing personal boards that include proprietary employer data, information that may be indexed by search engines and available to anyone with a Web browser. And unfortunately for organizations, far too many employees are posting sensitive internal passwords and other resources on their own personal Trello boards that are left open and exposed online.

A personal Trello board created by an Uber employee included passwords that might have exposed sensitive internal company operations.

KrebsOnSecurity spent the past week using Google to discover unprotected personal Trello boards that listed employer passwords and other sensitive data. Pictured above was a personal board set up by some Uber developers in the company’s Asia-Pacific region, which included passwords needed to view a host of internal Google Documents and images.

Uber spokesperson Melanie Ensign said the Trello board in question was made private shortly after being notified by this publication, among others.

“We had a handful of members in random parts of the world who didn’t realize they were openly sharing this information,” Ensign said. “We’ve reached out to these teams to remind people that these things need to happen behind internal resources. Employee awareness is an ongoing challenge, but so far we haven’t found any user data on any of the exposed boards. We may have dodged a bullet here, and it definitely could have been worse.”

Ensign said the initial report about the exposed board came through the company’s bug bounty program, and that the person who reported it would receive at least the minimum bounty amount — $500 — for reporting the incident (Uber hasn’t yet decided whether the award should be higher for this incident).

The Uber employees who created the board “used their work email to open a public board that they weren’t supposed to,” Ensign said. “They didn’t go through our enterprise account to create that. We first found out about it through our bug bounty program, and while it’s not technically a vulnerability in our products, it’s certainly something that we would pay for anyway. In this case, we got multiple reports about the same thing, but we always pay the first report we get.”

Of course, not every company has a bug bounty program to incentivize the discovery and private reporting of internal resources that may be inadvertently exposed online.

Screenshots that KrebsOnSecurity took of many far more shocking examples of employees posting dozens of passwords for sensitive internal resources are not pictured here because the affected parties still have not responded to alerts provided by this author.

Trello is one of many online collaboration tools made by Atlassian Corporation PLC, a technology company based in Sydney, Australia. Trello co-founder Michael Pryor said Trello boards are set to private by default and must be manually changed to public by the user.

“We strive to make sure public boards are being created intentionally and have built in safeguards to confirm the intention of a user before they make a board publicly visible,” Pryor said. “Additionally, visibility settings are displayed persistently on the top of every board.”

If a board is Team Visible it means any members of that team can view, join, and edit cards. If a board is Private, only members of that specific board can see it. If a board is Public, anyone with the link to the board can see it.
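The three visibility levels above amount to a simple access rule, which the failure mode described in this story makes concrete: flip a board to Public and the member checks stop mattering. This is an illustrative model, not Trello’s actual access-control code:

```python
# Minimal model of the Private / Team Visible / Public levels described
# above (illustrative only; not Trello's implementation).
from enum import Enum

class Visibility(Enum):
    PRIVATE = "private"   # only members of that specific board
    TEAM = "team"         # any member of the team
    PUBLIC = "public"     # anyone with the link

def can_view(vis: Visibility, is_board_member: bool, is_team_member: bool) -> bool:
    if vis is Visibility.PUBLIC:
        return True
    if vis is Visibility.TEAM:
        return is_team_member or is_board_member
    return is_board_member  # PRIVATE

# A public board is readable by anyone — exactly the exposure created
# when an employee makes a work board public.
print(can_view(Visibility.PUBLIC, False, False))   # True
print(can_view(Visibility.PRIVATE, False, True))   # False
```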

Interestingly, updates made to Trello’s privacy policy over the past weekend may make it easier for companies to locate personal boards created by employees and pull them behind company resources.

A Trello spokesperson said the privacy changes were made to bring the company’s policies in line with new EU privacy laws that come into enforcement later this month. But they also clarify that Trello’s enterprise features allow the enterprise admins to control the security and permissions around a work account an employee may have created before the enterprise product was purchased.

Uber spokesperson Ensign called the changes welcome.

“As a result companies will have more security control over Trello boards created by current/former employees and contractors, so we’re happy to see the change,” she said.


On two occasions this past year I’ve published stories here warning about the prospect that new European privacy regulations could result in more spams and scams ending up in your inbox. This post explains in a question and answer format some of the reasoning that went into that prediction, and responds to many of the criticisms leveled against it.

Before we get to the Q&A, a bit of background is in order. On May 25, 2018 the General Data Protection Regulation (GDPR) takes effect. The law, enacted by the European Parliament, requires companies to get affirmative consent for any personal information they collect on people within the European Union. Organizations that violate the GDPR could face fines of up to four percent of global annual revenues.

In response, the Internet Corporation for Assigned Names and Numbers (ICANN) — the nonprofit entity that manages the global domain name system — has proposed redacting key bits of personal data from WHOIS, the system for querying databases that store the registered users of domain names and blocks of Internet address ranges (IP addresses).

Under current ICANN rules, domain name registrars should collect and display a variety of data points when someone performs a WHOIS lookup on a given domain, such as the registrant’s name, address, email address and phone number. Most registrars offer a privacy protection service that shields this information from public WHOIS lookups; some registrars charge a nominal fee for this service, while others offer it for free.

But in a bid to help registrars comply with the GDPR, ICANN is moving forward on a plan to remove critical data elements from all public WHOIS records. Under the new system, registrars would collect all the same data points about their customers, yet limit how much of that information is made available via public WHOIS lookups.

The data to be redacted includes the name of the person who registered the domain, as well as their phone number, physical address and email address. The new rules would apply to all domain name registrars globally.
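The WHOIS protocol behind all of this is almost comically simple: open TCP port 43 on a WHOIS server, send the domain name followed by CRLF, and read back a plain-text record. The sketch below shows a query helper plus a parser for the key/value lines that redaction would blank out; the server name is a real Verisign WHOIS host for .com, but field labels vary by registrar, and the sample record is fabricated:

```python
# Query and parse WHOIS records. Field names differ between registrars,
# so the parser just collects "Key: Value" lines; the sample record at
# the bottom is made up for illustration.
import socket

def whois_query(domain: str, server: str = "whois.verisign-grs.com") -> str:
    """Raw WHOIS lookup: TCP port 43, domain + CRLF, read until EOF."""
    with socket.create_connection((server, 43), timeout=10) as s:
        s.sendall(domain.encode() + b"\r\n")
        chunks = []
        while data := s.recv(4096):
            chunks.append(data)
    return b"".join(chunks).decode(errors="replace")

def parse_whois(text: str) -> dict:
    """Collect the first occurrence of each 'Key: Value' line."""
    record = {}
    for line in text.splitlines():
        if ":" in line:
            key, _, value = line.partition(":")
            record.setdefault(key.strip(), value.strip())
    return record

sample = "Registrant Name: Jane Example\nRegistrant Email: jane@example.com"
print(parse_whois(sample)["Registrant Email"])  # jane@example.com
```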

ICANN has proposed creating an “accreditation system” that would vet access to personal data in WHOIS records for several groups, including journalists, security researchers, and law enforcement officials, as well as intellectual property rights holders who routinely use WHOIS records to combat piracy and trademark abuse.

But at an ICANN meeting in San Juan, Puerto Rico last month, ICANN representatives conceded that a proposal for how such a vetting system might work probably would not be ready until December 2018. Assuming ICANN meets that deadline, it could be many months after that before the hundreds of domain registrars around the world take steps to adopt the new measures.

In a series of posts on Twitter, I predicted that the WHOIS changes coming with GDPR will likely result in a noticeable increase in cybercrime — particularly in the form of phishing and other types of spam. In response to those tweets, several authors on Wednesday published an article for Georgia Tech’s Internet Governance Project titled, “WHOIS afraid of the dark? Truth or illusion, let’s know the difference when it comes to WHOIS.”

The following Q&A is intended to address many of the more misleading claims and assertions made in that article.

Cyber criminals don’t use their real information in WHOIS registrations, so what’s the big deal if the data currently available in WHOIS records is no longer in the public domain after May 25?

I can point to dozens of stories printed here — and probably hundreds elsewhere — that clearly demonstrate otherwise. Whether or not cyber crooks do provide their real information is beside the point. ANY information they provide — and especially information that they re-use across multiple domains and cybercrime campaigns — is invaluable to both grouping cybercriminal operations and in ultimately identifying who’s responsible for these activities.

To understand why data reuse in WHOIS records is so common among crooks, put yourself in the shoes of your average scammer or spammer — someone who has to register dozens or even hundreds or thousands of domains a week to ply their trade. Are you going to create hundreds or thousands of email addresses and fabricate as many personal details to make your WHOIS listings that much harder for researchers to track? The answer is that those who take this extraordinary step are by far and away the exception rather than the rule. Most simply reuse the same email address and phony address/phone/contact information across many domains as long as it remains profitable for them to do so.

This pattern of WHOIS data reuse doesn’t just extend across a few weeks or months. Very often, if a spammer, phisher or scammer can get away with re-using the same WHOIS details over many years without any deleterious effects to their operations, they will happily do so. Why they may do this is their own business, but nevertheless it makes WHOIS an incredibly powerful tool for tracking threat actors across multiple networks, registrars and Internet epochs.
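The investigative value of that reuse is easy to demonstrate: grouping domains on a shared registrant email collapses a sprawl of domains into per-actor clusters. The records below are fabricated examples:

```python
# Pivoting on a reused WHOIS email address to cluster a campaign.
# All domains and addresses here are fabricated for illustration.
from collections import defaultdict

records = [
    {"domain": "cheap-meds-1.example", "email": "spammer@example.net"},
    {"domain": "cheap-meds-2.example", "email": "spammer@example.net"},
    {"domain": "bank-login.example",   "email": "phisher@example.org"},
]

by_registrant = defaultdict(list)
for r in records:
    by_registrant[r["email"]].append(r["domain"])

# Two of the three domains collapse into a single actor's cluster.
print(by_registrant["spammer@example.net"])
# ['cheap-meds-1.example', 'cheap-meds-2.example']
```

Redact the email column and this pivot — one of the cheapest, most reliable moves in a researcher’s toolkit — stops working.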

All domain registrars offer free or a-la-carte privacy protection services that mask the personal information provided by the domain registrant. Most cybercriminals — unless they are dumb or lazy — are already taking advantage of these anyway, so it’s not clear why masking domain registration for everyone is going to change the status quo by much. 

It is true that some domain registrants do take advantage of WHOIS privacy services, but based on countless investigations I have conducted using WHOIS to uncover cybercrime businesses and operators, I’d wager that cybercrooks more often do not use these services. Not infrequently, when they do use WHOIS privacy options there are still gaps in coverage at some point in the domain’s history (such as when a registrant switches hosting providers) which are indexed by historic WHOIS records and that offer a brief window of visibility into the details behind the registration.

This is demonstrably true even for organized cybercrime groups and for nation state actors, and these are arguably some of the most sophisticated and savvy cybercriminals out there.

It’s worth adding that if so many cybercrooks seem nonchalant about adopting WHOIS privacy services it may well be because they reside in countries where the rule of law is not well-established, or their host country doesn’t particularly discourage their activities so long as they’re not violating the golden rule — namely, targeting people in their own backyard. And so they may not particularly care about covering their tracks. Or in other cases they do care, but nevertheless make mistakes or get sloppy at some point, as most cybercriminals do.

The GDPR does not apply to businesses — only to individuals — so there is no reason researchers or anyone else should be unable to find domain registration details for organizations and companies in the WHOIS database after May 25, right?

It is true that the European privacy regulations as they relate to WHOIS records do not apply to businesses registering domain names. However, the domain registrar industry — which operates on razor-thin profit margins and which has long sought to be free from any WHOIS requirements or accountability whatsoever — won’t exactly be tripping over themselves to add more complexity to their WHOIS efforts just to make a distinction between businesses and individuals.

As a result, registrars simply won’t make that distinction because there is no mandate that they must. They’ll just adopt the same WHOIS data collection and display policies across the board, regardless of whether the WHOIS details for a given domain suggest that the registrant is a business or an individual.

But the GDPR only applies to data collected about people in Europe, so why should this impact WHOIS registration details collected on people who are outside of Europe?

Again, domain registrars are the ones collecting WHOIS data, and they are most unlikely to develop WHOIS record collection and dissemination policies that seek to differentiate between entities covered by GDPR and those that may not be. Such an attempt would be fraught with legal and monetary complications that they simply will not take on voluntarily.

What’s more, the domain registrar community tends to view the public display of WHOIS data as a nuisance and a cost center. They have mainly only allowed public access to WHOIS data because ICANN’s contracts state that they should. So, from the registrar community’s point of view, the less information they must make available to the public, the better.

Like it or not, the job of tracking down and bringing cybercriminals to justice falls to law enforcement agencies — not security researchers. Law enforcement agencies will still have unfettered access to full WHOIS records.

As it relates to inter-state crimes (i.e., the bulk of all Internet abuse), law enforcement — at least in the United States — is divided into two main components: The investigative side (i.e., the FBI and Secret Service) and the prosecutorial side (the state and district attorneys who actually initiate court proceedings intended to bring an accused person to justice).

Much of the legwork done to provide the evidence needed to convince prosecutors that there is even a case worth prosecuting is performed by security researchers. The reasons why this is true are too numerous to delve into here, but the safe answer is that law enforcement investigators typically are more motivated to focus on crimes for which they can readily envision someone getting prosecuted — and because very often their plate is full with far more pressing, immediate and local (physical) crimes.

Admittedly, this is a bit of a blanket statement because in many cases local, state and federal law enforcement agencies will do this often tedious legwork of cybercrime investigations on their own — provided it involves or impacts someone in their jurisdiction. But due in large part to these jurisdictional issues, politics and the need to build prosecutions around a specific locality when it comes to cybercrime cases, very often law enforcement agencies tend to miss the forest for the trees.

Who cares if security researchers will lose access to WHOIS data, anyway? To borrow an assertion from the Internet Governance article, “maybe it’s high time for security researchers and businesses that harvest personal information from WHOIS on an industrial scale to refine and remodel their research methods and business models.”

This is an alluring argument. After all, the technology and security industries claim to be based on innovation. But consider carefully how anti-virus, anti-spam or firewall technologies currently work. The unfortunate reality is that these technologies are still mostly powered by humans, and those humans rely heavily on access to key details about domain reputation and ownership history.

Those metrics for reputation weigh a host of different qualities, but a huge component of that reputation score is determining whether a given domain or Internet address has been connected to any other previous scams, spams, attacks or other badness. We can argue about whether this is the best way to measure reputation, but it doesn’t change the prospect that many of these technologies will in all likelihood perform less effectively after WHOIS records start being heavily redacted.
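A toy version of such a reputation score makes the dependence obvious. The signals and weights below are hypothetical, chosen purely to illustrate that two of the three inputs come straight from WHOIS history:

```python
# Hypothetical reputation score illustrating the WHOIS-derived signals
# described above. Weights are made up; real engines combine many more
# features, but the domain-age and registrant-history inputs shown here
# are exactly what redaction takes away.
def domain_reputation(age_days: int, registrant_seen_in_abuse: bool,
                      prior_blocklist_hits: int) -> float:
    """Return a score in [0, 1]; lower means more suspicious."""
    score = 1.0
    if age_days < 30:                 # freshly registered = riskier (WHOIS)
        score -= 0.3
    if registrant_seen_in_abuse:      # known-bad registrant pivot (WHOIS)
        score -= 0.4
    score -= min(prior_blocklist_hits * 0.1, 0.3)
    return max(round(score, 2), 0.0)

# A day-old domain from a known-bad registrant scores near zero.
print(domain_reputation(1, True, 2))     # 0.1
print(domain_reputation(3650, False, 0)) # 1.0
```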

Don’t advances in artificial intelligence and machine learning obviate the need for researchers to have access to WHOIS data?

This sounds like a nice idea, but again it is far removed from current practice. Ask anyone who regularly uses WHOIS data to determine reputation or to track and block malicious online threats and I’ll wager you will find the answer is that these analyses are still mostly based on manual lookups and often thankless legwork. Perhaps such trendy technological buzzwords will indeed describe the standard practice of the security industry at some point in the future, but in my experience this does not accurately depict the reality today.

Okay, but Internet addresses are pretty useful tools for determining reputation. The sharing of IP addresses tied to cybercriminal operations isn’t going to be impacted by the GDPR, is it? 

That depends on the organization doing the sharing. I’ve encountered at least two cases in the past few months wherein European-based security firms have been reluctant to share Internet address information at all in response to the GDPR — based on a perceived (if not overly legalistic) interpretation that somehow this information also might be considered personally identifying data. This reluctance to share such information out of a concern that doing so might land the sharer in legal hot water can indeed have a chilling effect on the important sharing of threat intelligence across borders.

According to the Internet Governance article, “If you need to get in touch with a website’s administrator, you will be able to do so in what is a less intrusive manner of achieving this purpose: by using an anonymized email address, or webform, to reach them (The exact implementation will depend on the registry). If this change is inadequate for your ‘private detective’ activities and you require full WHOIS records, including the personal information, then you will need to declare to a domain name registry your specific need for and use of this personal information. Nominet, for instance, has said that interested parties may request the full WHOIS record (including historical data) for a specific domain and get a response within one business day for no charge.”

I’m sure this will go over tremendously with both the hacked sites used to host phishing and/or malware download pages, as well as those phished by or served with malware in the added time it will take to relay and approve said requests.

According to a Q3 2017 study (PDF) by security firm Webroot, the average lifespan of a phishing site is between four and eight hours. How is waiting that long before being able to determine who owns the offending domain going to be helpful to either the hacked site or its victims? It also doesn’t seem likely that many other registrars will volunteer for this 24-hour turnaround duty — and indeed no others have publicly demonstrated any willingness to take on this added cost and hassle.

I’ve heard that ICANN is pushing for a delay in the GDPR as it relates to WHOIS records, to give the registrar community time to come up with an accreditation system that would grant vetted researchers access to WHOIS records. Why isn’t that a good middle ground?

It might be if ICANN hadn’t dragged its heels in taking the GDPR seriously until perhaps the past few months. As it stands, the experts I’ve interviewed see little prospect of such a system being ironed out, or of it gaining the necessary traction among the registrar community, anytime soon. Most of them predict the Internet community will still be debating how to create such an accreditation system a year from now.

Hence, it’s not likely that WHOIS records will be anywhere near as useful to researchers in a month or so as they were previously. And this reality will persist for many months to come — if indeed some kind of vetted WHOIS access system is ever designed and put into place.

After I registered a domain name using my real email address, I noticed that address started receiving more spam emails. Won’t hiding email addresses in WHOIS records reduce the overall amount of spam I can expect when registering a domain under my real email address?

That depends on whether you believe any of the responses to the bolded questions above. Will that address be spammed by people who try to lure you into paying them to register variations on that domain, or to entice you into purchasing low-cost Web hosting services from some random or shady company? Probably. That’s exactly what happens to almost anyone who registers a domain name that is publicly indexed in WHOIS records.

The real question is whether redacting all email addresses from WHOIS will result in overall more bad stuff entering your inbox and littering the Web, thanks to reputation-based anti-spam and anti-abuse systems failing to work as well as they did before GDPR kicks in.

It’s worth noting that ICANN created a working group to study this exact issue, which noted that “the appearance of email addresses in response to WHOIS queries is indeed a contributor to the receipt of spam, albeit just one of many.” However, the report concluded that “the Committee members involved in the WHOIS study do not believe that the WHOIS service is the dominant source of spam.”

Do you have something against people not getting spammed, or against better privacy in general? 

To the contrary, I have worked the majority of my professional career to expose those who are doing the spamming and scamming. And I can say without hesitation that an overwhelming percentage of that research has been possible thanks to data included in public WHOIS registration records.

Is the current WHOIS system outdated, antiquated and in need of an update? Perhaps. But scrapping the current system without establishing anything in between while laboring under the largely untested belief that in doing so we will achieve some kind of privacy utopia seems myopic.

If opponents of the current WHOIS system are being intellectually honest, they will make the following argument and stick to it: By restricting access to information currently available in the WHOIS system, whatever losses or negative consequences on security we may suffer as a result will be worth the cost in terms of added privacy. That’s an argument I can respect, if not agree with.

But for the most part that’s not the refrain I’m hearing. Instead, what this camp seems to be saying is that if you’re not on board with the WHOIS changes that will be brought about by the GDPR, then there must be something wrong with you — and in any case, here are a bunch of thinly-sourced reasons why the coming changes might not be that bad.
