A now-patched vulnerability in the Amadeus flight reservation system – used by airlines around the planet – could have been exploited by miscreants to view strangers’ boarding passes.
David Stubley, CEO at UK security consultancy 7 Elements, told us last night he discovered the privacy-busting flaw, which was present in the Amadeus check-in application used by airlines.
Specifically, Stubley explained, when a traveler went to view their boarding pass, Amadeus presented the paperwork on a page whose URL included the passenger’s ID number. That ID number could be swapped for another to call up the boarding passes of other passengers flying with Amadeus-using airlines, such as British Airways, Air France, and United Airlines, without any further authentication. Just change the number in the web address bar and hit enter to fetch the pass for that ID number.
This is a classic insecure direct object reference (IDOR) vulnerability, which can be exploited to enumerate records that should otherwise be off limits.
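The standard fix for an IDOR is to stop trusting the identifier in the URL and instead verify that the record actually belongs to the authenticated requester. Here is a minimal sketch of that server-side check; the record store and function names are our own illustration, not Amadeus’s actual API:

```python
# Minimal IDOR-prevention sketch: fetch the record by ID, then verify
# ownership against the authenticated session before returning anything.
# All names here (records, get_boarding_pass) are hypothetical.

records = {
    1001: {"owner": "alice", "flight": "BA123"},
    1002: {"owner": "bob", "flight": "AF456"},
}

def get_boarding_pass(passenger_id: int, authenticated_user: str):
    """Return the pass only if the authenticated user owns the record."""
    record = records.get(passenger_id)
    if record is None or record["owner"] != authenticated_user:
        return None  # deny: not found, or belongs to someone else
    return record

# Alice can fetch her own pass, but not Bob's, however she fiddles the ID:
assert get_boarding_pass(1001, "alice") is not None
assert get_boarding_pass(1002, "alice") is None
```

Without that ownership check, incrementing the ID in the URL is all it takes to walk through everyone else’s records – which is exactly what Stubley demonstrated.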
Stubley told The Register the flaw could be exploited in both websites and apps for airlines that use Amadeus’s technology to handle their reservations and boarding passes – that’s roughly half of the world’s major carriers.
“Originally it was found when using an airline’s mobile app for check-in,” the CEO said. “Once you have the URL you can then access directly without needing to use the website or mobile app.”
The bug was privately disclosed to Amadeus and was patched prior to public disclosure, so airlines and their customers are already protected. Still, the disclosure is hardly a ringing endorsement for Amadeus in the wake of the company’s previous infosec gaffes.
The ability to pull up strangers’ boarding passes is, at the very least, a disclosure of personal information: a snoop could see things like flight dates and times, and possibly use those details to collect further information.
More seriously, the downloaded boarding passes would be valid, meaning a scumbag who printed out the pass, arrived before the actual customer, and was able to somehow get past security could use it to get into restricted areas or a flight.
“It should be noted that additional security controls may restrict the successful use of a boarding pass that has already been used to gain access airside,” said Stubley. “However, those controls are not uniformly deployed across all airports.”
Amadeus sent us the following statement:
“Amadeus recently became aware of a configuration flaw affecting its Altéa Self Service Check-In solution. Our security teams took immediate action and the vulnerability is now fixed. We are not aware of there having been any further unauthorized access resulting from the vulnerability, beyond the activity of the security researcher. We regret any inconvenience this might cause to our customers.”
Symantec’s share price has plunged on reports that its planned merger with Broadcom has fallen through.
According to CNBC, several sources have confirmed that the deal is off after Symantec insisted on too high a price – $28 a share – to sell up. That report, and the claim that it was asking too much, appear to have been validated when Symantec’s stock price immediately dropped 12 per cent and has continued to slowly slide all morning. At the time of writing, it is down 15.5 per cent at $21.64.
Adding insult to injury, Broadcom’s share price has risen slightly – up 1.7 per cent at the time of writing – demonstrating that, as far as analysts are concerned, Symantec is not exactly a shining tech target.
Its CEO Greg Clark stepped down in May with no permanent replacement; something Symantec has had to get used to, losing five chief executives now in eight years. The security shop is also plagued with allegations of dodgy accounting, into which investigations are ongoing.
That said, when reports of the proposed deal first appeared, Symantec’s price went up 18 per cent and Broadcom’s went down 4 per cent, so Symantec has become mildly more interesting to the markets, presumably because they suspect someone else may look at buying the legacy security outfit.
The Broadcom/Symantec love-in was short-lived. Just two weeks ago, it was reported that the two companies were in “advanced talks” with Broadcom planning to pay $15bn for control.
What went wrong? Well, CNBC reports that Symantec wanted $28 a share and Broadcom thought that was too high. Bloomberg has added more context by reporting that the deal was set higher than that – $28.25 – but Broadcom insisted on a drop of $1.50 – ie: down to $26.75 a share – after it had done its due diligence. Symantec wasn’t happy with that and walked away. The deal was due to be announced this week.
The rationale for the deal was that Broadcom wants to get into higher margin software, a year after it bought CA Technologies for $18.9bn. Broadcom has had a relatively tough run of late as semiconductors slumped 12 per cent year-on-year in the first quarter of this year.
Broadcom CEO Hock Tan is also under pressure to justify his wage packet: he was the highest paid exec in the US in 2017, taking home $103.2m. Symantec is Broadcom’s second failed takeover in the past year, following its pursuit of Qualcomm, which US authorities blocked citing national security concerns.
A team of US academics have proposed a simple method to defeat the Bluetooth LE standard’s anti-tracking measures.
David Starobinski, David Li, and Johannes Becker at Boston University told The Register how they found that the MAC randomization system of Bluetooth LE, designed to thwart the tracking of devices, transmits packets of data that can still be used to uniquely identify, and thus track the location of, a mobile phone or PC.
As Bluetooth SIG explains, the randomization allows Bluetooth devices to wirelessly communicate (via data transmissions known as advertising packets) while still staying anonymous:
This feature causes the MAC address within the advertising packets to be replaced with a random value that changes at timing intervals determined by the manufacturer. Any malicious device(s), placed at intervals along your travel route, would not be able to determine that the series of different, randomly generated MAC addresses received from your device actually relates to the same physical device. It actually looks like a series of different devices, thus, it will not be possible to track you using the advertised MAC address.
In order to allow connections, however, the device still has to be able to identify itself to other hardware. To do this, it sends a string of unique data along with the MAC address in the advertising packet.
“The payload sent here is information only relevant to the device,” Becker told El Reg today. “We are just interested in how unique this information is.”
More importantly, like the MAC address, the unique payload can be set by the vendor to refresh itself at regular intervals.
And here, the researchers found, was where the fundamental weakness of the system lay. Starobinski, Li, and Becker discovered that because the MAC address and the payload do not change at the same time, each could be used to continue identifying the gadget or computer.
In other words, as long as the listener knows either the MAC address or the unique payload, they can keep identifying the device even when one of the strings changes. Thus, the listener can keep tabs on a nearby gizmo, knowing it is still within range even as the MAC or payload changes.
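The carryover trick described above can be sketched in a few lines: a passive listener links sightings of a device across identifier changes, because the random MAC and the payload token never rotate in the same instant. The packet fields here are illustrative, not the real BLE advertising structure:

```python
# Sketch of identifier-carryover tracking: group (mac, payload)
# sightings into per-device chains by matching on EITHER identifier.
# Field names and values are illustrative, not actual BLE packets.

def track(sightings):
    """Group (mac, payload) sightings into per-device chains."""
    chains = []  # each chain holds successive (mac, payload) pairs for one device
    for mac, payload in sightings:
        for chain in chains:
            known_mac, known_payload = chain[-1]
            # If either identifier still matches, it's the same device;
            # record the newest pair so future matches use it.
            if mac == known_mac or payload == known_payload:
                chain.append((mac, payload))
                break
        else:
            chains.append([(mac, payload)])  # first sighting of a new device
    return chains

# A device rotating its MAC, then later its payload, stays one chain:
sightings = [("AA", "tok1"), ("BB", "tok1"), ("BB", "tok2")]
assert len(track(sightings)) == 1
```

The defence, as the researchers note later, is to rotate both identifiers together, and unpredictably – otherwise the overlap window keeps the chain intact.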
In a series of tests spanning several months, Starobinski, Li, and Becker passively eavesdropped on mobile phones (Android and iPhone), PCs (Windows 10 and macOS), and smartwatches to monitor when and how both the MAC addresses and payloads changed over time.
While the time intervals depended on the vendor, in most cases the researchers said they could reliably identify a device over readings taken minutes, and sometimes hours, apart. In the field, this means a listener with one or more devices could still track the movements of a device over time even if its MAC address kept changing.
“The experiments were done listening continuously, but what we found is these addresses change in minutes or hours depending on the devices,” Becker explained. “It is enough to listen in periodically, as long as we don’t miss any changes, then piece them together.”
Fixing the design flaw will not be a simple task, however. While having both the MAC address and the unique payload change at similar times would close the overlap period that allows identification, even that could be predictable if not properly implemented.
“If everybody does this in a predictable manner you can see when they change,” noted Starobinski, “so the second component is to randomize the change.”
What is perhaps even more concerning, say the Boston Uni trio, is the message Bluetooth vendors are putting out to the public when they advertise Bluetooth LE as being an untrackable standard.
“The fact is this protocol allows this behavior and doesn’t point out these issues,” said Becker. “The Bluetooth specification allows bad behavior, or just negligence, to defeat the anti-tracking measure.”
Starobinski, Li, and Becker plan to present their paper [PDF], Tracking Anonymized Bluetooth Devices, at the 19th Privacy Enhancing Technologies Symposium in Sweden on July 17.
Infosec company ESET is reportedly suing a member of the Slovakian Parliament for insulting it over social media.
According to Slovakian news outlet SME, ESET became fed up with the antics of local Marxist politician Ľuboš Blaha, who was allegedly describing the threat intelligence outfit’s owners as oligarchs who “own the media and pay politicians”.
He is also said to have claimed that ESET is linked to the US CIA, presumably to discredit the company as some kind of willing pawn to US foreign policy. The company has an extensive presence in the US market.
SME reporter Martina Raabova told The Register that the lawsuit against Blaha was filed by ESET because, in the infosec firm’s view, “he wrote several misleading statuses on Facebook about the company”.
Further reporting from Slovakia revealed that ESET asked Blaha to stop publishing the accusations of bribery, corruption and foreign collaboration or they’d sue. This prompted him to publish a Facebook video in which he said: “ESET is trying to silence me. This is their vision of freedom and democracy!”, going on to brand them “outrageous fascists”.
El Reg has not encountered any fascists, outrageous or otherwise, representing ESET at infosec industry events, that we know of at least.
Blaha’s beef with ESET seems to be that, as a local big business, they must be exploiting their wealth to support politicians who oppose his point of view. He appears to use social media to communicate directly and informally with his fan base, much like some other popular politicians in the English-speaking world.
Instances of security companies initiating lawsuits against politicians are really rather rare. Kaspersky has taken the US government to court for banning the public sector from using its products, though that is a biz-to-gov lawsuit rather than against the politicians who caused it.
A security product testing company, NSS Labs, sued Crowdstrike, ESET and a bunch of other firms last year, claiming that they were conspiring to stop product deficiencies becoming public – though that lawsuit has nothing to do with politicians being rude on Facebook.
ESET refused to comment, citing the ongoing legal case. ®
Google Translate renders ESET to “Isis” when translating from Slovakian to English. The name Eset is Slovakian for the Egyptian goddess Isis, who is the goddess of marriage and femininity. While the SME headline “Isis ran out of patience and sued Blaha” might suggest that the murderous bastards of the Islamic State terror group have finally turned to peaceful means of conflict resolution, sadly it’s just an error in translation.
A serving Metropolitan police officer who illegally accessed a police database to monitor a criminal investigation into his own conduct has pleaded guilty to crimes under the UK’s Computer Misuse Act.
Sergeant Okechukwu Efobi, of Byron Road, Wealdstone, Harrow, was ordered to complete 150 hours of community service and pay a total of £540, comprising a £90 victim surcharge and £450 of prosecution costs.
Efobi, who remains employed by the Met and is currently on restricted duty, had been accessing a police database to view details of suspects in an ongoing criminal investigation.
Between November 2017 and October 2018, at the force’s high security Empress State Building HQ in southwest London, Efobi trawled the unidentified database, sending himself documents from it and viewing details of other suspects in criminal investigations.
He pleaded guilty to three charges under sections 1(1) and (3) of the Computer Misuse Act 1990 last week at Westminster Magistrates’ Court. An internal misconduct review into Efobi’s actions is currently under way, the Met told The Register.
Police misuse of their access to internal databases continues to be an ongoing problem, quite separate from the one of British cops using Chinese-inspired mass surveillance tech whose legality has been repeatedly questioned by the public and the authorities alike.
Back in 2015, the Met recorded a tripling of computer misuse allegations over the year, with police employees alleged to have abused their privileges 173 times. That picture was mirrored more widely across the country in 2017, when it was found that police forces had investigated a total of 779 cases of potential data misuse within their own ranks. Even the police trade union confessed that same year that their members were “persistently” committing data breaches.
A few years ago HM Inspectorate of Constabulary discovered that a number of non-police organisations were merrily trawling through the Police National Computer at will. Legal agreements intended to regulate that access were vague and in many cases had been allowed to expire.
Apple has pushed a silent update to Macs, disabling the hidden web server installed by the popular Zoom web-conferencing software.
A security researcher this week went public with his finding that the mechanism used to bypass a Safari prompt before entering a Zoom conference was a hidden local web server.
Jonathan Leitschuh focused largely on the fact that a user’s webcam would likely be ON automatically, meaning that a crafty bit of web coding would give an attacker a peek into your room if you simply visit their site.
But the presence of the web server was a more serious issue, especially since uninstalling Zoom did not remove it and the web server would reinstall the Zoom client – which is malware-like behaviour.
Although no remote execution vulnerability has been published, a web server with an unpublished API is a risk in itself. An element on a web page could link to localhost on the known Zoom port with whatever arguments it chooses.
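The risk is easy to demonstrate: a localhost server with no authentication and no origin checks will obey any caller, whether that’s a local process or a web page embedding a request to the known port. This self-contained sketch uses an invented port and “API” – not Zoom’s actual endpoint – purely to show the pattern:

```python
# Demo of why an always-on, unauthenticated localhost web server is a
# risk: any caller that can reach 127.0.0.1 gets obeyed. The endpoint
# and arguments are invented for illustration, not Zoom's real API.
import http.server
import threading
import urllib.request

class HiddenServer(http.server.BaseHTTPRequestHandler):
    def do_GET(self):
        # A real hidden server might launch an app or join a meeting;
        # here we just acknowledge whatever command arrived.
        self.send_response(200)
        self.end_headers()
        self.wfile.write(b"command accepted: " + self.path.encode())

    def log_message(self, *args):
        pass  # silence per-request logging for the demo

server = http.server.HTTPServer(("127.0.0.1", 0), HiddenServer)
port = server.server_address[1]  # ephemeral port picked by the OS
threading.Thread(target=server.serve_forever, daemon=True).start()

# No authentication, no origin check: the request simply succeeds.
reply = urllib.request.urlopen(
    f"http://127.0.0.1:{port}/join?meeting=1234").read()
server.shutdown()
```

In a browser, the equivalent caller is just an `<img>` tag or `fetch()` pointing at the localhost port – which is why a hidden server outlives and undermines the browser’s own permission prompts.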
In response to the bad publicity, Zoom posted a series of on-the-hoof updates. Its initial reaction was to justify the hidden web server as “a legitimate solution to a poor user experience problem, enabling our users to have faster, one-click-to-join meetings”.
This soon changed. On 9 July the company updated its Mac app to remove the local web server “via a prompted update”.
The next day Apple itself took action, by instructing macOS’s built-in antivirus engine to remove the web server on sight from Macs. Zoom CEO Eric Yuan added on Wednesday:
Apple issued an update to ensure the Zoom web server is removed from all Macs, even if the user did not update their Zoom app or deleted it before we issued our July 9 patch. Zoom worked with Apple to test this update, which requires no user interaction.
Further, Zoom promised an update within a couple of days to ensure that users who select “Always turn off my video” on first use have that preference saved and applied automatically.
Apple appears to have concluded that it is better to protect users by silently disabling this component than to respect the wishes of those who like to think they are in control of what gets installed and removed. Few would disagree.
There is another matter, though. On Windows, users may still be joined automatically to conferences, and with their webcam on, unless they have been careful to configure browser preferences otherwise. It is all a matter of how the .zoommtg extension is handled. Convenient, but still leaves users vulnerable to some webcam surprises.
A nasty new variant of the FinSpy snoopware tool that infects and slurps data from Android and iOS phones and tablets is being peddled, we’re told.
Kaspersky said this week the notorious commercial spyware, developed by Gamma Group and sold by its subsidiary Gamma International to allegedly respectable governments, has been showing up in the wild since late last year, most recently in a group of devices located in Myanmar this June.
While FinSpy, also known as FinFisher, has been touted as mobile device surveillanceware as far back as 2012, the Kaspersky research team said this latest version is particularly invasive in its ability to collect user chats, physical movements, and stored files from a wide range of applications. The new code has been spotted in 20 countries, with the actual reach likely being much greater, it is claimed.
Bear in mind this software is typically deployed against selected targets, such as foreign agents, journalists, activists, and so on: it’s not usually lobbed at the masses.
“Mobile implants for iOS and Android have almost the same functionality,” Kaspersky said in its report on the matter.
“They are capable of collecting personal information such as contacts, SMS/MMS messages, emails, calendars, GPS location, photos, files in memory, phone call recordings and data from the most popular messengers.”
Long a favorite tool of oppressive government regimes, FinSpy is classified as malware by most security firms and has been implicated in human-rights abuses.
Getting the malware onto the target’s gizmo is, however, up to FinSpy’s buyers. Kaspersky notes that while FinSpy uses a number of tricks to elevate its privileges once installed, actually getting the malware onto a mobile device requires spies either to have direct access to the handheld (not particularly hard for most dictatorships to accomplish) or to utilize an exploit from a third party.
For iOS devices, the attacker will have to take the extra step of first jailbreaking the Apple phone or tablet – this is because the snooping capabilities of FinSpy depend on the Cydia package manager in iOS. On Android, the malware will attempt to get complete access through elevating itself to root by deploying the DirtyCow exploit on unpatched handsets.
“The Android implant has functionality to gain root privileges on an unrooted device by abusing known vulnerabilities. As for the iOS version, it seems that Gamma’s solution doesn’t provide infection exploits for its customers, as their product seems to be fine-tuned to clean traces of publicly available jailbreaking tools,” Kaspersky explains.
“That might imply physical access to the victim in cases where devices are not already jailbroken. At the same time, multiple features that we haven’t observed before in malware designed for this platform are implemented.”
Once the malware is placed on the handheld, it looks not only for locally stored media and SMS messages, but also puts out feelers for any number of popular messaging apps like WhatsApp, Skype, BlackBerry Messenger and Signal. The spyware attempts to collect communications from those applications, and siphon them off to a server belonging to whoever bought and deployed the software nasty.
The spyware’s customer is also given a set of tools to fine-tune the code for each infection, defining precisely which applications they want to target and what information they need to harvest. This makes FinSpy more practical for governments in geographical areas where one messaging app or means of communication is more popular than others.
The malware can also log keystrokes and record voice calls, both on the cell network and over VoIP calling services, as well as track the device via GPS and hide specific files and utilities on it.
In short, the Kaspersky team says that despite being around for the better part of a decade, FinSpy remains as invasive and capable as it ever was.
“Since the leak in 2014, Gamma Group has recreated significant parts of its implants, extended supported functionality (for example, the list of supported instant messengers has been significantly expanded) and at the same time improved encryption and obfuscation (making it harder to analyze and detect implants), which made it possible to retain its position in the market,” they note.
Five boffins from four US universities have explored AMD’s Secure Encrypted Virtualization (SEV) technology – and found its defenses can be, in certain circumstances, bypassed with a bit of effort.
In a paper [PDF] presented Tuesday at the ACM Asia Conference on Computer and Communications Security in Auckland, New Zealand, computer scientists Jan Werner (UNC Chapel Hill), Joshua Mason (University of Illinois), Manos Antonakakis (Georgia Tech), Michalis Polychronakis (Stony Brook University), and Fabian Monrose (UNC Chapel Hill) detail two novel attacks that can undo the privacy of protected processor enclaves.
The paper, “The SEVerESt Of Them All: Inference Attacks Against Secure Virtual Enclaves,” describes techniques that can be exploited by rogue cloud server administrators, or hypervisors hijacked by hackers, to figure out what applications are running within an SEV-protected guest virtual machine, even when its RAM is encrypted, and also extract or even inject data within those VMs.
This is possible, we’re told, by monitoring, and altering if necessary, the contents of the general-purpose registers of the SEV guest’s CPU cores, gradually revealing or messing with whatever workload the guest may be executing. The hypervisor can access the registers, which typically hold temporary variables of whatever software is running, by briefly pausing the guest and inspecting its saved state. Efforts by AMD to prevent this from happening, by hiding the context of a virtual machine while the hypervisor is active, can also, it is claimed, be potentially thwarted.
SEV is supposed to safeguard sensitive workloads, running in guest virtual machines, from the prying eyes and fingers of malware and rogue insiders on host servers, typically machines located off-premises or in the public cloud.
The techniques, specifically, undermine the data confidentiality model of guest virtual machines by enabling miscreants to “recover data transferred over TLS connections within the encrypted guest, retrieve the contents of sensitive data as it is being read from disk by the guest, and inject arbitrary data within the guest,” according to the study.
As a result, the paper calls into question the confidentiality promises of cloud service providers. Pulling off these techniques, in our view, is non-trivial, so if anyone does fancy exploiting these weaknesses in SEV in real-world scenarios, they’ll need to be determined and suitably resourced.
In 2016, AMD introduced two memory encryption capabilities to protect sensitive data in multi-tenant environments, Secure Memory Encryption (SME) and Secure Encrypted Virtualization (SEV). The former protects memory against physical attacks like cold boot and direct memory access attacks. The latter mixes memory encryption and virtualization, allowing each virtual machine to be protected from other virtual machines and underlying hypervisors and their admins.
Other vendors have their own secure enclave systems, like Intel SGX, which offers a different set of potential attack paths.
SEV, says AMD, protects customers’ guest VMs from one another, and from software running on the underlying host and its administrators. Whatever happens in these virtual machines should be off limits to other customers as well as the host machine’s operating system, hypervisor, and admins. However, the researchers have demonstrated that this threat model fails to ward off register inference attacks and structural inference attacks by malicious hypervisors.
“By passively observing changes in the registers, an adversary can recover critical information about activities in the encrypted guest,” the researchers explain in their paper.
A variant technique even works against Secure Encrypted Virtualization Encrypted State (SEV-ES), an extended memory protection technique that not only encrypts RAM but encrypts the guest’s virtual machine control block: this is an area of memory that stores a virtual machine’s CPU register contents when it is forced to yield to the hypervisor. This encryption should thus stop the hypervisor from making any sense of the paused VM’s context, though its contents can still be inferred, we’re told.
“We show how one can use data provided by the Instruction Based Sampling (IBS) subsystem (e.g. to learn whether an executed instruction was a branch, load, or store) to identify the applications running within the VM,” the paper says. “Intuitively, one can collect performance data from the virtual machine and match the observed behavior to known signatures of running applications.”
To conduct their work, the boffins used a Silicon Mechanics aNU-12-304 server with dual AMD Epyc 7301 processors and 256GB of RAM, running Ubuntu 16.04 and a custom 64-bit Linux kernel v4.15. Guest VMs received a single vCPU with 2GB of RAM, running Ubuntu 16.04 with the same kernel as the host.
While the security implications of accessing encrypted data and injecting arbitrary data are obvious, even exposing what applications are running in a guest VM has potentially undesirable consequences. Service providers could use the technique for application fingerprinting and banning unwanted software; malicious individuals could conduct reconnaissance to target exploits, develop return-oriented programming (ROP) attacks, or undermine Address Space Layout Randomization (ASLR) defenses.
The researchers recommend the IBS subsystem be changed so that guest readings are discarded when secure encrypted virtualization is enabled.
The Register asked AMD for comment, and we’ve not heard back.
The Information Commissioner’s Office issued £3m worth of fines for data breaches in the year to April 2018 – a mere fraction of its recent proposed GDPR-enabled penalties on British Airways and Marriott.
The UK data watchdog’s annual report for 2018/19 (PDF) reveals that it imposed a financial slap on the wrist on 22 occasions.
That includes the £500,000 fine against Equifax for its security debacle affecting the personal data of up to 15 million UK residents, and the same amount against Facebook over its data-harvesting scandal that affected an estimated 87 million users.
Under the UK’s Data Protection Act, the maximum fine was £500,000. But since the EU’s GDPR came into force on 25 May last year, companies are now liable to a penalty of up to 4 per cent of turnover.
Just this week, the ICO flexed its GDPR enforcement muscles for the first time. British Airways is facing a record fine of £183m for last year’s data leakage (1.5 per cent of its turnover), and yesterday it was revealed that hotel chain Marriott could be stung for £99m (3 per cent).
Although GDPR powers were in place during 2018/19, an ICO spokesman said none were used in that period due to the time it takes to investigate breaches.
Though last year’s fines might seem small, they are an increase on 2017/18, when the ICO issued just 11 fines totalling £1.3m.
During 2018/19, the ICO also issued 23 monetary punishments under the Privacy and Electronic Communications Regulations, for nuisance calls, totalling over £2m.
In a foreword to its annual report, Information Commissioner Elizabeth Denham said: “The ICO has covered an enormous amount of ground over the last year – from the introduction of a new data protection law, to our calls to change the freedom of information law, from record-setting fines to a record number of people raising data protection concerns.
“The biggest moment of the year was the General Data Protection Regulation (GDPR) coming into force. This saw people wake up to the potential of their personal data, leading to greater awareness of the role of the regulator when their data rights aren’t being respected. The doubling of concerns raised with our office reflects that.”
Other large fines included a £385,000 penalty against Uber, relating to a security incident affecting the personal data of 2.7 million users and 82,000 drivers, and a £325,000 fine against the Crown Prosecution Service for losing unencrypted DVDs containing recordings of police interviews.
It also slapped Yahoo! UK Services Ltd with a £250,000 penalty relating to a breach affecting the data of approximately 500 million users worldwide.
Mozilla on Tuesday added digital certificates belonging to security biz DarkMatter and its subsidiaries to Firefox’s OneCRL blocklist, based on concerns that the UAE-based company will misuse its power as a certificate authority (CA) to intercept online communications.
In a post to Mozilla’s security policy forum, Wayne Thayer, certification authority program manager for the public benefit browser and software maker, said multiple independent reports have raised credible allegations that DarkMatter has been involved in spying.
“While there are solid arguments on both sides of this decision, it is reasonable to conclude that continuing to place trust in DarkMatter is a significant risk to our users,” said Thayer.
“I will be opening a bug requesting the distrust of DarkMatter’s subordinate CAs pending Kathleen’s concurrence. I will also recommend denial of the pending inclusion request, and any new requests from DigitalTrust.”
DigitalTrust is the name of DarkMatter’s CA business; “Kathleen” refers to Mozilla program manager Kathleen Wilson.
Web browsers depend on a list of authorities that vouch for the authenticity and integrity of the digital certificates presented by websites. An untrustworthy CA could issue a fake certificate to a website that allowed it to spy on interactions between the site and its visitors, even if the connection appeared to be secure.
DarkMatter has been trying to become a root certificate authority for the past two years. In January, Reuters reported that DarkMatter personnel assisted in a hacking operation called Project Raven, run by an Emirati intelligence agency and assisted by former US intelligence officials. The goal of Project Raven involved compromising the internet accounts of journalists, human rights activists and foreign government officials, it’s alleged.
DarkMatter has denied that report; the company didn’t immediately respond to a request for comment from The Register.
In February, the Electronic Frontier Foundation urged Mozilla and other maintainers of root certificate databases like Apple, Google and Microsoft to reject DarkMatter’s bid to become a root certificate authority and to revoke its intermediate certificate, which allows the issuance of certificates under the oversight of a recognized root CA.
“Giving DarkMatter a trusted root certificate would be like letting the proverbial fox guard the henhouse,” said Cooper Quintin, senior staff technologist at the EFF at the time.
In a statement emailed to The Register, Selena Deckelmann, senior director of engineering at Mozilla, defended DarkMatter’s banishment, a punishment previously meted out to China’s CNNIC in 2015.
“We made the decision to revoke trust in DarkMatter’s intermediate certificates and to deny the pending inclusion request,” she said. “We are confident this is the right decision, but it was not made lightly. Two important obligations guided our decision: first, that trust in our CA root store is a critical component of the security underpinnings of the web and second, our responsibility to protect individuals who rely on Mozilla products.”
Deckelmann said in light of credible evidence from multiple sources that DarkMatter participates in spying, Mozilla’s responsibilities to the web and those who rely on its software have led it to conclude that continuing to trust the security biz would endanger the web and users of Mozilla products.