PKI Mistakes That Were So Bad They Made Headlines (12 Examples)


PKI is a critical part of most IT systems. When it works well, it’s largely invisible — authenticating connections and encrypting data without most users knowing it’s there. But when things go wrong, the results can be devastating. Let’s take a look at three common PKI mistakes and the (bad news) headlines they create.

Now, we’re not here to bash organizations who made a PKI “oopsie.” However, we’re sharing all of this information to help you learn from their mistakes and avoid having it happen to your organization in the future.

Let’s hash it out.

PKI Mistake #1: Poorly Managing Your PKI Certificates Leads to Outages & Downtime

It doesn’t matter whether you’re a mom-and-pop store or one of the world’s leading governments; you’re not infallible. Important tasks — such as managing your certificates to ensure they’re replaced before their expiration dates — will fall through the cracks and get forgotten if you don’t have a management system and documented processes in place.

All X.509 digital certificates have an expiration date. Industry leaders (via the CA/Browser Forum’s baseline requirements) have agreed that website security certificates (SSL/TLS certificates) must have a maximum validity period of 398 days. By the end of day 398 at the latest, your certificate will expire, leaving you scrambling to figure out why the app, website, or other online service associated with it is suddenly inaccessible.
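As a quick sanity check, a short Python script can fetch a server’s live certificate and report how many days remain before it expires. This is a minimal sketch; the function names and five-second timeout are illustrative:

```python
import socket
import ssl
from datetime import datetime, timezone
from typing import Optional

def days_until(not_after: str, now: Optional[datetime] = None) -> int:
    """Days until an X.509 notAfter timestamp (OpenSSL text form,
    e.g. 'Jun 15 12:00:00 2025 GMT'); negative means already expired."""
    expires = datetime.strptime(not_after, "%b %d %H:%M:%S %Y %Z")
    expires = expires.replace(tzinfo=timezone.utc)
    now = now or datetime.now(timezone.utc)
    return (expires - now).days

def cert_days_remaining(host: str, port: int = 443) -> int:
    """Fetch host's live TLS certificate and return days until it expires."""
    ctx = ssl.create_default_context()
    with socket.create_connection((host, port), timeout=5) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            cert = tls.getpeercert()
    return days_until(cert["notAfter"])
```

Point `cert_days_remaining()` at your own hostnames; anything approaching zero (or already negative) needs immediate attention.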

If you don’t manage your lifecycle properly (meaning you lack visibility of the PKI assets within your network and IT ecosystem and don’t have the processes and tools in place to manage them), then I’m certain things won’t go the way you hope.

Certificate expirations alone lead to serious issues, including service outages and risks of man-in-the-middle attacks. AppViewX data from a Forrester Consulting study shows that nearly 60% of reported data breaches were related to digital certificates. More than half of survey respondents indicated that those incidents cost upwards of $100,000 per outage.

One of the biggest challenges for big organizations and enterprises is that they’re responsible for managing so many digital certificates. Keyfactor’s State of Machine Identity Management 2023 report puts the average number of certificates issued within an organization at 256,000. That’s the equivalent of issuing 701 certificates per day for a year (or one certificate every two or so minutes).

5 Real-World Examples of What Happens When Certificates Are Allowed to Expire

Yup, there’s a reason why we always emphasize the importance of proper certificate lifecycle management and the use of lifecycle automation. We’ve repeatedly seen what happens when public and private organizations lose sight of the certificates within their networks and systems…

Equifax

We all know the story about Equifax’s headline-making data breach. The data breach heard ‘round the world resulted in the compromise of more than 145.5 million consumers’ personal data. As it turns out, a PKI certificate had expired 10 months before the breach occurred. This expired asset prevented the credit agency from inspecting its traffic, during which time some opportunistic cybercriminals found and exploited an Apache Struts-related vulnerability on Equifax’s network.

The result? They were able to hide in the company’s encrypted network traffic and remained undiscovered for more than two-and-a-half months (76 days). Imagine having some unknown and unwanted intruder in your network. How would you respond to the situation?

Ericsson

We’ve also previously shared about Ericsson’s expired certificate issue that knocked out services for tens of millions of cellular users in December 2018. As a result, customers in upwards of 11 countries were unable to make or receive phone calls or texts. The cause? An expired certificate tied to Ericsson’s management software that was used by several European and Japanese telecommunication companies, including O2 (the United Kingdom’s biggest mobile service provider).

Needless to say, it wasn’t a good look for Ericsson or the other cellular service providers using their software.

SpaceX Starlink

Last year, SpaceX’s Starlink satellite internet services system experienced downtime for several hours due to an expired digital certificate in an unspecified system.

Image caption: A screenshot of Elon Musk’s X (formerly Twitter) post regarding the expired digital certificate that caused the Starlink service outage.

It goes to show how something seemingly so simple — a single certificate reaching its end-of-life date — could impede an application or service for a time. That downtime translates to inaccessibility of services for Starlink’s customers and their customers’ customers.

Spotify

The music streaming service Spotify experienced an outage that knocked out its streaming services for about an hour back in August 2020. The culprit? An expired SSL/TLS certificate. Thankfully, the downtime was over quickly, and listeners could get back to enjoying their music.

Image source: A screenshot of Spotify’s post on X (formerly Twitter) regarding the expired SSL/TLS certificate that caused the service outage.

Spotify’s online podcast platform, Megaphone, also reported experiencing a service outage in May 2022 that lasted nearly 10 hours in total. The cause of the downtime that cut podcasters off from their listeners? An expired certificate that no one had renewed ahead of time.

U.S. Government

Even the federal government can be remiss when managing expiring certificates. In early 2019, we reported that 130 U.S. government websites’ SSL/TLS certificates weren’t renewed in time. The expired certificates left websites and their associated services unreachable to U.S. users during the critical period of a federal government shutdown. Furthermore, once the certificates expired, the sites’ infrastructures were vulnerable to threat actors who might have wanted to take advantage of the situation.

Because many of the websites had implemented HTTP Strict Transport Security (HSTS), which forces secure HTTPS connections, browsers using Google Chromium’s HSTS preload list wouldn’t give users an option to bypass the security warning. (NOTE: Bypassing insecure website warnings is never a good idea anyway.)
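For reference, HSTS is just a response header served over HTTPS. A typical nginx directive looks like this (the one-year max-age shown is the minimum Chromium’s preload list accepts; adjust to your own policy):

```nginx
# Tell browsers to require HTTPS for this host for one year,
# cover all subdomains, and opt in to browsers' HSTS preload lists.
add_header Strict-Transport-Security "max-age=31536000; includeSubDomains; preload" always;
```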

Consequences: Why It’s a “Big Deal” When Your Certificates Expire

When a PKI digital certificate expires, is revoked, or otherwise invalidated, it means the system will become inaccessible and/or go down. The result? A really bad time for you and your customers — you’ll experience everything from downtime and service outages to compliance concerns and lost customer relationships.

The best defense is a good offense. What I mean is that being proactive about certificate lifecycle management is crucial to protecting the security of your organization’s most sensitive data.

The Solution: Use Certificate Lifecycle Automation

A good certificate lifecycle management system makes it a lot easier to avoid certificate expirations, with:

  • Certificate discovery scanners to keep unknown certificates from slipping through the cracks,
  • A centralized dashboard where you can view all certificates across your organization (and set up notifications), and
  • Lifecycle automation to automatically renew and re-install certificates before they expire.
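The monitoring half of that workflow can be sketched in a few lines of Python. This is a minimal illustration; the inventory dict and 30-day threshold are hypothetical stand-ins for what a discovery scanner would feed in:

```python
from datetime import datetime, timedelta, timezone
from typing import Dict, Optional

def expiring_soon(inventory: Dict[str, datetime], threshold_days: int = 30,
                  now: Optional[datetime] = None) -> Dict[str, int]:
    """Return {hostname: days_left} for every certificate that expires
    within threshold_days (already-expired certs show negative days)."""
    now = now or datetime.now(timezone.utc)
    cutoff = now + timedelta(days=threshold_days)
    return {host: (expires - now).days
            for host, expires in inventory.items()
            if expires <= cutoff}

# Hypothetical inventory -- in practice, a discovery scan populates this.
inventory = {
    "shop.example.com": datetime(2025, 6, 10, tzinfo=timezone.utc),
    "api.example.com": datetime(2026, 1, 1, tzinfo=timezone.utc),
}
```

A real lifecycle automation tool would then renew and re-install whatever this sweep flags, rather than just notifying someone.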

PKI Mistake #2: Poor Key Management Lets Bad Guys Steal Your Keys

Everyone makes mistakes. Sometimes, it’s as simple as not double-checking a setting or accidentally clicking a toggle that changes a setting from “private” to “public.” But the stakes are pretty high when it comes to encryption keys.

3 Real-World Examples of When Organizations Didn’t Protect Their Keys

Hopefully, we all know that passwords and cryptographic private keys should never be shared or posted openly. Unfortunately, some companies make mistakes that wind up exposing their cryptographic keys and other secrets on the internet.

Regardless of whether it’s an accident or done intentionally by a malicious insider, the fact is that key exposures due to security misconfigurations and poor key security practices keep happening. Here are a handful of such examples…

BMW

SOCRadar researcher Can Yoleri discovered that the popular European automaker had a misconfigured server that was accessible on the internet. The Microsoft Azure storage dev bucket, which contained all sorts of sensitive data, including private keys, was set to “public access” rather than “private.”

But just what did this bucket contain? The sensitive company data included access information for Azure containers, specifics on cloud services, and private keys used to access other private buckets. This means that whatever data those keys were used to secure was also at risk of compromise.

The good news? According to BMW’s statement in a TechCrunch report on the incident, “no customer or personal data was impacted as a result.” But that still doesn’t excuse storing keys in an Azure storage bucket instead of a secure key management solution such as a hardware security module (HSM) or a key vault.

Microsoft

Incident #1

When it comes to movement in the realm of artificial intelligence (AI), it’s no secret that leaders within the global tech industry are engaged in a “rat race.” Companies are scrambling to build bigger and better mousetraps, meaning better and more effective AI technologies. As part of these efforts, they invest vast amounts of time, people, and money in the cause.

Unfortunately, things go wrong sometimes. In this case, Wiz’s research team discovered that 38 terabytes (TB) of Microsoft’s sensitive data, which included private keys, passwords, and private communications, were exposed on GitHub. The data exposure occurred when Microsoft’s AI team published a bucket of open-source training data using the wrong security configuration setting.

But the bad news doesn’t end there for Microsoft…

Incident #2

In April 2021, the company’s consumer signing system crashed. It’s thought that this crash, which triggered a crash dump containing a signing key that should have been (but was not) redacted, is what gave the threat actor (tracked by Microsoft as Storm-0558) an opportunity to nab the secret. However, Microsoft doesn’t know with certainty whether this was the true cause of the key compromise: “Due to log retention policies, we don’t have logs with specific evidence of this exfiltration by this actor, but this was the most probable mechanism by which the actor acquired the key.”

If this is the case, Elias Groll at Cyberscoop summed up the situation nicely:

“If a crash dump is the garbage generated by a failing computer system, stealing a signing key via a crash dump is like rifling through a garbage can and discovering the key to the family safe.”

While our long-term readers are intimately familiar with this subject, the impact of key compromises might be a new concept for some of our newer readers. So, let’s quickly cover what all of this means.

When private key compromises and exposures occur, any data secured using the associated key pairs is at risk of compromise. The exposed data can be stolen, bartered, sold, or used by nefarious individuals to commit any number of crimes. Depending on the types of keys exposed, they also can be used to impersonate whatever entities are associated with them.

Another issue with poorly managing your keys: when private keys get lost, you can kiss goodbye any data the keys were used to encrypt.

What You Can Do to Avoid Suffering a Similar Fate

When it comes to securely managing secrets such as keys and passwords, the best thing to do is adhere to industry best practices. This includes securely storing your keys using a key management system or key management service. For passwords, you’ll want to adhere to strong password security guidelines.

Securely Store Your Keys

Depending on the use case, there are several types of secure key storage mechanisms:

  • Secure USB token, which can be lost or stolen if not securely stored.
  • Hardware security module (HSM), an on-prem solution that’s expensive to purchase and maintain.
  • Key vault or cloud key storage mechanism, which stores your key data in a FIPS 140-2 level 2 or level 3 compliant device.

Rotate Your Keys

Yes, cryptographic keys need to be rotated. It’s never a good idea to keep using the same keys. A good practice is to rotate them out for new ones periodically. However, the industry seems to lack consensus regarding key rotation cycles. For example, NIST says key rotation should take place every 12 months, whereas Trend Micro recommends approximately every 45 days.
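Whatever cycle you settle on, the age check itself is trivial to automate. Here’s a minimal Python sketch; the 365-day policy constant and key IDs are illustrative:

```python
from datetime import datetime, timezone
from typing import Dict, List, Optional

ROTATION_POLICY_DAYS = 365  # illustrative annual policy; tighten as your policy requires

def keys_due_for_rotation(keys: Dict[str, datetime],
                          now: Optional[datetime] = None) -> List[str]:
    """Return the IDs of keys whose age meets or exceeds the rotation policy."""
    now = now or datetime.now(timezone.utc)
    return [key_id for key_id, created in keys.items()
            if (now - created).days >= ROTATION_POLICY_DAYS]
```

In practice, the creation timestamps would come from your key management system’s metadata, and a scheduled job would run this check and trigger the actual re-keying.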

Restrict Access to Your Keys

While securely storing and rotating your keys is important, that’s not everything you need to do. This is an important reminder to be equally (if not even more) careful when assigning access to your key management system or devices.

Access privileges should be assigned on a highly limited basis. Only give key access to those who need that access to perform their job. That’s it — no one else should have access (no matter how nicely they ask).

PKI Mistake #3: Publishing Your Keys Where Anyone Can Find Them

Some software developers, firmware creators, and network admins have a nasty habit they refuse to quit: hard-coding secrets into projects and systems. This practice, which involves embedding secrets (e.g., cryptographic keys, credentials, and passwords) into your code, scripts, and systems to facilitate authentication, is bad on its own, but it becomes a severe issue when those secrets migrate to your production environment and, eventually, to an open repository like GitHub.

Still not sure why this is such a big issue? Because, as succinctly stated by Thomas Segura at GitGuardian, “hardcoded secrets go where source code goes.” So, any time it’s cloned, leaked, or otherwise shared, then the embedded secrets go with it. And if you don’t realize that the secrets have been exposed, then you’re unaware that they need to be revoked ASAP. 

Welp, that sucks. But just how common is this bad coding practice? According to data from researcher Tom Forbes and GitGuardian, it’s more widespread than we’d like. Of the 3,938 exposed secrets they tracked down, nearly 20% (768) of them were still valid. These exposed secrets include Azure AD API keys, Auth0 keys, SSH credentials, and many other examples of sensitive data that, as a business, you’d never want to see outside your secure environment.

4 Real-World Examples of Organizations Whose Hard-Coded Credentials Were Made Publicly Available

Unfortunately, it seems that not everyone has gotten the memo that hard-coded credentials = bad news. Now, it’s time to take a look at some examples of what happens when employees don’t follow industry advice (or, in some cases, their organizations’ internal policies)…


A major European auto manufacturer found itself in hot water when an employee accidentally posted their authentication token in a public GitHub repository. The authentication token provided “unrestricted” and “unmonitored” access to the company’s GitHub Enterprise Server.

Unfortunately, this exposed private source code repository was filled with a plethora of intellectual property and other sensitive data, including API keys and single sign-on (SSO) passwords. The breach remained undetected for more than three months before it was discovered by researchers at RedHunt Labs.

While that’s a long time to leave any key exposed, it’s nothing compared to the nearly five years that the sensitive customer data of one of its competitors was left exposed in a similar fashion…

Toyota

Yup, the Japanese automaker had to warn customers that their personal info may have been exposed after discovering that one of its access keys was left publicly available on GitHub for almost half a decade.

According to Toyota’s official statement (as translated by Google Translate):

“On September 15, 2022, we confirmed that part of the source code (text that describes computer processing) of the “T-Connect” user site has been published on GitHub (a software development platform). […] As a result, it was discovered that a third party had been able to access part of the source code on GitHub from December 2017 to September 15, 2022. It has been discovered that the released source code contains an access key to the data server, which can be used to access email addresses and customer management numbers stored on the data server. On the same day, we immediately made the source code private on GitHub, and on September 17th we took measures such as changing the data server access key, and no secondary damage has been confirmed.”

The leak was thought to be the result of a third-party dev contractor, hired to develop the “T-Connect” website, uploading part of its source code to GitHub. However, Toyota ultimately took responsibility for the breach.

GitHub

Someone at GitHub (likely either an employee or a contractor) posted the company’s RSA SSH host private key in a public GitHub repository. The good news, however, is that the company reported in a blog post in March 2023 that it acted immediately to contain the risks by replacing the key:

“At approximately 05:00 UTC on March 24, out of an abundance of caution, we replaced our RSA SSH host key used to secure Git operations for GitHub.com. We did this to protect our users from any chance of an adversary impersonating GitHub or eavesdropping on their Git operations over SSH. This key does not grant access to GitHub’s infrastructure or customer data. This change only impacts Git operations over SSH using RSA. Web traffic to GitHub.com and HTTPS Git operations are not affected.”

It went on to provide instructions on how to remove the old key and add the new one as an entry in the ~/.ssh/known_hosts file for individuals who connect to GitHub.com using SSH, whether manually or via an automatic update.

Uber

The major rideshare company made a similar gaffe, accidentally posting a private security key on GitHub that led to the exposure of a database containing confidential and proprietary data. The key remained on GitHub for several months before it was discovered and ultimately removed.

The FTC’s settlement announcement with Uber reports that this led to the AWS-hosted database, containing the names and license information of more than 100,000 Uber drivers, being downloaded by an unknown individual. As it turns out, Uber didn’t take “reasonable, low-cost measures” to prevent the breach, such as:

  • requiring programmers and engineers to use unique access keys to access cloud data (they shared a single, full-access admin key), or
  • implementing multi-factor authentication (MFA) to add another security layer.

Consequences: Why Hard-Coding Credentials Is a Big Deal

Hard-coding credentials into your scripts, APIs, and services is playing a dangerous game for several key reasons:

  1. Credentials and keys are difficult to distinguish from the rest of your source code.
  2. Once they’re embedded, secrets are easy to forget, and it’s hard to track down every occurrence when you need to go back and change them after the fact.
  3. Hard-coding your keys creates an attack surface that bad guys can exploit to bypass authentication security measures that you set up to protect your product.
  4. If your secret-containing source code becomes public, it gives unauthorized access to your sensitive databases, services, and other resources.

What You Can Do to Avoid Suffering the Consequences of Hard Coding Secrets

We’ve already covered some of this in the PKI mistake #2 section, but here are a few additional tips specifically to help you avoid leaking keys inside source code:

  • Follow secure coding and key management best practices. Both include not hard-coding your credentials or cryptographic keys into your scripts and other code. Instead, set up a configuration file that is stored separately from your project’s code repository. (.gitignore file templates are a good way to keep config files out of code repositories.)
  • Implement processes and policies to help avoid accidental commits of secrets by devs in local work environments. This helps keep sensitive commits from making their way into your organization’s central repository, which may be accessible by others.
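To illustrate the first tip, here’s a minimal Python sketch (the MYAPP_API_KEY variable name is hypothetical) that reads a secret from the environment at runtime and fails loudly instead of falling back to a hard-coded default:

```python
import os

def get_secret(name: str) -> str:
    """Read a secret from the environment; there is deliberately no
    hard-coded fallback value that could leak into source control."""
    value = os.environ.get(name)
    if not value:
        raise RuntimeError(
            f"{name} is not set. Provide it via the environment or a "
            "secrets manager -- never embed it in source code."
        )
    return value

# Usage (MYAPP_API_KEY is a hypothetical variable name):
# api_key = get_secret("MYAPP_API_KEY")
```

Pair this with a .gitignore’d config or .env file so the real values never enter the repository in the first place.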

Final Takeaways Regarding These Significant PKI Mistakes

There is one simple truth that should be your big takeaway from this article: your data is only as secure as the methods and processes you use to secure it. There is no substitute for sound PKI management.

Businesses and other organizations must implement and adhere to strict certificate and key management best practices or face the consequences. When your cryptographic keys become exposed or are stolen due to poor key management, any systems or data those keys are used to secure are at risk of compromise. And when you allow your certificates to expire, your apps or associated services or systems will experience downtime, and your customers will suffer as a result.

If you want to avoid your organization becoming another similar headline, then it’s high time you did something about it.

