Git is the most popular version control system (VCS) used by developers today. That doesn’t make it the most secure. Whether you’re using GitLab, GitHub, or a locally hosted Git server, there are many security issues that can sneak up on you and start a snowball effect of unpleasant repercussions.
In this post, we’ll review just how secure Git is (or rather isn’t) and demonstrate just how serious Git security issues can be. Then, we’ll list the eight most common Git security issues and what you can do about each of them.
At its core, Git is built for collaboration, not security. As such, it is not secure out of the box, but it can be made secure through the right tools and best practices.
Self-hosting a Git server is a security nightmare. Unless you are a seasoned expert in Git server configuration, you are probably not qualified to maintain a self-hosted Git solution that holds sensitive data. A misconfigured or unpatched Git server presents too many opportunities for attack, and you may very well end up leaving a lot of holes for hackers to exploit.
Even hosted Git services such as GitHub or GitLab offer limited security. Such services provide an easy-to-use interface with enhanced access controls. However, their convenience and ease of use can prove to be a hindrance as well, often leading to human error. This is especially true when code commits are not properly screened by secret detection tools.
With many companies relying on Git for code management, Git has become a popular attack vector for hackers. There are numerous cautionary tales depicting the outcome of badly configured or insecure Git management. These are just the tip of the iceberg:
An employee at the Albert Einstein Hospital in São Paulo accidentally committed a sensitive spreadsheet file to a public GitHub repository. The spreadsheet in question included login credentials for two governmental databases. The first database contained private information on patients with mild COVID-19 symptoms. The second database held full patient hospitalization data.
Overall, the leak exposed personally identifiable medical records of over 16 million Brazilian patients. The list included high-profile patients such as the Brazilian President, his family, seven Ministers, and 17 state Governors.
Automotive giant Nissan’s North America division suffered a massive data breach because of bad password hygiene. The company’s self-hosted Git server was misconfigured to use the default “admin/admin” password. This left the door completely open for hackers to step right in.
The leak was only discovered after the source code behind Nissan’s mobile apps, websites, and internal tools surfaced on hacking forums and Telegram groups, potentially paving the way for future exploits based on vulnerabilities hackers may discover within the pilfered code.
A Swiss software engineer discovered a GitLab instance hosting onboard logic unit source code used in Daimler’s Mercedes Benz vans. The badly configured GitLab instance allowed anyone to register a developer account.
With a developer account in hand, an attacker could easily access over 580 Git repositories. This potentially enabled new and dangerous remote-takeover attacks based on vulnerabilities in the logic unit’s leaked source code.
Nearly 400 private Git repositories were held up for ransom by a hacker looking to cash out on poor security practices. The hacker scanned the internet, searching for websites with exposed Git configuration files that included login credentials.
They could then correlate these credentials with accounts on multiple Git hosting services where users had insecurely reused the same login credentials. The malefactor encrypted the repositories and demanded a ransom in Bitcoin, threatening to make the repositories public if the victims did not pay.
A security researcher from the Netherlands, wondering whether private patient healthcare information was easy to access online, did not take long to discover that healthcare developers were not taking care of their code’s health.
Using fairly simple searches on public GitHub repositories, it took the researcher only 10 minutes to discover hardcoded passwords enabling access to over 150,000 patient records across nine healthcare entities in the United States (AccQData, MaineCare, MedPro Billing, Physician House Calls, Shields Health Care Group, Texas VirMedica, Waystar, Xybion).
It’s all too convenient for a developer to store passwords, tokens, and authentication keys right in the code where such credentials are used. It’s just so tempting to save them where they are most accessible in case an issue arises.
While convenient, hardcoded secrets are possibly the worst security practice currently plaguing software development. No one is perfect. People make mistakes and forget past actions (such as temporarily storing passwords in code). People also often share code, sometimes accidentally and sometimes intentionally as part of a collaboration. In such cases, long-forgotten secrets still embedded in the code can easily leak and even get indexed by online search engines.
There is no technical reason to use hardcoded authentication credentials. To prevent leaks, be sure to train developers to use secure coding practices. At the same time, you should ensure that security tools are integrated into the development process. These can monitor the workflow to prevent secrets from accidentally being committed.
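One practical safeguard is a client-side pre-commit hook that runs a secret scanner before any commit is accepted. Here is a minimal sketch that assumes the open-source gitleaks scanner is installed; command syntax varies between scanner versions, so adapt it to whatever detection tool your team has standardized on.

```
#!/bin/sh
# Illustrative .git/hooks/pre-commit hook: abort the commit if the
# gitleaks scanner (assumed installed) flags a potential secret in
# the staged changes.
if ! gitleaks protect --staged --verbose; then
  echo "Potential secret detected in staged changes; commit aborted." >&2
  exit 1
fi
```

Remember to make the hook executable, and mirror the same scan on the server side or in the CI/CD pipeline, since client-side hooks are easily bypassed.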
When self-hosting a Git server, it is vitally important to secure the “.git” directory. A publicly accessible Git directory can allow malicious actors to clone the repository. Then they can scan it for secrets in the code or within historical records.
Other dangers lie in possible exploits residing in the source code itself. For example, SQL injection attacks are much easier to develop when you can review the source code of the target application. Furthermore, it is not enough to simply deny directory listing access: hackers familiar with Git’s directory structure can bypass directory listing limitations and request sensitive files directly by path.
The only way to validate that a self-hosted Git installation is secure is to try to penetrate it using the same techniques a hacker would. Verify that the “.git” directory and its subdirectories (especially “.git/config”) are not publicly accessible in any way, shape, or form, and that all communication with the repository is performed over HTTPS.
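A quick external check goes a long way. In the sketch below, example.com stands in for your own server; anything other than a 403 or 404 response for these paths is a red flag.

```
# Probe the repository metadata from outside the network (example.com is a placeholder).
curl -s -o /dev/null -w "%{http_code}\n" https://example.com/.git/config
curl -s -o /dev/null -w "%{http_code}\n" https://example.com/.git/HEAD

# If nginx fronts the server, a rule along these lines blocks the whole
# directory (syntax sketch; adapt to your web server):
#   location ~ /\.git { deny all; }
```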
The “.gitignore” file is an important security feature built into Git. This configuration file tells Git which files should, and should not, be included when a developer commits code to a repository.
Unfortunately, some developers are not trained in using .gitignore correctly. For example, a developer may add “.gitignore” to a folder name, assuming Git would ignore the folder. Another example would be a developer dropping an empty “.gitignore” file inside a folder, thinking the entire folder would be ignored.
To properly secure code commits, you must first understand how “.gitignore” works. The best “.gitignore” strategy is one of inclusion rather than exclusion: you do not want to maintain a list of files that must be skipped during a commit, because more files will be added to the project over time, and secrets can leak if the “.gitignore” file is not consistently updated.
By using a “.gitignore” inclusion strategy, only explicitly specified files are committed to the repository. This helps stop secrets from being accidentally exposed due to lax “.gitignore” maintenance.
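Here is what an inclusion-style “.gitignore” can look like; the directory and file names are placeholders for illustration.

```
# Ignore everything by default...
*
# ...but keep directories traversable so the re-include rules below work
!*/
# ...and explicitly re-include only what belongs in the repository
!.gitignore
!README.md
!src/**
!docs/**/*.md
```

With this approach, a newly added build artifact or credentials file stays out of the repository unless someone deliberately writes a rule that includes it.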
When committing code to a Git repository, you can easily see the author committing the code. However, unless the author used a GPG key to cryptographically sign the commit, you simply can’t trust what you’re seeing.
It is fairly trivial for a developer with access to a repository to attribute a commit they made themselves to another developer on the project. A disgruntled employee can use this to inject a backdoor into the code, covering their tracks by assigning ownership of the code to another developer.
Another exploit would be to attribute a commit to a more respected or higher-ranking member of the project’s development team, hoping the new code would be integrated with less oversight.
When code commits are signed, a “verified” icon appears next to the commit log entry. This ensures every member of the project knows the code was committed by the original author and was not tampered with in any way.
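Setting up signed commits is a one-time task per developer; in the sketch below, the key ID and commit message are placeholders.

```
# Tell Git which GPG key to sign with (placeholder key ID)
git config --global user.signingkey ABCDEF1234567890

# Sign every commit by default
git config --global commit.gpgsign true

# Sign a single commit explicitly, then inspect signatures in the log
git commit -S -m "Add input validation"
git log --show-signature -1
```

On the hosting side, GitHub and GitLab can additionally be configured to reject unsigned commits on protected branches.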
An insecure CI/CD pipeline can lead to secrets leaking or being put at risk by various processes, such as pull requests coming from forks of your repository or CI/CD VMs left running unattended.
Securing the pipeline requires keeping secrets tightly contained, with very limited exposure. Beyond training developers to use proper security practices when storing secrets, you should integrate tools or online services into the CI/CD pipeline to provide an additional layer of protection.
Employ tools and online services that secure the build process and store secrets as encrypted data, using “just-in-time” decryption to limit exposure during storage and transport. This additional security layer is even more important when protecting extremely sensitive material such as code-signing certificates.
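As an example of keeping credentials out of the pipeline definition itself, most hosted CI services let you register a secret in their encrypted store and inject it only into the step that needs it. The GitHub Actions sketch below assumes a secret named DEPLOY_TOKEN has already been created in the repository settings and that deploy.sh is your own deployment script.

```
# Illustrative GitHub Actions step; DEPLOY_TOKEN is an assumed repository secret.
- name: Deploy
  env:
    DEPLOY_TOKEN: ${{ secrets.DEPLOY_TOKEN }}  # decrypted only for this step
  run: ./scripts/deploy.sh
```

The platform stores the value encrypted at rest and masks it in build logs, so it never appears in the repository or the pipeline definition.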
While known vulnerabilities in Git are usually resolved quickly by the Git development team, when self-hosting Git, it is up to the site administrator to patch Git with the latest security updates as soon as they are released.
It is quite easy to locate Git servers through a web search, and unpatched servers are an easy mark for any hacker. To emphasize the dangers of an unpatched server, you only need to look at CVE-2017-14867. The vulnerability allowed attackers with Git-shell access to execute OS-level commands on unpatched systems – a recipe for a complete system takeover.
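On a self-hosted server, checking and applying updates is routine but easy to neglect. The commands below assume a Debian or Ubuntu host; adjust the package manager for other distributions.

```
# Compare the installed version against the latest Git release notes
git --version

# Apply any pending Git updates (Debian/Ubuntu example)
sudo apt-get update && sudo apt-get install --only-upgrade git
```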
Git is often used in combination with other tools and services to automate, secure, and provide analytics throughout the CI/CD pipeline. These days, hackers do not limit themselves to directly attacking a target. It is often easier, and even more lucrative, to perform a supply-chain attack on tools or services in order to compromise the many entities that employ them.
To limit exposure to supply chain attacks, it is vitally important to apply tool-chain security patches as soon as they are released. You should also limit online service access to the minimum required for reliable operations, and of course, perform regular backups.
As shown by the Mercedes example, badly configured permissions can provide an access point to every Git repository on a server. In the Mercedes case, the server automatically granted full access to anyone who simply signed up for a developer account. In other cases, subtler access-permission configuration errors may let people access data they are not authorized to see.
When setting up access permissions, you must define access roles on a per-repository basis to ensure only developers with valid access credentials are allowed to interact with the repository.
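On a self-hosted server, this can be as simple as an explicit per-repository access list. The sketch below uses the syntax of gitolite, a popular access-control layer for self-hosted Git; the repository, user, and group names are placeholders.

```
# Illustrative gitolite.conf entries (all names are placeholders)
@tools-team   =   dave eve          # named group of developers

repo payments-api
    RW+   =   alice                 # lead developer: read/write, may rewrite history
    R     =   audit-bot             # read-only service account

repo internal-tools
    RW    =   @tools-team           # the whole group gets read/write access
```

Hosted services such as GitHub and GitLab expose the same idea through per-repository roles and protected branches.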
Do not take Git security lightly. You have a lot to lose when your source code or intellectual property is compromised. To use Git in a production environment, code must remain secure, secrets must never leak, and security practices and policies must be consistently enforced.