Breaking Bugs: The Delicate Dance of Vulnerability Disclosure
Picture this: You're enjoying a quiet evening at home, streaming your favourite show. Suddenly, the video glitches, the screen goes blank for a moment, and then everything seems to return to normal. But you, being tech-savvy, decide to investigate. You discover a bug in the streaming platform's code that allows you to access everyone else's accounts. What would you do? Remove all the other user accounts, hoping for a higher bitrate? Steal other users' credit card details, just to get a few months of free streaming? Hopefully neither, because there is a better way!
In this post, we'll explore the significance of vulnerability disclosure, its ethical aspects, and how people and organizations can handle these digital blind spots. Uncovering a vulnerability is just the start of an essential process: vulnerability disclosure.
So, what is vulnerability disclosure? It is the process of finding, reporting, and fixing a vulnerability. All organizations are recommended to implement such processes to mitigate and prevent security incidents; doing so also supports continuous, proactive security testing as part of the organization's vulnerability management strategy. The goal of vulnerability disclosure is to improve overall cybersecurity by identifying and addressing weaknesses before a malicious actor can exploit them.
Some vendors provide a dedicated point of contact for reporting vulnerabilities. Contact details can often be found in a file called "security.txt" (standardized in RFC 9116). This file is usually located under the "/.well-known/" directory, or directly under the web root. A couple of examples are "https://www.svt.se/.well-known/security.txt" and "https://www.amazon.com/security.txt". Even though publishing a security.txt file is recommended, not everyone does it. In those instances, the reporter must find alternative contact channels, such as the vendor's customer service or a general e-mail address.
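As an illustration, a minimal Python sketch for checking the two locations mentioned above and pulling out the Contact entries might look like the following. The helper names and the fallback order are our own illustrative choices, not part of any vendor's tooling:

```python
"""Locate and parse a vendor's security.txt (RFC 9116) -- a sketch."""
from urllib.parse import urljoin
from urllib.request import urlopen

# The standard well-known location, then the legacy root location.
CANDIDATE_PATHS = ["/.well-known/security.txt", "/security.txt"]

def parse_security_txt(text: str) -> dict[str, list[str]]:
    """Collect 'Field: value' lines, ignoring comments and blank lines."""
    fields: dict[str, list[str]] = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#") or ":" not in line:
            continue
        key, _, value = line.partition(":")
        fields.setdefault(key.strip().lower(), []).append(value.strip())
    return fields

def find_security_contacts(base_url: str) -> list[str]:
    """Try both candidate locations and return any Contact entries found."""
    for path in CANDIDATE_PATHS:
        try:
            with urlopen(urljoin(base_url, path), timeout=5) as resp:
                body = resp.read().decode("utf-8", "replace")
                return parse_security_txt(body).get("contact", [])
        except OSError:
            continue  # not published at this location; try the next one
    return []
```

Running `find_security_contacts("https://www.svt.se")` against a vendor that publishes the file would return its Contact entries; an empty list means you need to fall back to customer service or e-mail, as described above.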
A researcher can also go through dedicated organizations to report vulnerabilities, such as a Computer Emergency Response Team (in Sweden, CERT-SE; in the EU, CERT-EU), the Swedish Authority for Privacy Protection (IMY), or the Swedish Civil Contingencies Agency (MSB). These organizations can help researchers mediate with vendors. In some instances, they can also help escalate the issue, e.g., if the vendor is unresponsive or fails to implement fixes in a timely manner.
After discovering and reporting a vulnerability, it is common for the researcher to apply for a CVE, which stands for Common Vulnerabilities and Exposures. A CVE request can be submitted by the individual who discovered the flaw, or jointly with the vendor. The CVE system is a database of publicly known vulnerabilities in systems, services, applications, and so on. It is used to track vulnerabilities tied to specific vendors and product versions, so that people and organizations can verify that they are not running vulnerable components or services.
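To illustrate how this database is consumed in practice, here is a hedged Python sketch that fetches and summarizes a CVE record. It assumes NVD's public CVE API 2.0 and its JSON field names (`cve.id`, `cve.descriptions`); the `summarize` helper is our own hypothetical name:

```python
"""Look up a published CVE entry -- a sketch assuming the NVD CVE API 2.0."""
import json
from urllib.request import urlopen

NVD_API = "https://services.nvd.nist.gov/rest/json/cves/2.0"

def fetch_cve(cve_id: str) -> dict:
    """Query NVD for a single CVE; the response wraps matching records
    in a 'vulnerabilities' list (per the API 2.0 schema)."""
    with urlopen(f"{NVD_API}?cveId={cve_id}", timeout=10) as resp:
        return json.load(resp)

def summarize(record: dict) -> str:
    """Return 'CVE-ID: description' from one vulnerability entry."""
    cve = record["cve"]
    desc = next(d["value"] for d in cve["descriptions"] if d["lang"] == "en")
    return f"{cve['id']}: {desc}"
```

An operations team could run such a lookup against the component versions in its inventory, which is exactly the "are we running something vulnerable?" check the CVE database exists to support.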
There are various ways to go about reporting a vulnerability, some preferable to others. In this blog post we will touch upon three of them: private disclosure, full disclosure, and coordinated disclosure.
Types of Vulnerability Disclosure
Private Vulnerability Disclosure
The first type we will discuss is private vulnerability disclosure. Private vulnerability disclosure refers to the process of reporting and addressing security vulnerabilities in a private and confidential manner between the security researcher or reporter and the vendor responsible for the software, system, or product in question. Unlike public disclosure, where details about vulnerabilities are made available to the broader community, private disclosure is conducted behind the scenes.
This kind of disclosure focuses on confidentiality, direct communication, and coordination. Private disclosure emphasizes keeping information about the vulnerability confidential to allow the vendor to assess and address the issue before potential malicious actors become aware of it.
The reporter directly communicates with the vendor or entity responsible for the product or system containing the vulnerability. This can be done through established communication channels, such as email or a dedicated security contact. There is also often a coordinated effort between the reporter and the vendor to understand the vulnerability, its potential impact, and the steps required for mitigation or patching. A timeline is usually set for when a patch should be available and for publicly disclosing the vulnerability. Vendors often acknowledge and recognize the contribution of the security researcher who responsibly disclosed the vulnerability once the issue has been addressed.
Figure 1: Private vulnerability disclosure process cycle.
Private vulnerability disclosure may also occur through bug bounty programs, where organizations reward security researchers and individuals who responsibly report vulnerabilities. Rewards may include monetary payouts or credit in a CVE, which in turn encourages ethical hacking and responsible disclosure.
A specific branch of private vulnerability disclosure is coordinated vulnerability disclosure, which we will touch upon later in the post.
Full Vulnerability Disclosure
Full disclosure represents the policy of publishing information publicly about a vulnerability as early as possible. There is no standardized way of making vulnerability information available to the public, but researchers commonly use mailing lists, industry conferences or academic papers to spread this type of information. Through this approach, everyone is equally informed about the nature of the threat.
Generally, advocates of the full disclosure approach believe that the benefits of freely available vulnerability research outweigh its potential security risks. Opponents, on the other hand, prefer to limit this kind of distribution, arguing that publishing details before a fix exists is irresponsible.
When vulnerability information is publicly available, it helps both users and administrators to gain a better understanding and to react accordingly to vulnerabilities present in their systems. Furthermore, this information could be used by users to pressure vendors into fixing vulnerabilities before an attacker gets an opportunity to exploit them. This is especially effective when dealing with vulnerabilities that a vendor otherwise would have no incentive to fix.
Figure 2: Full vulnerability disclosure process cycle.
Compared to private disclosure, full disclosure resolves some of the fundamental problems of private vulnerability disclosure: if customers have no knowledge of a vulnerability, they cannot request patches from the vendor, there is no economic incentive for the vendor to resolve the issue, and it is impossible for administrators to make informed decisions about the risks to their systems.
However, there are also downsides to this approach. Once the information is public, bad actors can attempt to exploit the vulnerability during the entire window it takes the vendor to react and fix the problem. Additionally, it may give the vendor a bad reputation. For this reason, full disclosure is commonly seen as a last resort, e.g., when the vendor is unresponsive to a vulnerability report or misses the agreed deadline for a fix.
Coordinated Vulnerability Disclosure
Coordinated vulnerability disclosure is a specific approach to private vulnerability disclosure that emphasizes collaboration, structure, and a responsible timeline for disclosure. It allows vendors to proactively address security issues while ensuring that users are informed in a timely manner. This is the preferred approach, and the one around which organizations should ideally build their disclosure processes.
Coordinated vulnerability disclosure, also known as responsible vulnerability disclosure, is a model in which the researcher and vendor collaborate throughout the entire process, from discovery to disclosing the vulnerability to the public. However, not all vulnerability disclosures start off as coordinated disclosures. Some vendors may want to fix the vulnerability without the involvement of the researcher. In some instances, an agreement can be made where both parties can reap the benefits of a coordinated disclosure.
Discovery, verification, notification, coordination, and disclosure are the common steps in the coordinated vulnerability disclosure process. After the researcher has discovered and verified that the vulnerability poses a security risk, the researcher should notify the vendor. This can be done by the researcher themselves or through a mediator, such as a CERT.
During the coordination phase it is crucial to have frequent communication. This process requires that the individual and the responsible vendor work closely together to understand the vulnerability, verify the findings, and develop a plan for addressing the issue. The two parties should also agree on a disclosure timeline, which marks the point in time when the researcher is allowed to disclose the finding to the public. This could range from a few weeks to several months, depending on the complexity of the vulnerability and the time required for remediation.
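As a rough illustration of such a timeline agreement, the sketch below computes the earliest publication date from the reporting date. The 90-day window and the 14-day grace period are common industry conventions used here as assumptions, not values prescribed by any particular program:

```python
"""Sketch of an agreed disclosure timeline. The 90-day window and the
14-day grace period are illustrative industry conventions, not rules."""
from datetime import date, timedelta

def disclosure_date(reported: date, window_days: int = 90,
                    patch_in_progress: bool = False,
                    grace_days: int = 14) -> date:
    """Earliest date the researcher may publish, per the agreed terms."""
    deadline = reported + timedelta(days=window_days)
    if patch_in_progress:
        # Extend the deadline when a fix is imminent, as many agreed
        # timelines allow, rather than publishing mid-remediation.
        deadline += timedelta(days=grace_days)
    return deadline
```

For a vulnerability reported on 1 January 2024, this yields 31 March 2024 as the default publication date, pushed to 14 April if a patch is actively in progress; real agreements simply replace these constants with whatever the two parties negotiated.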
While initially kept private, the ultimate goal of coordinated vulnerability disclosure is to achieve transparency. Once the vendor has had an opportunity to address the vulnerability, the details are disclosed publicly to inform the user community and the broader public. When the vulnerability is publicly disclosed, the two parties can apply for a CVE jointly, or either party can apply separately, depending on what has been agreed upon.
Figure 3: Coordinated vulnerability disclosure process cycle.
Coordinated vulnerability disclosure not only helps vendors improve their security by patching security issues, but also encourages researchers, and anyone else who stumbles upon a vulnerability, to report the issue rather than exploit it for their own gain.
Another benefit of coordinated vulnerability disclosure is that it normalizes the existence of vulnerabilities in our digital world: they are not something to be covered up, and, if handled correctly, not all bad. A vendor's presence in the CVE database can even signal that the vendor takes security seriously and values it highly.
Cooperation between the researcher and vendor will help prevent future vulnerabilities, enabling the vendor to learn from its previous mistakes. At the same time, the researcher gets their reward, either in the form of money, a CVE, or another incentive that has been agreed upon.
Coordinated vulnerability disclosure aims to balance the need for public awareness with the responsibility to provide vendors with a reasonable amount of time to address and remediate vulnerabilities.
… what happens if the vendor is unresponsive? Or if they do not want to collaborate? This is unfortunately all too common. From the vendor's perspective, a vulnerability report may be perceived as an aggressive move, especially if the vendor lacks a proper process for handling such reports. Hostility toward the discoverer is, however, counterproductive: the discoverer may change their mind and choose the full disclosure route instead. While this pressures the vendor to fix the issues presented, it also enables malicious actors to exploit the vulnerability before the vendor has fixed it. Therefore, this scenario, and full disclosure in general, is not encouraged.
Figure 4: Coordinated vulnerability disclosure process cycle changed to full disclosure.
As mentioned before, CERT and other organizations can aid individuals in reporting a vulnerability; for instance, if the vendor is unresponsive, CERT can help mediate communications. If the vulnerability results in a data breach that violates data protection regulations, the governmental agency IMY can also be contacted. IMY is the Swedish regulatory authority that oversees compliance in the area of data protection. Furthermore, the vendor is responsible for reporting any breach of personally identifiable information (PII) to the agency within 72 hours of becoming aware of it, and is also obligated to notify all affected parties, such as customers and users.
With coordinated disclosure, there are many boxes the vendor needs to check, and a lacking disclosure process can cause more problems than it solves. Communication is key to successful vulnerability disclosure and helps prevent misunderstandings about severity ratings, the remediation process, publication dates, etc. Efforts to address these downsides typically involve ongoing improvements in communication, collaboration, and the development of standardized practices for responsible disclosure.
Bidding Adieu to Bugs
Now, we are not telling you to go out and hack everything you can get your hands on – that is still illegal. But if you are interested, there are legal ways for you to search for vulnerabilities. As mentioned before, you can enter a bug bounty or vulnerability rewards program (VRP). A bug bounty program is a crowdsourced initiative offered by vendors to encourage independent security researchers and ethical hackers to discover and responsibly disclose security vulnerabilities in their software, systems, or products. When entering a bug bounty, it is crucial to be prepared: read the Code of Conduct (CoC) and the vendor's policy, and study the scope, rules of engagement (RoE), and legal terms and conditions. This ensures that you stay within legal bounds when performing tests. It is also important to communicate only through the dedicated contact channels, never through unofficial ones (e.g., social media).
In essence, it acts as a reward system for individuals who find and report bugs, vulnerabilities, or weaknesses in the organization's digital assets. The goal is to improve the overall security of the software or system by identifying and fixing potential issues before malicious actors can exploit them.
HackerOne is a bug bounty platform that facilitates bug bounty programs for vendors. It connects security researchers and ethical hackers with companies looking to identify and resolve security vulnerabilities in their software, websites, and systems. HackerOne acts as an intermediary, providing a platform for the coordination and management of bug bounty programs.
Security is no longer optional; to create a more secure world, we all need to cooperate. In the realm of vulnerabilities, our best defence is a good offence. In the world of cybersecurity, where secrets and codes collide, remember: when it comes to vulnerability disclosure, honesty is our best encryption, and transparency is the ultimate antivirus for a safer digital frontier.
About the author
Elin Wallgren is working as a penetration tester at the Stockholm Knowit office. Christoffer Willander is working as a penetration tester at the Karlskrona Knowit office.