Vulnerabilities: 13 (via 36 paths)
Dependencies: 367
Source: GitHub
Commit: f53a3e4a


Severity: 6 high, 7 medium
Status: 13 open, 0 patched, 0 ignored

high severity

Prototype Pollution

  • Vulnerable module: uplot
  • Introduced through: @grafana/runtime@7.4.3

Detailed paths

  • Introduced through: k6-cloud-grafana-datasource@grafana/k6-cloud-grafana-datasource#f53a3e4af785159ef6fc8f9338dd25e380da17dc @grafana/runtime@7.4.3 @grafana/ui@7.4.3 uplot@1.6.4
    Remediation: Upgrade to @grafana/runtime@11.2.3.

Overview

uplot is a small, fast chart library for time series, lines, areas, OHLC and bars.

Affected versions of this package are vulnerable to Prototype Pollution via the uplot.assign function, due to a missing check for whether an attribute resolves to the object prototype.

PoC

var uplot = require("uplot")

var BAD_JSON = JSON.parse('{"__proto__":{"polluted":true}}');

console.log('before prototype pollution: polluted:', {}.polluted)   // undefined
uplot.assign({}, BAD_JSON)
console.log('after prototype pollution: polluted:', {}.polluted)    // true

Details

Prototype Pollution is a vulnerability affecting JavaScript. It refers to the ability to inject properties into existing JavaScript language construct prototypes, such as objects. JavaScript allows all Object attributes to be altered, including magic attributes such as __proto__, constructor and prototype. An attacker manipulates these attributes to overwrite, or pollute, the prototype of the base object by injecting other values. Properties on Object.prototype are then inherited by all JavaScript objects through the prototype chain. When that happens, it leads either to denial of service by triggering JavaScript exceptions, or to tampering with the application source code to force the code path the attacker injects, thereby leading to remote code execution.

There are two main ways in which the pollution of prototypes occurs:

  • Unsafe Object recursive merge

  • Property definition by path

Unsafe Object recursive merge

The logic of a vulnerable recursive merge function follows the following high-level model:

merge (target, source)
  foreach property of source
    if property exists and is an object on both the target and the source
      merge(target[property], source[property])
    else
      target[property] = source[property]

When the source object contains a property named __proto__ defined with Object.defineProperty(), the condition that checks whether the property exists and is an object on both the target and the source passes, and the merge recurses with Object.prototype as the target and the attacker-defined object as the source. Properties are then copied onto the Object prototype.

Clone operations are a special sub-class of unsafe recursive merges, which occur when a recursive merge is conducted on an empty object: merge({},source).

lodash and Hoek are examples of libraries susceptible to recursive merge attacks.
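A minimal sketch of such a vulnerable merge is shown below; it is illustrative only, and the function name unsafeMerge is made up for this example rather than taken from any particular library.

// Illustrative sketch of an unsafe recursive merge (hypothetical helper,
// not the implementation of any real library).
function unsafeMerge(target, source) {
  for (const key of Object.keys(source)) {
    if (typeof target[key] === 'object' && typeof source[key] === 'object') {
      unsafeMerge(target[key], source[key]);
    } else {
      target[key] = source[key];
    }
  }
  return target;
}

// JSON.parse creates "__proto__" as an own property, so the merge recurses
// into Object.prototype (target["__proto__"]) and copies "polluted" onto it.
unsafeMerge({}, JSON.parse('{"__proto__": {"polluted": true}}'));
console.log({}.polluted);          // true
delete Object.prototype.polluted;  // clean up after the demo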

Property definition by path

There are a few JavaScript libraries that use an API to define property values on an object based on a given path. The function that is generally affected contains this signature: theFunction(object, path, value)

If the attacker can control the value of “path”, they can set this value to __proto__.myValue. myValue is then assigned to the prototype of the class of the object.
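A minimal sketch of such a path-based setter follows; setByPath is a hypothetical helper written for this example, not a specific library's API.

// Illustrative sketch of a vulnerable "set value by path" helper.
function setByPath(object, path, value) {
  const keys = path.split('.');
  let current = object;
  for (let i = 0; i < keys.length - 1; i++) {
    if (typeof current[keys[i]] !== 'object') {
      current[keys[i]] = {};
    }
    current = current[keys[i]];   // "__proto__" walks up to Object.prototype
  }
  current[keys[keys.length - 1]] = value;
}

setByPath({}, '__proto__.myValue', 'polluted');
console.log({}.myValue);           // "polluted"
delete Object.prototype.myValue;   // clean up after the demo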

Types of attacks

There are a few methods by which Prototype Pollution can be manipulated:

  • Denial of service (DoS) (origin: Client). This is the most likely attack. DoS occurs when Object holds generic functions that are implicitly called for various operations (for example, toString and valueOf). The attacker pollutes Object.prototype.someattr and alters its state to an unexpected value such as Int or Object. In this case, the code fails and is likely to cause a denial of service. For example: if an attacker pollutes Object.prototype.toString by defining it as an integer, then any code that relies on someobject.toString() fails.

  • Remote Code Execution (origin: Client). Remote code execution is generally only possible in cases where the codebase evaluates a specific attribute of an object and then executes that evaluation. For example: eval(someobject.someattr). In this case, if the attacker pollutes Object.prototype.someattr, they are likely to be able to leverage this in order to execute code.

  • Property Injection (origin: Client). The attacker pollutes properties that the codebase relies on for their informative value, including security properties such as cookies or tokens. For example: if a codebase checks privileges via someuser.isAdmin, then when the attacker pollutes Object.prototype.isAdmin and sets it to true, they achieve admin privileges (see the sketch after this list).
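To make the Property Injection item concrete, the snippet below performs the pollution directly and shows what a privilege check sees afterwards:

const user = { name: 'guest' };
console.log(user.isAdmin);         // undefined: the user object has no such property

// What a successful pollution gadget achieves, performed directly for the demo:
Object.prototype.isAdmin = true;

console.log(user.isAdmin);         // true: inherited via the prototype chain,
                                   // so a check like `if (someuser.isAdmin)` now passes
delete Object.prototype.isAdmin;   // clean up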

Affected environments

The following environments are susceptible to a Prototype Pollution attack:

  • Application server

  • Web server

  • Web browser

How to prevent

  1. Freeze the prototype: use Object.freeze(Object.prototype). (See the sketch after this list.)

  2. Require schema validation of JSON input.

  3. Avoid using unsafe recursive merge functions.

  4. Consider using objects without prototypes (for example, Object.create(null)), breaking the prototype chain and preventing pollution.

  5. As a best practice use Map instead of Object.
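A short sketch of items 1 and 4 from the list above, assuming plain Node.js with no framework; note that freezing Object.prototype can break third-party code that legitimately extends it, so test before adopting it.

// 1. Freeze the prototype: later attempts to add properties fail silently
//    (or throw in strict mode).
Object.freeze(Object.prototype);

// 4. Use a prototype-less object as the merge/lookup target, so there is no
//    prototype chain to pollute.
const attackerJson = JSON.parse('{"__proto__": {"polluted": true}}');
const target = Object.create(null);
Object.assign(target, attackerJson);  // "__proto__" becomes a harmless own key

console.log({}.polluted);             // undefined: Object.prototype is untouched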

For more information on this vulnerability type:

Arteau, Olivier. “JavaScript prototype pollution attack in NodeJS application.” GitHub, 26 May 2018

Remediation

Upgrade uplot to version 1.6.31 or higher.

References

high severity

Regular Expression Denial of Service (ReDoS)

  • Vulnerable module: marked
  • Introduced through: @grafana/runtime@7.4.3

Detailed paths

  • Introduced through: k6-cloud-grafana-datasource@grafana/k6-cloud-grafana-datasource#f53a3e4af785159ef6fc8f9338dd25e380da17dc @grafana/runtime@7.4.3 @grafana/data@7.4.3 marked@1.2.2
    Remediation: Upgrade to @grafana/runtime@7.5.0.
  • Introduced through: k6-cloud-grafana-datasource@grafana/k6-cloud-grafana-datasource#f53a3e4af785159ef6fc8f9338dd25e380da17dc @grafana/runtime@7.4.3 @grafana/ui@7.4.3 @grafana/data@7.4.3 marked@1.2.2
    Remediation: Upgrade to @grafana/runtime@7.5.0.

Overview

marked is a low-level compiler for parsing markdown without caching or blocking for long periods of time.

Affected versions of this package are vulnerable to Regular Expression Denial of Service (ReDoS): three or more groups of odd- and even-numbered consecutive underscores (___) followed by a character cause extended processing.

Details

Denial of Service (DoS) describes a family of attacks, all aimed at making a system inaccessible to its original and legitimate users. There are many types of DoS attacks, ranging from trying to clog the network pipes to the system by generating a large volume of traffic from many machines (a Distributed Denial of Service - DDoS - attack) to sending crafted requests that cause a system to crash or take a disproportional amount of time to process.

The Regular expression Denial of Service (ReDoS) is a type of Denial of Service attack. Regular expressions are incredibly powerful, but they aren't very intuitive and can ultimately end up making it easy for attackers to take your site down.

Let’s take the following regular expression as an example:

regex = /A(B|C+)+D/

This regular expression accomplishes the following:

  • A The string must start with the letter 'A'
  • (B|C+)+ The string must then follow the letter A with either the letter 'B' or some number of occurrences of the letter 'C' (the + matches one or more times). The + at the end of this section states that we can look for one or more matches of this section.
  • D Finally, we ensure this section of the string ends with a 'D'

The expression would match inputs such as ABBD, ABCCCCD, ABCBCCCD and ACCCCCD

In most cases, it doesn't take very long for a regex engine to find a match:

$ time node -e '/A(B|C+)+D/.test("ACCCCCCCCCCCCCCCCCCCCCCCCCCCCD")'
0.04s user 0.01s system 95% cpu 0.052 total

$ time node -e '/A(B|C+)+D/.test("ACCCCCCCCCCCCCCCCCCCCCCCCCCCCX")'
1.79s user 0.02s system 99% cpu 1.812 total

The entire process of testing it against a 30-character string takes around 52ms. But when given an invalid string, it takes nearly two seconds to complete the test, over ten times as long as it took to test a valid string. The dramatic difference is due to the way regular expressions get evaluated.

Most Regex engines will work very similarly (with minor differences). The engine will match the first possible way to accept the current character and proceed to the next one. If it then fails to match the next one, it will backtrack and see if there was another way to digest the previous character. If it goes too far down the rabbit hole only to find out the string doesn’t match in the end, and if many characters have multiple valid regex paths, the number of backtracking steps can become very large, resulting in what is known as catastrophic backtracking.

Let's look at how our expression runs into this problem, using a shorter string: "ACCCX". While it seems fairly straightforward, there are still four different ways that the engine could match those three C's:

  1. CCC
  2. CC+C
  3. C+CC
  4. C+C+C.

The engine has to try each of those combinations to see if any of them potentially match against the expression. When you combine that with the other steps the engine must take, we can use RegEx 101 debugger to see the engine has to take a total of 38 steps before it can determine the string doesn't match.

From there, the number of steps the engine must use to validate a string just continues to grow.

String           | Number of C's | Number of steps
ACCCX            | 3             | 38
ACCCCX           | 4             | 71
ACCCCCX          | 5             | 136
ACCCCCCCCCCCCCCX | 14            | 65,553

By the time the string includes 14 C's, the engine has to take over 65,000 steps just to see if the string is valid. These extreme situations cause the engine to work very slowly (the time grows exponentially with input size, as shown above), allowing an attacker to exploit this to make the service consume excessive CPU, resulting in a Denial of Service.
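The growth is easy to reproduce locally; the short script below (plain Node.js, no dependencies) times the same pattern against progressively longer non-matching inputs, with exact numbers varying by machine:

// Times /A(B|C+)+D/ against non-matching inputs of increasing length.
const evil = /A(B|C+)+D/;

for (let n = 20; n <= 28; n++) {
  const input = 'A' + 'C'.repeat(n) + 'X';   // trailing 'X' guarantees a failed match
  const start = Date.now();
  evil.test(input);
  console.log('n=' + n + ': ' + (Date.now() - start) + ' ms');
}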

Remediation

Upgrade marked to version 2.0.0 or higher.

References

high severity

Directory Traversal

  • Vulnerable module: moment
  • Introduced through: @grafana/runtime@7.4.3

Detailed paths

  • Introduced through: k6-cloud-grafana-datasource@grafana/k6-cloud-grafana-datasource#f53a3e4af785159ef6fc8f9338dd25e380da17dc @grafana/runtime@7.4.3 @grafana/ui@7.4.3 moment@2.24.0
    Remediation: Upgrade to @grafana/runtime@8.4.7.

Overview

moment is a lightweight JavaScript date library for parsing, validating, manipulating, and formatting dates.

Affected versions of this package are vulnerable to Directory Traversal when a user provides a locale string that is used directly to switch the moment locale.
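The snippet below is a simplified sketch of the general pattern behind this class of issue, not moment's actual implementation: a user-supplied locale name ends up in a filesystem path, and a traversal sequence escapes the intended directory. The directory name and the validation regex are illustrative assumptions.

const path = require('path');

const LOCALE_DIR = '/srv/app/locale';        // hypothetical locale directory

function resolveLocaleUnsafe(name) {
  // user input flows straight into the path
  return path.join(LOCALE_DIR, name + '.js');
}

function resolveLocaleSafer(name) {
  // accept only simple locale identifiers such as "en-gb" or "pt_BR"
  if (!/^[a-z]{2,3}([-_][a-z0-9]{2,8})*$/i.test(name)) {
    throw new Error('invalid locale: ' + name);
  }
  return resolveLocaleUnsafe(name);
}

console.log(resolveLocaleUnsafe('../../../../etc/passwd'));
// -> /etc/passwd.js (escapes the locale directory entirely)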

Details

A Directory Traversal attack (also known as path traversal) aims to access files and directories that are stored outside the intended folder. By manipulating file paths with "dot-dot-slash (../)" sequences and their variations, or by using absolute file paths, it may be possible to access arbitrary files and directories stored on the file system, including application source code, configuration, and other critical system files.

Directory Traversal vulnerabilities can be generally divided into two types:

  • Information Disclosure: Allows the attacker to gain information about the folder structure or read the contents of sensitive files on the system.

st is a module for serving static files on web pages, and contains a vulnerability of this type. In our example, we will serve files from the public route.

If an attacker requests the following URL from our server, it will in turn leak the sensitive private key of the root user.

curl http://localhost:8080/public/%2e%2e/%2e%2e/%2e%2e/%2e%2e/%2e%2e/root/.ssh/id_rsa

Note %2e is the URL encoded version of . (dot).

  • Writing arbitrary files: Allows the attacker to create or replace existing files. This type of vulnerability is also known as Zip-Slip.

One way to achieve this is by using a malicious zip archive that holds path traversal filenames. When each filename in the zip archive gets concatenated to the target extraction folder, without validation, the final path ends up outside of the target folder. If an executable or a configuration file is overwritten with a file containing malicious code, the problem can turn into an arbitrary code execution issue quite easily.

The following is an example of a zip archive with one benign file and one malicious file. Extracting the malicious file will result in traversing out of the target folder, ending up in /root/.ssh/ overwriting the authorized_keys file:

2018-04-15 22:04:29 .....           19           19  good.txt
2018-04-15 22:04:42 .....           20           20  ../../../../../../root/.ssh/authorized_keys

Remediation

Upgrade moment to version 2.29.2 or higher.

References

high severity

Regular Expression Denial of Service (ReDoS)

  • Vulnerable module: moment
  • Introduced through: @grafana/runtime@7.4.3

Detailed paths

  • Introduced through: k6-cloud-grafana-datasource@grafana/k6-cloud-grafana-datasource#f53a3e4af785159ef6fc8f9338dd25e380da17dc @grafana/runtime@7.4.3 @grafana/ui@7.4.3 moment@2.24.0
    Remediation: Upgrade to @grafana/runtime@9.0.6.

Overview

moment is a lightweight JavaScript date library for parsing, validating, manipulating, and formatting dates.

Affected versions of this package are vulnerable to Regular Expression Denial of Service (ReDoS) via the preprocessRFC2822() function in from-string.js, when processing a very long crafted string (over 10k characters).

PoC:

moment("(".repeat(500000))

Details

Denial of Service (DoS) describes a family of attacks, all aimed at making a system inaccessible to its original and legitimate users. There are many types of DoS attacks, ranging from trying to clog the network pipes to the system by generating a large volume of traffic from many machines (a Distributed Denial of Service - DDoS - attack) to sending crafted requests that cause a system to crash or take a disproportional amount of time to process.

The Regular expression Denial of Service (ReDoS) is a type of Denial of Service attack. Regular expressions are incredibly powerful, but they aren't very intuitive and can ultimately end up making it easy for attackers to take your site down.

Let’s take the following regular expression as an example:

regex = /A(B|C+)+D/

This regular expression accomplishes the following:

  • A The string must start with the letter 'A'
  • (B|C+)+ The string must then follow the letter A with either the letter 'B' or some number of occurrences of the letter 'C' (the + matches one or more times). The + at the end of this section states that we can look for one or more matches of this section.
  • D Finally, we ensure this section of the string ends with a 'D'

The expression would match inputs such as ABBD, ABCCCCD, ABCBCCCD and ACCCCCD

In most cases, it doesn't take very long for a regex engine to find a match:

$ time node -e '/A(B|C+)+D/.test("ACCCCCCCCCCCCCCCCCCCCCCCCCCCCD")'
0.04s user 0.01s system 95% cpu 0.052 total

$ time node -e '/A(B|C+)+D/.test("ACCCCCCCCCCCCCCCCCCCCCCCCCCCCX")'
1.79s user 0.02s system 99% cpu 1.812 total

The entire process of testing it against a 30-character string takes around 52ms. But when given an invalid string, it takes nearly two seconds to complete the test, over ten times as long as it took to test a valid string. The dramatic difference is due to the way regular expressions get evaluated.

Most Regex engines will work very similarly (with minor differences). The engine will match the first possible way to accept the current character and proceed to the next one. If it then fails to match the next one, it will backtrack and see if there was another way to digest the previous character. If it goes too far down the rabbit hole only to find out the string doesn’t match in the end, and if many characters have multiple valid regex paths, the number of backtracking steps can become very large, resulting in what is known as catastrophic backtracking.

Let's look at how our expression runs into this problem, using a shorter string: "ACCCX". While it seems fairly straightforward, there are still four different ways that the engine could match those three C's:

  1. CCC
  2. CC+C
  3. C+CC
  4. C+C+C.

The engine has to try each of those combinations to see if any of them potentially match against the expression. When you combine that with the other steps the engine must take, we can use RegEx 101 debugger to see the engine has to take a total of 38 steps before it can determine the string doesn't match.

From there, the number of steps the engine must use to validate a string just continues to grow.

String           | Number of C's | Number of steps
ACCCX            | 3             | 38
ACCCCX           | 4             | 71
ACCCCCX          | 5             | 136
ACCCCCCCCCCCCCCX | 14            | 65,553

By the time the string includes 14 C's, the engine has to take over 65,000 steps just to see if the string is valid. These extreme situations cause the engine to work very slowly (the time grows exponentially with input size, as shown above), allowing an attacker to exploit this to make the service consume excessive CPU, resulting in a Denial of Service.

Remediation

Upgrade moment to version 2.29.4 or higher.

References

high severity

Regular Expression Denial of Service (ReDoS)

  • Vulnerable module: xss
  • Introduced through: @grafana/runtime@7.4.3

Detailed paths

  • Introduced through: k6-cloud-grafana-datasource@grafana/k6-cloud-grafana-datasource#f53a3e4af785159ef6fc8f9338dd25e380da17dc @grafana/runtime@7.4.3 @grafana/data@7.4.3 xss@1.0.6
    Remediation: Upgrade to @grafana/runtime@8.4.0.
  • Introduced through: k6-cloud-grafana-datasource@grafana/k6-cloud-grafana-datasource#f53a3e4af785159ef6fc8f9338dd25e380da17dc @grafana/runtime@7.4.3 @grafana/ui@7.4.3 @grafana/data@7.4.3 xss@1.0.6
    Remediation: Upgrade to @grafana/runtime@8.4.0.

Overview

xss is a package that sanitizes untrusted HTML (to prevent XSS) with a configuration specified by a Whitelist.

Affected versions of this package are vulnerable to Regular Expression Denial of Service (ReDoS) via the stripCommentTag function in lib/default.js.

Details

Denial of Service (DoS) describes a family of attacks, all aimed at making a system inaccessible to its original and legitimate users. There are many types of DoS attacks, ranging from trying to clog the network pipes to the system by generating a large volume of traffic from many machines (a Distributed Denial of Service - DDoS - attack) to sending crafted requests that cause a system to crash or take a disproportional amount of time to process.

The Regular expression Denial of Service (ReDoS) is a type of Denial of Service attack. Regular expressions are incredibly powerful, but they aren't very intuitive and can ultimately end up making it easy for attackers to take your site down.

Let’s take the following regular expression as an example:

regex = /A(B|C+)+D/

This regular expression accomplishes the following:

  • A The string must start with the letter 'A'
  • (B|C+)+ The string must then follow the letter A with either the letter 'B' or some number of occurrences of the letter 'C' (the + matches one or more times). The + at the end of this section states that we can look for one or more matches of this section.
  • D Finally, we ensure this section of the string ends with a 'D'

The expression would match inputs such as ABBD, ABCCCCD, ABCBCCCD and ACCCCCD

In most cases, it doesn't take very long for a regex engine to find a match:

$ time node -e '/A(B|C+)+D/.test("ACCCCCCCCCCCCCCCCCCCCCCCCCCCCD")'
0.04s user 0.01s system 95% cpu 0.052 total

$ time node -e '/A(B|C+)+D/.test("ACCCCCCCCCCCCCCCCCCCCCCCCCCCCX")'
1.79s user 0.02s system 99% cpu 1.812 total

The entire process of testing it against a 30-character string takes around 52ms. But when given an invalid string, it takes nearly two seconds to complete the test, over ten times as long as it took to test a valid string. The dramatic difference is due to the way regular expressions get evaluated.

Most Regex engines will work very similarly (with minor differences). The engine will match the first possible way to accept the current character and proceed to the next one. If it then fails to match the next one, it will backtrack and see if there was another way to digest the previous character. If it goes too far down the rabbit hole only to find out the string doesn’t match in the end, and if many characters have multiple valid regex paths, the number of backtracking steps can become very large, resulting in what is known as catastrophic backtracking.

Let's look at how our expression runs into this problem, using a shorter string: "ACCCX". While it seems fairly straightforward, there are still four different ways that the engine could match those three C's:

  1. CCC
  2. CC+C
  3. C+CC
  4. C+C+C.

The engine has to try each of those combinations to see if any of them potentially match against the expression. When you combine that with the other steps the engine must take, we can use RegEx 101 debugger to see the engine has to take a total of 38 steps before it can determine the string doesn't match.

From there, the number of steps the engine must use to validate a string just continues to grow.

String           | Number of C's | Number of steps
ACCCX            | 3             | 38
ACCCCX           | 4             | 71
ACCCCCX          | 5             | 136
ACCCCCCCCCCCCCCX | 14            | 65,553

By the time the string includes 14 C's, the engine has to take over 65,000 steps just to see if the string is valid. These extreme situations cause the engine to work very slowly (the time grows exponentially with input size, as shown above), allowing an attacker to exploit this to make the service consume excessive CPU, resulting in a Denial of Service.

Remediation

Upgrade xss to version 1.0.10 or higher.

References

high severity

Code Injection

  • Vulnerable module: lodash
  • Introduced through: @grafana/runtime@7.4.3

Detailed paths

  • Introduced through: k6-cloud-grafana-datasource@grafana/k6-cloud-grafana-datasource#f53a3e4af785159ef6fc8f9338dd25e380da17dc @grafana/runtime@7.4.3 @grafana/data@7.4.3 lodash@4.17.20
    Remediation: Upgrade to @grafana/runtime@7.5.0.
  • Introduced through: k6-cloud-grafana-datasource@grafana/k6-cloud-grafana-datasource#f53a3e4af785159ef6fc8f9338dd25e380da17dc @grafana/runtime@7.4.3 @grafana/ui@7.4.3 lodash@4.17.20
    Remediation: Upgrade to @grafana/runtime@7.5.0.
  • Introduced through: k6-cloud-grafana-datasource@grafana/k6-cloud-grafana-datasource#f53a3e4af785159ef6fc8f9338dd25e380da17dc @grafana/runtime@7.4.3 @grafana/ui@7.4.3 @grafana/data@7.4.3 lodash@4.17.20
    Remediation: Upgrade to @grafana/runtime@7.5.0.

Overview

lodash is a modern JavaScript utility library delivering modularity, performance, & extras.

Affected versions of this package are vulnerable to Code Injection via the template function: the variable option is not sanitized before being injected into the source of the compiled template function, as the PoC below demonstrates.

PoC

var _ = require('lodash');

_.template('', { variable: '){console.log(process.env)}; with(obj' })()

Remediation

Upgrade lodash to version 4.17.21 or higher.

References

medium severity

Cross-site Scripting (XSS)

  • Vulnerable module: @grafana/data
  • Introduced through: @grafana/runtime@7.4.3

Detailed paths

  • Introduced through: k6-cloud-grafana-datasource@grafana/k6-cloud-grafana-datasource#f53a3e4af785159ef6fc8f9338dd25e380da17dc @grafana/runtime@7.4.3 @grafana/data@7.4.3
    Remediation: Upgrade to @grafana/runtime@8.2.3.
  • Introduced through: k6-cloud-grafana-datasource@grafana/k6-cloud-grafana-datasource#f53a3e4af785159ef6fc8f9338dd25e380da17dc @grafana/runtime@7.4.3 @grafana/ui@7.4.3 @grafana/data@7.4.3
    Remediation: Upgrade to @grafana/runtime@8.2.3.

Overview

@grafana/data is the package that holds the root data types and functions used within Grafana.

Affected versions of this package are vulnerable to Cross-site Scripting (XSS) when an attacker convinces a victim to visit a URL referencing a vulnerable page. The URL is not validated, and the AngularJS rendering engine will execute the JavaScript expression contained in the URL.

Details

A cross-site scripting attack occurs when the attacker tricks a legitimate web-based application or site into accepting a request as originating from a trusted source.

This is done by escaping the context of the web application; the web application then delivers that data to its users along with other trusted dynamic content, without validating it. The browser unknowingly executes malicious script on the client side (through client-side languages; usually JavaScript or HTML) in order to perform actions that are otherwise typically blocked by the browser’s Same Origin Policy.

Injecting malicious code is the most prevalent manner by which XSS is exploited; for this reason, escaping characters in order to prevent this manipulation is the top method for securing code against this vulnerability.

Escaping means that the application is coded to mark key characters, and particularly key characters included in user input, to prevent those characters from being interpreted in a dangerous context. For example, in HTML, < can be coded as &lt; and > can be coded as &gt; in order to be interpreted and displayed as themselves in text, while within the code itself, they are used for HTML tags. If malicious content is injected into an application that escapes special characters and that malicious content uses < and > as HTML tags, those characters are nonetheless not interpreted as HTML tags by the browser if they’ve been correctly escaped in the application code and in this way the attempted attack is diverted.
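As a concrete illustration of the escaping described above, a minimal helper might look like the following; this is a sketch only, and production code should rely on a well-tested encoding library or the templating engine's built-in escaping.

// Map the characters that are dangerous in an HTML text context to entities.
function escapeHtml(input) {
  return String(input)
    .replace(/&/g, '&amp;')
    .replace(/</g, '&lt;')
    .replace(/>/g, '&gt;')
    .replace(/"/g, '&quot;')
    .replace(/'/g, '&#39;');
}

console.log(escapeHtml('<img src=x onerror=alert(1)>'));
// -> &lt;img src=x onerror=alert(1)&gt;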

The most prominent use of XSS is to steal cookies (source: OWASP HttpOnly) and hijack user sessions, but XSS exploits have been used to expose sensitive information, enable access to privileged services and functionality and deliver malware.

Types of attacks

There are a few methods by which XSS can be manipulated:

  • Stored (origin: Server): The malicious code is inserted in the application (usually as a link) by the attacker. The code is activated every time a user clicks the link.
  • Reflected (origin: Server): The attacker delivers a malicious link externally from the vulnerable web site application to a user. When clicked, malicious code is sent to the vulnerable web site, which reflects the attack back to the user’s browser.
  • DOM-based (origin: Client): The attacker forces the user’s browser to render a malicious page. The data in the page itself delivers the cross-site scripting data.
  • Mutated: The attacker injects code that appears safe, but is then rewritten and modified by the browser, while parsing the markup. An example is rebalancing unclosed quotation marks or even adding quotation marks to unquoted parameters.

Affected environments

The following environments are susceptible to an XSS attack:

  • Web servers
  • Application servers
  • Web application environments

How to prevent

This section describes the top best practices designed to specifically protect your code:

  • Sanitize data input in an HTTP request before reflecting it back, ensuring all data is validated, filtered or escaped before echoing anything back to the user, such as the values of query parameters during searches.
  • Convert special characters such as ?, &, /, <, > and spaces to their respective HTML or URL encoded equivalents.
  • Give users the option to disable client-side scripts.
  • Redirect invalid requests.
  • Detect simultaneous logins, including those from two separate IP addresses, and invalidate those sessions.
  • Use and enforce a Content Security Policy (source: Wikipedia) to disable any features that might be manipulated for an XSS attack.
  • Read the documentation for any of the libraries referenced in your code to understand which elements allow for embedded HTML.

Remediation

Upgrade @grafana/data to version 8.2.3 or higher.

References

medium severity

Cross-site Scripting (XSS)

  • Vulnerable module: @braintree/sanitize-url
  • Introduced through: @grafana/runtime@7.4.3

Detailed paths

  • Introduced through: k6-cloud-grafana-datasource@grafana/k6-cloud-grafana-datasource#f53a3e4af785159ef6fc8f9338dd25e380da17dc @grafana/runtime@7.4.3 @grafana/data@7.4.3 @braintree/sanitize-url@4.0.0
    Remediation: Upgrade to @grafana/runtime@8.4.7.
  • Introduced through: k6-cloud-grafana-datasource@grafana/k6-cloud-grafana-datasource#f53a3e4af785159ef6fc8f9338dd25e380da17dc @grafana/runtime@7.4.3 @grafana/ui@7.4.3 @grafana/data@7.4.3 @braintree/sanitize-url@4.0.0
    Remediation: Upgrade to @grafana/runtime@8.4.7.

Overview

@braintree/sanitize-url is a URL sanitizer.

Affected versions of this package are vulnerable to Cross-site Scripting (XSS) due to improper sanitization in the sanitizeUrl function.

PoC:

const sanitizeUrl = require("@braintree/sanitize-url").sanitizeUrl

for (const vector of [
  "&#0000106&#0000097&#0000118&#0000097&#0000115&#0000099&#0000114&#0000105&#0000112&#0000116&#0000058&#0000097&#0000108&#0000101&#0000114&#0000116&#0000040&#0000039&#0000088&#0000083&#0000083&#0000039&#0000041",
  "javascript:alert('XSS')",
  "&#0000106&#0000097&#0000118&#0000097&#0000115&#0000099&#0000114&#0000105&#0000112&#0000116&#0000058&#0000097&#0000108&#0000101&#0000114&#0000116&#0000040&#0000039&#0000088&#0000083&#0000083&#0000039&#0000041",
  "&#x6A&#x61&#x76&#x61&#x73&#x63&#x72&#x69&#x70&#x74&#x3A&#x61&#x6C&#x65&#x72&#x74&#x28&#x27&#x58&#x53&#x53&#x27&#x29",
  "jav ascript:alert('XSS');",
  " &#14; javascript:alert('XSS');"
]) {
  console.log(sanitizeUrl(vector))
}

Details

A cross-site scripting attack occurs when the attacker tricks a legitimate web-based application or site into accepting a request as originating from a trusted source.

This is done by escaping the context of the web application; the web application then delivers that data to its users along with other trusted dynamic content, without validating it. The browser unknowingly executes malicious script on the client side (through client-side languages; usually JavaScript or HTML) in order to perform actions that are otherwise typically blocked by the browser’s Same Origin Policy.

Injecting malicious code is the most prevalent manner by which XSS is exploited; for this reason, escaping characters in order to prevent this manipulation is the top method for securing code against this vulnerability.

Escaping means that the application is coded to mark key characters, and particularly key characters included in user input, to prevent those characters from being interpreted in a dangerous context. For example, in HTML, < can be coded as &lt; and > can be coded as &gt; in order to be interpreted and displayed as themselves in text, while within the code itself, they are used for HTML tags. If malicious content is injected into an application that escapes special characters and that malicious content uses < and > as HTML tags, those characters are nonetheless not interpreted as HTML tags by the browser if they’ve been correctly escaped in the application code and in this way the attempted attack is diverted.

The most prominent use of XSS is to steal cookies (source: OWASP HttpOnly) and hijack user sessions, but XSS exploits have been used to expose sensitive information, enable access to privileged services and functionality and deliver malware.

Types of attacks

There are a few methods by which XSS can be manipulated:

  • Stored (origin: Server): The malicious code is inserted in the application (usually as a link) by the attacker. The code is activated every time a user clicks the link.
  • Reflected (origin: Server): The attacker delivers a malicious link externally from the vulnerable web site application to a user. When clicked, malicious code is sent to the vulnerable web site, which reflects the attack back to the user’s browser.
  • DOM-based (origin: Client): The attacker forces the user’s browser to render a malicious page. The data in the page itself delivers the cross-site scripting data.
  • Mutated: The attacker injects code that appears safe, but is then rewritten and modified by the browser, while parsing the markup. An example is rebalancing unclosed quotation marks or even adding quotation marks to unquoted parameters.

Affected environments

The following environments are susceptible to an XSS attack:

  • Web servers
  • Application servers
  • Web application environments

How to prevent

This section describes the top best practices designed to specifically protect your code:

  • Sanitize data input in an HTTP request before reflecting it back, ensuring all data is validated, filtered or escaped before echoing anything back to the user, such as the values of query parameters during searches.
  • Convert special characters such as ?, &, /, <, > and spaces to their respective HTML or URL encoded equivalents.
  • Give users the option to disable client-side scripts.
  • Redirect invalid requests.
  • Detect simultaneous logins, including those from two separate IP addresses, and invalidate those sessions.
  • Use and enforce a Content Security Policy (source: Wikipedia) to disable any features that might be manipulated for an XSS attack.
  • Read the documentation for any of the libraries referenced in your code to understand which elements allow for embedded HTML.

Remediation

Upgrade @braintree/sanitize-url to version 6.0.0 or higher.

References

medium severity

Cross-site Scripting (XSS)

  • Vulnerable module: @braintree/sanitize-url
  • Introduced through: @grafana/runtime@7.4.3

Detailed paths

  • Introduced through: k6-cloud-grafana-datasource@grafana/k6-cloud-grafana-datasource#f53a3e4af785159ef6fc8f9338dd25e380da17dc @grafana/runtime@7.4.3 @grafana/data@7.4.3 @braintree/sanitize-url@4.0.0
    Remediation: Upgrade to @grafana/runtime@9.3.0.
  • Introduced through: k6-cloud-grafana-datasource@grafana/k6-cloud-grafana-datasource#f53a3e4af785159ef6fc8f9338dd25e380da17dc @grafana/runtime@7.4.3 @grafana/ui@7.4.3 @grafana/data@7.4.3 @braintree/sanitize-url@4.0.0
    Remediation: Upgrade to @grafana/runtime@9.3.0.

Overview

@braintree/sanitize-url is a URL sanitizer.

Affected versions of this package are vulnerable to Cross-site Scripting (XSS) due to improper user-input sanitization when the input contains HTML entities (such as an encoded tab character).

Details

A cross-site scripting attack occurs when the attacker tricks a legitimate web-based application or site into accepting a request as originating from a trusted source.

This is done by escaping the context of the web application; the web application then delivers that data to its users along with other trusted dynamic content, without validating it. The browser unknowingly executes malicious script on the client side (through client-side languages; usually JavaScript or HTML) in order to perform actions that are otherwise typically blocked by the browser’s Same Origin Policy.

Injecting malicious code is the most prevalent manner by which XSS is exploited; for this reason, escaping characters in order to prevent this manipulation is the top method for securing code against this vulnerability.

Escaping means that the application is coded to mark key characters, and particularly key characters included in user input, to prevent those characters from being interpreted in a dangerous context. For example, in HTML, < can be coded as &lt; and > can be coded as &gt; in order to be interpreted and displayed as themselves in text, while within the code itself, they are used for HTML tags. If malicious content is injected into an application that escapes special characters and that malicious content uses < and > as HTML tags, those characters are nonetheless not interpreted as HTML tags by the browser if they’ve been correctly escaped in the application code and in this way the attempted attack is diverted.

The most prominent use of XSS is to steal cookies (source: OWASP HttpOnly) and hijack user sessions, but XSS exploits have been used to expose sensitive information, enable access to privileged services and functionality and deliver malware.

Types of attacks

There are a few methods by which XSS can be manipulated:

  • Stored (origin: Server): The malicious code is inserted in the application (usually as a link) by the attacker. The code is activated every time a user clicks the link.
  • Reflected (origin: Server): The attacker delivers a malicious link externally from the vulnerable web site application to a user. When clicked, malicious code is sent to the vulnerable web site, which reflects the attack back to the user’s browser.
  • DOM-based (origin: Client): The attacker forces the user’s browser to render a malicious page. The data in the page itself delivers the cross-site scripting data.
  • Mutated: The attacker injects code that appears safe, but is then rewritten and modified by the browser, while parsing the markup. An example is rebalancing unclosed quotation marks or even adding quotation marks to unquoted parameters.

Affected environments

The following environments are susceptible to an XSS attack:

  • Web servers
  • Application servers
  • Web application environments

How to prevent

This section describes the top best practices designed to specifically protect your code:

  • Sanitize data input in an HTTP request before reflecting it back, ensuring all data is validated, filtered or escaped before echoing anything back to the user, such as the values of query parameters during searches.
  • Convert special characters such as ?, &, /, <, > and spaces to their respective HTML or URL encoded equivalents.
  • Give users the option to disable client-side scripts.
  • Redirect invalid requests.
  • Detect simultaneous logins, including those from two separate IP addresses, and invalidate those sessions.
  • Use and enforce a Content Security Policy (source: Wikipedia) to disable any features that might be manipulated for an XSS attack.
  • Read the documentation for any of the libraries referenced in your code to understand which elements allow for embedded HTML.

Remediation

Upgrade @braintree/sanitize-url to version 6.0.1 or higher.

References

medium severity

Regular Expression Denial of Service (ReDoS)

  • Vulnerable module: d3-color
  • Introduced through: @grafana/runtime@7.4.3

Detailed paths

  • Introduced through: k6-cloud-grafana-datasource@grafana/k6-cloud-grafana-datasource#f53a3e4af785159ef6fc8f9338dd25e380da17dc @grafana/runtime@7.4.3 @grafana/ui@7.4.3 d3@5.15.0 d3-color@1.4.1
    Remediation: Upgrade to @grafana/runtime@9.4.1.
  • Introduced through: k6-cloud-grafana-datasource@grafana/k6-cloud-grafana-datasource#f53a3e4af785159ef6fc8f9338dd25e380da17dc @grafana/runtime@7.4.3 @grafana/ui@7.4.3 d3@5.15.0 d3-interpolate@1.4.0 d3-color@1.4.1
    Remediation: Upgrade to @grafana/runtime@9.4.1.
  • Introduced through: k6-cloud-grafana-datasource@grafana/k6-cloud-grafana-datasource#f53a3e4af785159ef6fc8f9338dd25e380da17dc @grafana/runtime@7.4.3 @grafana/ui@7.4.3 d3@5.15.0 d3-transition@1.3.2 d3-color@1.4.1
    Remediation: Upgrade to @grafana/runtime@9.4.1.
  • Introduced through: k6-cloud-grafana-datasource@grafana/k6-cloud-grafana-datasource#f53a3e4af785159ef6fc8f9338dd25e380da17dc @grafana/runtime@7.4.3 @grafana/ui@7.4.3 d3@5.15.0 d3-scale-chromatic@1.5.0 d3-color@1.4.1
    Remediation: Upgrade to @grafana/runtime@9.4.1.
  • Introduced through: k6-cloud-grafana-datasource@grafana/k6-cloud-grafana-datasource#f53a3e4af785159ef6fc8f9338dd25e380da17dc @grafana/runtime@7.4.3 @grafana/ui@7.4.3 d3@5.15.0 d3-transition@1.3.2 d3-interpolate@1.4.0 d3-color@1.4.1
    Remediation: Upgrade to @grafana/runtime@9.4.1.
  • Introduced through: k6-cloud-grafana-datasource@grafana/k6-cloud-grafana-datasource#f53a3e4af785159ef6fc8f9338dd25e380da17dc @grafana/runtime@7.4.3 @grafana/ui@7.4.3 d3@5.15.0 d3-brush@1.1.6 d3-interpolate@1.4.0 d3-color@1.4.1
    Remediation: Upgrade to @grafana/runtime@9.4.1.
  • Introduced through: k6-cloud-grafana-datasource@grafana/k6-cloud-grafana-datasource#f53a3e4af785159ef6fc8f9338dd25e380da17dc @grafana/runtime@7.4.3 @grafana/ui@7.4.3 d3@5.15.0 d3-scale@2.2.2 d3-interpolate@1.4.0 d3-color@1.4.1
    Remediation: Upgrade to @grafana/runtime@9.4.1.
  • Introduced through: k6-cloud-grafana-datasource@grafana/k6-cloud-grafana-datasource#f53a3e4af785159ef6fc8f9338dd25e380da17dc @grafana/runtime@7.4.3 @grafana/ui@7.4.3 d3@5.15.0 d3-scale-chromatic@1.5.0 d3-interpolate@1.4.0 d3-color@1.4.1
    Remediation: Upgrade to @grafana/runtime@9.4.1.
  • Introduced through: k6-cloud-grafana-datasource@grafana/k6-cloud-grafana-datasource#f53a3e4af785159ef6fc8f9338dd25e380da17dc @grafana/runtime@7.4.3 @grafana/ui@7.4.3 d3@5.15.0 d3-zoom@1.8.3 d3-interpolate@1.4.0 d3-color@1.4.1
    Remediation: Upgrade to @grafana/runtime@9.4.1.
  • Introduced through: k6-cloud-grafana-datasource@grafana/k6-cloud-grafana-datasource#f53a3e4af785159ef6fc8f9338dd25e380da17dc @grafana/runtime@7.4.3 @grafana/ui@7.4.3 d3@5.15.0 d3-brush@1.1.6 d3-transition@1.3.2 d3-color@1.4.1
    Remediation: Upgrade to @grafana/runtime@9.4.1.
  • Introduced through: k6-cloud-grafana-datasource@grafana/k6-cloud-grafana-datasource#f53a3e4af785159ef6fc8f9338dd25e380da17dc @grafana/runtime@7.4.3 @grafana/ui@7.4.3 d3@5.15.0 d3-zoom@1.8.3 d3-transition@1.3.2 d3-color@1.4.1
    Remediation: Upgrade to @grafana/runtime@9.4.1.
  • Introduced through: k6-cloud-grafana-datasource@grafana/k6-cloud-grafana-datasource#f53a3e4af785159ef6fc8f9338dd25e380da17dc @grafana/runtime@7.4.3 @grafana/ui@7.4.3 d3@5.15.0 d3-brush@1.1.6 d3-transition@1.3.2 d3-interpolate@1.4.0 d3-color@1.4.1
    Remediation: Upgrade to @grafana/runtime@9.4.1.
  • Introduced through: k6-cloud-grafana-datasource@grafana/k6-cloud-grafana-datasource#f53a3e4af785159ef6fc8f9338dd25e380da17dc @grafana/runtime@7.4.3 @grafana/ui@7.4.3 d3@5.15.0 d3-zoom@1.8.3 d3-transition@1.3.2 d3-interpolate@1.4.0 d3-color@1.4.1
    Remediation: Upgrade to @grafana/runtime@9.4.1.

Overview

d3-color is a package that provides color space support: RGB, HSL, Cubehelix, Lab and HCL (LCh).

Affected versions of this package are vulnerable to Regular Expression Denial of Service (ReDoS) via the rgb() and hcl() functions.

PoC by Yeting Li:

var d3Color = require("d3-color")
// d3Color.rgb("rgb(255,255,255)")

function build_blank(n) {
    var ret = "rgb("
    for (var i = 0; i < n; i++) {
        ret += "1"
    }
    return ret + "!";
}

for(var i = 1; i <= 5000000; i++) {
    if (i % 1000 == 0) {
        var time = Date.now();
        var attack_str = build_blank(i)
        d3Color.rgb(attack_str)
        var time_cost = Date.now() - time;
        console.log("attack_str.length: " + attack_str.length + ": " + time_cost+" ms")
    }
}
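Beyond upgrading, a cheap mitigation is to bound the input before it reaches the parser. The sketch below assumes d3-color is installed and uses an arbitrary 64-character cap.

const d3Color = require('d3-color');

function parseColorSafe(input) {
  if (typeof input !== 'string' || input.length > 64) {
    return null;                 // reject oversized strings before parsing
  }
  return d3Color.rgb(input);
}

console.log(parseColorSafe('rgb(255,255,255)'));                  // parsed RGB object
console.log(parseColorSafe('rgb(' + '1'.repeat(100000) + '!'));   // null, returns immediately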

Details

Denial of Service (DoS) describes a family of attacks, all aimed at making a system inaccessible to its original and legitimate users. There are many types of DoS attacks, ranging from trying to clog the network pipes to the system by generating a large volume of traffic from many machines (a Distributed Denial of Service - DDoS - attack) to sending crafted requests that cause a system to crash or take a disproportional amount of time to process.

The Regular expression Denial of Service (ReDoS) is a type of Denial of Service attack. Regular expressions are incredibly powerful, but they aren't very intuitive and can ultimately end up making it easy for attackers to take your site down.

Let’s take the following regular expression as an example:

regex = /A(B|C+)+D/

This regular expression accomplishes the following:

  • A The string must start with the letter 'A'
  • (B|C+)+ The string must then follow the letter A with either the letter 'B' or some number of occurrences of the letter 'C' (the + matches one or more times). The + at the end of this section states that we can look for one or more matches of this section.
  • D Finally, we ensure this section of the string ends with a 'D'

The expression would match inputs such as ABBD, ABCCCCD, ABCBCCCD and ACCCCCD

In most cases, it doesn't take very long for a regex engine to find a match:

$ time node -e '/A(B|C+)+D/.test("ACCCCCCCCCCCCCCCCCCCCCCCCCCCCD")'
0.04s user 0.01s system 95% cpu 0.052 total

$ time node -e '/A(B|C+)+D/.test("ACCCCCCCCCCCCCCCCCCCCCCCCCCCCX")'
1.79s user 0.02s system 99% cpu 1.812 total

The entire process of testing it against a 30-character string takes around 52ms. But when given an invalid string, it takes nearly two seconds to complete the test, over ten times as long as it took to test a valid string. The dramatic difference is due to the way regular expressions get evaluated.

Most Regex engines will work very similarly (with minor differences). The engine will match the first possible way to accept the current character and proceed to the next one. If it then fails to match the next one, it will backtrack and see if there was another way to digest the previous character. If it goes too far down the rabbit hole only to find out the string doesn’t match in the end, and if many characters have multiple valid regex paths, the number of backtracking steps can become very large, resulting in what is known as catastrophic backtracking.

Let's look at how our expression runs into this problem, using a shorter string: "ACCCX". While it seems fairly straightforward, there are still four different ways that the engine could match those three C's:

  1. CCC
  2. CC+C
  3. C+CC
  4. C+C+C.

The engine has to try each of those combinations to see if any of them potentially match against the expression. When you combine that with the other steps the engine must take, we can use RegEx 101 debugger to see the engine has to take a total of 38 steps before it can determine the string doesn't match.

From there, the number of steps the engine must use to validate a string just continues to grow.

String           | Number of C's | Number of steps
ACCCX            | 3             | 38
ACCCCX           | 4             | 71
ACCCCCX          | 5             | 136
ACCCCCCCCCCCCCCX | 14            | 65,553

By the time the string includes 14 C's, the engine has to take over 65,000 steps just to see if the string is valid. These extreme situations cause the engine to work very slowly (the time grows exponentially with input size, as shown above), allowing an attacker to exploit this to make the service consume excessive CPU, resulting in a Denial of Service.

Remediation

Upgrade d3-color to version 3.1.0 or higher.

References

medium severity

Regular Expression Denial of Service (ReDoS)

  • Vulnerable module: lodash
  • Introduced through: @grafana/runtime@7.4.3

Detailed paths

  • Introduced through: k6-cloud-grafana-datasource@grafana/k6-cloud-grafana-datasource#f53a3e4af785159ef6fc8f9338dd25e380da17dc @grafana/runtime@7.4.3 @grafana/data@7.4.3 lodash@4.17.20
    Remediation: Upgrade to @grafana/runtime@7.5.0.
  • Introduced through: k6-cloud-grafana-datasource@grafana/k6-cloud-grafana-datasource#f53a3e4af785159ef6fc8f9338dd25e380da17dc @grafana/runtime@7.4.3 @grafana/ui@7.4.3 lodash@4.17.20
    Remediation: Upgrade to @grafana/runtime@7.5.0.
  • Introduced through: k6-cloud-grafana-datasource@grafana/k6-cloud-grafana-datasource#f53a3e4af785159ef6fc8f9338dd25e380da17dc @grafana/runtime@7.4.3 @grafana/ui@7.4.3 @grafana/data@7.4.3 lodash@4.17.20
    Remediation: Upgrade to @grafana/runtime@7.5.0.

Overview

lodash is a modern JavaScript utility library delivering modularity, performance, & extras.

Affected versions of this package are vulnerable to Regular Expression Denial of Service (ReDoS) via the toNumber, trim and trimEnd functions.

POC

var lo = require('lodash');

function build_blank (n) {
  var ret = "1"
  for (var i = 0; i < n; i++) {
    ret += " "
  }
  return ret + "1";
}

var s = build_blank(50000)
var time0 = Date.now();
lo.trim(s)
var time_cost0 = Date.now() - time0;
console.log("time_cost0: " + time_cost0)

var time1 = Date.now();
lo.toNumber(s)
var time_cost1 = Date.now() - time1;
console.log("time_cost1: " + time_cost1)

var time2 = Date.now();
lo.trimEnd(s)
var time_cost2 = Date.now() - time2;
console.log("time_cost2: " + time_cost2)

Details

Denial of Service (DoS) describes a family of attacks, all aimed at making a system inaccessible to its original and legitimate users. There are many types of DoS attacks, ranging from trying to clog the network pipes to the system by generating a large volume of traffic from many machines (a Distributed Denial of Service - DDoS - attack) to sending crafted requests that cause a system to crash or take a disproportional amount of time to process.

The Regular expression Denial of Service (ReDoS) is a type of Denial of Service attack. Regular expressions are incredibly powerful, but they aren't very intuitive and can ultimately end up making it easy for attackers to take your site down.

Let’s take the following regular expression as an example:

regex = /A(B|C+)+D/

This regular expression accomplishes the following:

  • A The string must start with the letter 'A'
  • (B|C+)+ The string must then follow the letter A with either the letter 'B' or some number of occurrences of the letter 'C' (the + matches one or more times). The + at the end of this section states that we can look for one or more matches of this section.
  • D Finally, we ensure this section of the string ends with a 'D'

The expression would match inputs such as ABBD, ABCCCCD, ABCBCCCD and ACCCCCD

In most cases, it doesn't take very long for a regex engine to find a match:

$ time node -e '/A(B|C+)+D/.test("ACCCCCCCCCCCCCCCCCCCCCCCCCCCCD")'
0.04s user 0.01s system 95% cpu 0.052 total

$ time node -e '/A(B|C+)+D/.test("ACCCCCCCCCCCCCCCCCCCCCCCCCCCCX")'
1.79s user 0.02s system 99% cpu 1.812 total

The entire process of testing it against a 30-character string takes around 52ms. But when given an invalid string, it takes nearly two seconds to complete the test, over ten times as long as it took to test a valid string. The dramatic difference is due to the way regular expressions get evaluated.

Most Regex engines will work very similarly (with minor differences). The engine will match the first possible way to accept the current character and proceed to the next one. If it then fails to match the next one, it will backtrack and see if there was another way to digest the previous character. If it goes too far down the rabbit hole only to find out the string doesn’t match in the end, and if many characters have multiple valid regex paths, the number of backtracking steps can become very large, resulting in what is known as catastrophic backtracking.

Let's look at how our expression runs into this problem, using a shorter string: "ACCCX". While it seems fairly straightforward, there are still four different ways that the engine could match those three C's:

  1. CCC
  2. CC+C
  3. C+CC
  4. C+C+C.

The engine has to try each of those combinations to see if any of them potentially match against the expression. When you combine that with the other steps the engine must take, we can use RegEx 101 debugger to see the engine has to take a total of 38 steps before it can determine the string doesn't match.

From there, the number of steps the engine must use to validate a string just continues to grow.

String           | Number of C's | Number of steps
ACCCX            | 3             | 38
ACCCCX           | 4             | 71
ACCCCCX          | 5             | 136
ACCCCCCCCCCCCCCX | 14            | 65,553

By the time the string includes 14 C's, the engine has to take over 65,000 steps just to see if the string is valid. These extreme situations cause the engine to work very slowly (the time grows exponentially with input size, as shown above), allowing an attacker to exploit this to make the service consume excessive CPU, resulting in a Denial of Service.

Remediation

Upgrade lodash to version 4.17.21 or higher.

References

medium severity

Regular Expression Denial of Service (ReDoS)

  • Vulnerable module: marked
  • Introduced through: @grafana/runtime@7.4.3

Detailed paths

  • Introduced through: k6-cloud-grafana-datasource@grafana/k6-cloud-grafana-datasource#f53a3e4af785159ef6fc8f9338dd25e380da17dc @grafana/runtime@7.4.3 @grafana/data@7.4.3 marked@1.2.2
    Remediation: Upgrade to @grafana/runtime@8.4.0.
  • Introduced through: k6-cloud-grafana-datasource@grafana/k6-cloud-grafana-datasource#f53a3e4af785159ef6fc8f9338dd25e380da17dc @grafana/runtime@7.4.3 @grafana/ui@7.4.3 @grafana/data@7.4.3 marked@1.2.2
    Remediation: Upgrade to @grafana/runtime@8.4.0.

Overview

marked is a low-level compiler for parsing markdown without caching or blocking for long periods of time.

Affected versions of this package are vulnerable to Regular Expression Denial of Service (ReDoS) when passing unsanitized user input to inline.reflinkSearch, if it is not being parsed by a time-limited worker thread.

PoC

import * as marked from 'marked';

console.log(marked.parse(`[x]: x

\\[\\](\\[\\](\\[\\](\\[\\](\\[\\](\\[\\](\\[\\](\\[\\](\\[\\](\\[\\](\\[\\](\\[\\](\\[\\](\\[\\](\\[\\](\\[\\](\\[\\](\\[\\](\\[\\](\\[\\](\\[\\](\\[\\](\\[\\](\\[\\](\\[\\](\\[\\](\\[\\](\\[\\](\\[\\](\\[\\](`));
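The advisory's reference to a time-limited worker thread can be sketched as follows; this assumes Node.js with worker_threads available and marked installed, and the one-second budget is an arbitrary example.

const { Worker } = require('worker_threads');

function parseMarkdownWithTimeout(markdown, ms = 1000) {
  return new Promise((resolve, reject) => {
    const worker = new Worker(
      "const { parentPort, workerData } = require('worker_threads');" +
      "const marked = require('marked');" +
      "parentPort.postMessage(marked.parse(workerData));",
      { eval: true, workerData: markdown }
    );
    const timer = setTimeout(() => {
      worker.terminate();                          // kill a runaway parse
      reject(new Error('markdown parsing timed out'));
    }, ms);
    worker.once('message', (html) => { clearTimeout(timer); resolve(html); });
    worker.once('error', (err) => { clearTimeout(timer); reject(err); });
  });
}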

Details

Denial of Service (DoS) describes a family of attacks, all aimed at making a system inaccessible to its original and legitimate users. There are many types of DoS attacks, ranging from trying to clog the network pipes to the system by generating a large volume of traffic from many machines (a Distributed Denial of Service - DDoS - attack) to sending crafted requests that cause a system to crash or take a disproportional amount of time to process.

The Regular expression Denial of Service (ReDoS) is a type of Denial of Service attack. Regular expressions are incredibly powerful, but they aren't very intuitive and can ultimately end up making it easy for attackers to take your site down.

Let’s take the following regular expression as an example:

regex = /A(B|C+)+D/

This regular expression accomplishes the following:

  • A The string must start with the letter 'A'
  • (B|C+)+ The string must then follow the letter A with either the letter 'B' or some number of occurrences of the letter 'C' (the + matches one or more times). The + at the end of this section states that we can look for one or more matches of this section.
  • D Finally, we ensure this section of the string ends with a 'D'

The expression would match inputs such as ABBD, ABCCCCD, ABCBCCCD and ACCCCCD

In most cases, it doesn't take very long for a regex engine to find a match:

$ time node -e '/A(B|C+)+D/.test("ACCCCCCCCCCCCCCCCCCCCCCCCCCCCD")'
0.04s user 0.01s system 95% cpu 0.052 total

$ time node -e '/A(B|C+)+D/.test("ACCCCCCCCCCCCCCCCCCCCCCCCCCCCX")'
1.79s user 0.02s system 99% cpu 1.812 total

The entire process of testing it against a 30-character string takes around 52ms. But when given an invalid string, it takes nearly two seconds to complete the test, over ten times as long as it took to test a valid string. The dramatic difference is due to the way regular expressions get evaluated.

Most Regex engines will work very similarly (with minor differences). The engine will match the first possible way to accept the current character and proceed to the next one. If it then fails to match the next one, it will backtrack and see if there was another way to digest the previous character. If it goes too far down the rabbit hole only to find out the string doesn’t match in the end, and if many characters have multiple valid regex paths, the number of backtracking steps can become very large, resulting in what is known as catastrophic backtracking.

Let's look at how our expression runs into this problem, using a shorter string: "ACCCX". While it seems fairly straightforward, there are still four different ways that the engine could match those three C's:

  1. CCC
  2. CC+C
  3. C+CC
  4. C+C+C.

The engine has to try each of those combinations to see if any of them potentially match against the expression. When you combine that with the other steps the engine must take, we can use RegEx 101 debugger to see the engine has to take a total of 38 steps before it can determine the string doesn't match.

From there, the number of steps the engine must use to validate a string just continues to grow.

String           | Number of C's | Number of steps
ACCCX            | 3             | 38
ACCCCX           | 4             | 71
ACCCCCX          | 5             | 136
ACCCCCCCCCCCCCCX | 14            | 65,553

By the time the string includes 14 C's, the engine has to take over 65,000 steps just to see if the string is valid. These extreme situations cause the engine to work very slowly (the time grows exponentially with input size, as shown above), allowing an attacker to exploit this to make the service consume excessive CPU, resulting in a Denial of Service.

Remediation

Upgrade marked to version 4.0.10 or higher.

References

medium severity

Regular Expression Denial of Service (ReDoS)

  • Vulnerable module: marked
  • Introduced through: @grafana/runtime@7.4.3

Detailed paths

  • Introduced through: k6-cloud-grafana-datasource@grafana/k6-cloud-grafana-datasource#f53a3e4af785159ef6fc8f9338dd25e380da17dc @grafana/runtime@7.4.3 @grafana/data@7.4.3 marked@1.2.2
    Remediation: Upgrade to @grafana/runtime@8.4.0.
  • Introduced through: k6-cloud-grafana-datasource@grafana/k6-cloud-grafana-datasource#f53a3e4af785159ef6fc8f9338dd25e380da17dc @grafana/runtime@7.4.3 @grafana/ui@7.4.3 @grafana/data@7.4.3 marked@1.2.2
    Remediation: Upgrade to @grafana/runtime@8.4.0.

Overview

marked is a low-level compiler for parsing markdown without caching or blocking for long periods of time.

Affected versions of this package are vulnerable to Regular Expression Denial of Service (ReDoS) when unsanitized user input is passed to block.def.

PoC

import * as marked from "marked";
marked.parse(`[x]:${' '.repeat(1500)}x ${' '.repeat(1500)} x`);

Details

Denial of Service (DoS) describes a family of attacks, all aimed at making a system inaccessible to its original and legitimate users. There are many types of DoS attacks, ranging from trying to clog the network pipes to the system by generating a large volume of traffic from many machines (a Distributed Denial of Service - DDoS - attack) to sending crafted requests that cause a system to crash or take a disproportional amount of time to process.

The Regular expression Denial of Service (ReDoS) is a type of Denial of Service attack. Regular expressions are incredibly powerful, but they aren't very intuitive and can ultimately end up making it easy for attackers to take your site down.

Let’s take the following regular expression as an example:

regex = /A(B|C+)+D/

This regular expression accomplishes the following:

  • A The string must start with the letter 'A'
  • (B|C+)+ The string must then follow the letter A with either the letter 'B' or some number of occurrences of the letter 'C' (the + matches one or more times). The + at the end of this section states that we can look for one or more matches of this section.
  • D Finally, we ensure this section of the string ends with a 'D'

The expression would match inputs such as ABBD, ABCCCCD, ABCBCCCD and ACCCCCD

In most cases, it doesn't take very long for a regex engine to find a match:

$ time node -e '/A(B|C+)+D/.test("ACCCCCCCCCCCCCCCCCCCCCCCCCCCCD")'
0.04s user 0.01s system 95% cpu 0.052 total

$ time node -e '/A(B|C+)+D/.test("ACCCCCCCCCCCCCCCCCCCCCCCCCCCCX")'
1.79s user 0.02s system 99% cpu 1.812 total

The entire process of testing it against a 30-character string takes around 52ms. But when given an invalid string, it takes nearly two seconds to complete the test, over ten times as long as it took to test a valid string. The dramatic difference is due to the way regular expressions get evaluated.

Most Regex engines will work very similarly (with minor differences). The engine will match the first possible way to accept the current character and proceed to the next one. If it then fails to match the next one, it will backtrack and see if there was another way to digest the previous character. If it goes too far down the rabbit hole only to find out the string doesn’t match in the end, and if many characters have multiple valid regex paths, the number of backtracking steps can become very large, resulting in what is known as catastrophic backtracking.

Let's look at how our expression runs into this problem, using a shorter string: "ACCCX". While it seems fairly straightforward, there are still four different ways that the engine could match those three C's:

  1. CCC
  2. CC+C
  3. C+CC
  4. C+C+C.

The engine has to try each of those combinations to see if any of them potentially match against the expression. When you combine that with the other steps the engine must take, we can use RegEx 101 debugger to see the engine has to take a total of 38 steps before it can determine the string doesn't match.

From there, the number of steps the engine must use to validate a string just continues to grow.

String           | Number of C's | Number of steps
ACCCX            | 3             | 38
ACCCCX           | 4             | 71
ACCCCCX          | 5             | 136
ACCCCCCCCCCCCCCX | 14            | 65,553

By the time the string includes 14 C's, the engine has to take over 65,000 steps just to see if the string is valid. These extreme situations cause the engine to work very slowly (the time grows exponentially with input size, as shown above), allowing an attacker to exploit this to make the service consume excessive CPU, resulting in a Denial of Service.

Remediation

Upgrade marked to version 4.0.10 or higher.

References