critical severity
- Vulnerable module: form-data
- Introduced through: botkit@0.2.2
Detailed paths
- Introduced through: imdb-bot@davidkelley/imdb-bot#1c4e5322502ff0403de63bd806b7bd0c1185f218 › botkit@0.2.2 › request@2.88.2 › form-data@2.3.3
- Introduced through: imdb-bot@davidkelley/imdb-bot#1c4e5322502ff0403de63bd806b7bd0c1185f218 › botkit@0.2.2 › twilio@2.11.1 › request@2.74.0 › form-data@1.0.1
Overview
Affected versions of this package are vulnerable to Predictable Value Range from Previous Values: the multipart boundary value is generated with Math.random(). An attacker who can predict the boundary can manipulate HTTP request boundaries, potentially leading to HTTP parameter pollution.
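The issue is easiest to see by contrasting a Math.random()-based boundary with one derived from a CSPRNG. The sketch below is illustrative rather than form-data's exact code; the helper names are invented.
const crypto = require('crypto');
// Roughly how a Math.random()-based boundary is built: not cryptographically
// secure, so observed boundaries help an attacker predict future ones.
function weakBoundary() {
  let boundary = '--------------------------';
  for (let i = 0; i < 24; i++) {
    boundary += Math.floor(Math.random() * 10).toString(16);
  }
  return boundary;
}
// A boundary derived from a CSPRNG instead.
function strongBoundary() {
  return '--------------------------' + crypto.randomBytes(12).toString('hex');
}
console.log(weakBoundary());
console.log(strongBoundary());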
Remediation
Upgrade form-data to version 2.5.4, 3.0.4, 4.0.4 or higher.
References
critical severity
- Vulnerable module: hawk
- Introduced through: botkit@0.2.2
Detailed paths
- Introduced through: imdb-bot@davidkelley/imdb-bot#1c4e5322502ff0403de63bd806b7bd0c1185f218 › botkit@0.2.2 › twilio@2.11.1 › request@2.74.0 › hawk@3.1.3
Overview
hawk is a library for the HTTP Hawk Authentication Scheme.
Affected versions of this package are vulnerable to Authentication Bypass. The incoming (client supplied) hash of the payload is trusted by the server and not verified before the signature is calculated.
A malicious actor in the middle can alter the payload, and the server will not detect the modification because it simply uses the client-provided hash instead of verifying it against the modified payload.
According to the maintainers, this issue is considered out of scope because "payload hash validation is optional and up to developer to implement".
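Since the maintainers treat payload hash validation as the integrator's responsibility, a server using hawk should recompute the hash over the body it actually received rather than trusting the client-supplied one. A minimal sketch, assuming hawk's documented promise-based server API and a hypothetical credentialsFunc and rawBody:
const Hawk = require('hawk');
async function handleRequest(req, rawBody, credentialsFunc) {
  // Verify the Authorization header (MAC over method, URI, host, and so on).
  const { credentials, artifacts } = await Hawk.server.authenticate(req, credentialsFunc);
  // Do not trust the client-supplied hash: recompute it over the body received.
  Hawk.server.authenticatePayload(rawBody, credentials, artifacts, req.headers['content-type']);
  return { credentials, artifacts };
}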
Remediation
There is no fixed version for hawk.
References
high severity
- Vulnerable module: https-proxy-agent
- Introduced through: botkit@0.2.2
Detailed paths
- Introduced through: imdb-bot@davidkelley/imdb-bot#1c4e5322502ff0403de63bd806b7bd0c1185f218 › botkit@0.2.2 › https-proxy-agent@1.0.0
  Remediation: Upgrade to botkit@0.5.5.
Overview
https-proxy-agent provides an http.Agent implementation that connects to a specified HTTP or HTTPS proxy server, and can be used with the built-in https module.
Affected versions of this package are vulnerable to Uninitialized Memory Exposure and Denial of Service (DoS) attacks due to passing unsanitized options to Buffer(arg).
Note: CVE-2018-3739 is a duplicate of CVE-2018-3736.
Uninitialized Memory Exposure PoC by ChALKer
// listen with: nc -l -p 8080
var url = require('url');
var https = require('https');
var HttpsProxyAgent = require('https-proxy-agent');
var proxy = {
protocol: 'http:',
host: "127.0.0.1",
port: 8080
};
proxy.auth = 500; // a number as 'auth'
var opts = url.parse('https://example.com/');
var agent = new HttpsProxyAgent(proxy);
opts.agent = agent;
https.get(opts);
Details
The Buffer class on Node.js is a mutable array of binary data, and can be initialized with a string, array or number.
const buf1 = new Buffer([1,2,3]);
// creates a buffer containing [01, 02, 03]
const buf2 = new Buffer('test');
// creates a buffer containing ASCII bytes [74, 65, 73, 74]
const buf3 = new Buffer(10);
// creates a buffer of length 10
The first two variants simply create a binary representation of the value they receive. The last one, however, pre-allocates a buffer of the specified size, which is especially useful when reading data from a stream.
When using the number constructor of Buffer, it will allocate the memory, but will not fill it with zeros. Instead, the allocated buffer will hold whatever was in memory at the time. If the buffer is not zeroed by using buf.fill(0), it may leak sensitive information like keys, source code, and system info.
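A short sketch of the safe alternatives (available since Node.js 4.5 / 5.10); the unsafe constructor is shown only for contrast:
// Deprecated and unsafe on old Node.js versions: may expose stale memory.
const risky = new Buffer(10);
// Safe alternatives:
const zeroed = Buffer.alloc(10);        // zero-filled on allocation
const legacy = new Buffer(10).fill(0);  // legacy constructor, manually zeroed
const unsafe = Buffer.allocUnsafe(10);  // uninitialized on purpose; must be
unsafe.fill(0);                         // fully overwritten (or zeroed) before use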
Remediation
Upgrade https-proxy-agent to version 2.2.0 or higher.
Note: This is only exploitable on Node.js <= 4.
References
high severity
- Vulnerable module: bl
- Introduced through: botkit@0.2.2
Detailed paths
- Introduced through: imdb-bot@davidkelley/imdb-bot#1c4e5322502ff0403de63bd806b7bd0c1185f218 › botkit@0.2.2 › twilio@2.11.1 › request@2.74.0 › bl@1.1.2
  Remediation: Upgrade to botkit@0.6.11.
Overview
bl is a library that allows you to collect buffers and access them with a standard readable buffer interface.
Affected versions of this package are vulnerable to Uninitialized Memory Exposure. If user input reaches the consume() argument and can be negative, the BufferList state can be corrupted, tricking it into exposing uninitialized memory via regular .slice() calls.
PoC by chalker
const { BufferList } = require('bl')
const secret = require('crypto').randomBytes(256)
for (let i = 0; i < 1e6; i++) {
const clone = Buffer.from(secret)
const bl = new BufferList()
bl.append(Buffer.from('a'))
bl.consume(-1024)
const buf = bl.slice(1)
if (buf.indexOf(clone) !== -1) {
console.error(`Match (at ${i})`, buf)
}
}
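Until the dependency can be upgraded, a caller-side guard can keep negative lengths away from consume(). A minimal sketch with a hypothetical safeConsume wrapper:
const { BufferList } = require('bl');
// Hypothetical wrapper: reject negative or non-integer byte counts before they
// reach consume(), so the internal state cannot be corrupted.
function safeConsume(bl, bytes) {
  if (!Number.isInteger(bytes) || bytes < 0) {
    throw new RangeError('bytes to consume must be a non-negative integer');
  }
  return bl.consume(bytes);
}
const list = new BufferList();
list.append(Buffer.from('hello'));
safeConsume(list, 2);        // fine
// safeConsume(list, -1024); // throws instead of corrupting state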
Remediation
Upgrade bl to version 2.2.1, 3.0.1, 4.0.3, 1.2.3 or higher.
References
high severity
- Vulnerable module: axios
- Introduced through: botkit@0.2.2
Detailed paths
- Introduced through: imdb-bot@davidkelley/imdb-bot#1c4e5322502ff0403de63bd806b7bd0c1185f218 › botkit@0.2.2 › localtunnel@1.9.2 › axios@0.19.0
  Remediation: Upgrade to botkit@0.7.5.
Overview
axios is a promise-based HTTP client for the browser and Node.js.
Affected versions of this package are vulnerable to Regular Expression Denial of Service (ReDoS) via the trim function.
PoC
// poc.js
var {trim} = require("axios/lib/utils");
function build_blank (n) {
var ret = "1"
for (var i = 0; i < n; i++) {
ret += " "
}
return ret + "1";
}
var time = Date.now();
trim(build_blank(50000))
var time_cost = Date.now() - time;
console.log("time_cost: " + time_cost)
Details
Denial of Service (DoS) describes a family of attacks, all aimed at making a system inaccessible to its original and legitimate users. There are many types of DoS attacks, ranging from trying to clog the network pipes to the system by generating a large volume of traffic from many machines (a Distributed Denial of Service - DDoS - attack) to sending crafted requests that cause a system to crash or take a disproportional amount of time to process.
The Regular expression Denial of Service (ReDoS) is a type of Denial of Service attack. Regular expressions are incredibly powerful, but they aren't very intuitive and can ultimately end up making it easy for attackers to take your site down.
Let’s take the following regular expression as an example:
regex = /A(B|C+)+D/
This regular expression accomplishes the following:
- A: The string must start with the letter 'A'.
- (B|C+)+: The string must then follow the letter A with either the letter 'B' or some number of occurrences of the letter 'C' (the + matches one or more times). The + at the end of this section states that we can look for one or more matches of this section.
- D: Finally, we ensure this section of the string ends with a 'D'.
The expression would match inputs such as ABBD, ABCCCCD, ABCBCCCD and ACCCCCD.
In most cases, it doesn't take very long for a regex engine to find a match:
$ time node -e '/A(B|C+)+D/.test("ACCCCCCCCCCCCCCCCCCCCCCCCCCCCD")'
0.04s user 0.01s system 95% cpu 0.052 total
$ time node -e '/A(B|C+)+D/.test("ACCCCCCCCCCCCCCCCCCCCCCCCCCCCX")'
1.79s user 0.02s system 99% cpu 1.812 total
The entire process of testing it against a 30 characters long string takes around ~52ms. But when given an invalid string, it takes nearly two seconds to complete the test, over ten times as long as it took to test a valid string. The dramatic difference is due to the way regular expressions get evaluated.
Most Regex engines will work very similarly (with minor differences). The engine will match the first possible way to accept the current character and proceed to the next one. If it then fails to match the next one, it will backtrack and see if there was another way to digest the previous character. If it goes too far down the rabbit hole only to find out the string doesn’t match in the end, and if many characters have multiple valid regex paths, the number of backtracking steps can become very large, resulting in what is known as catastrophic backtracking.
Let's look at how our expression runs into this problem, using a shorter string: "ACCCX". While it seems fairly straightforward, there are still four different ways that the engine could match those three C's:
- CCC
- CC+C
- C+CC
- C+C+C.
The engine has to try each of those combinations to see if any of them potentially match against the expression. When you combine that with the other steps the engine must take, we can use RegEx 101 debugger to see the engine has to take a total of 38 steps before it can determine the string doesn't match.
From there, the number of steps the engine must use to validate a string just continues to grow.
| String | Number of C's | Number of steps |
|---|---|---|
| ACCCX | 3 | 38 |
| ACCCCX | 4 | 71 |
| ACCCCCX | 5 | 136 |
| ACCCCCCCCCCCCCCX | 14 | 65,553 |
By the time the string includes 14 C's, the engine has to take over 65,000 steps just to see if the string is valid. These extreme situations can cause the regex engine to work very slowly (exponentially relative to input size, as shown above), allowing an attacker to exploit this and cause the service to consume excessive CPU, resulting in a Denial of Service.
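For this particular toy pattern the nested quantifier can be removed without changing the accepted language, which removes the catastrophic backtracking as well. A small illustrative comparison (real-world fixes depend on the specific pattern):
const vulnerable = /A(B|C+)+D/;
const linear = /A[BC]+D/;    // accepts exactly the same strings, no nested quantifier
const input = 'A' + 'C'.repeat(30) + 'X'; // non-matching input
console.time('linear');
linear.test(input);          // fails immediately
console.timeEnd('linear');
console.time('vulnerable');
vulnerable.test(input);      // backtracks for several seconds before failing
console.timeEnd('vulnerable');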
Remediation
Upgrade axios to version 0.21.3 or higher.
References
high severity
- Vulnerable module: semver
- Introduced through: botkit@0.2.2
Detailed paths
- Introduced through: imdb-bot@davidkelley/imdb-bot#1c4e5322502ff0403de63bd806b7bd0c1185f218 › botkit@0.2.2 › https-proxy-agent@1.0.0 › agent-base@2.1.1 › semver@5.0.3
Overview
semver is a semantic version parser used by npm.
Affected versions of this package are vulnerable to Regular Expression Denial of Service (ReDoS) via the function new Range, when untrusted user data is provided as a range.
PoC
const semver = require('semver')
const lengths_2 = [2000, 4000, 8000, 16000, 32000, 64000, 128000]
console.log("\n[+] Valid range - Test payloads")
for (let i = 0; i < lengths_2.length; i++) {
  const value = '>1.2.3' + ' '.repeat(lengths_2[i]) + '<1.3.0';
  const start = Date.now()
  semver.validRange(value)
  // semver.minVersion(value)
  // semver.maxSatisfying(["1.2.3"], value)
  // semver.minSatisfying(["1.2.3"], value)
  // new semver.Range(value, {})
  const end = Date.now();
  console.log('length=%d, time=%d ms', value.length, end - start);
}
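If upgrading is not immediately possible, a common stop-gap is to cap the length of untrusted range strings before semver parses them, since the attack relies on very long whitespace runs. A hypothetical guard:
const semver = require('semver');
const MAX_RANGE_LENGTH = 256; // generous for legitimate ranges, far too short for the attack
function safeValidRange(input) {
  if (typeof input !== 'string' || input.length > MAX_RANGE_LENGTH) {
    return null; // treat oversized input as invalid instead of parsing it
  }
  return semver.validRange(input);
}
console.log(safeValidRange('>1.2.3 <1.3.0'));                          // '>1.2.3 <1.3.0'
console.log(safeValidRange('>1.2.3' + ' '.repeat(100000) + '<1.3.0')); // null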
Details
Denial of Service (DoS) describes a family of attacks, all aimed at making a system inaccessible to its original and legitimate users. There are many types of DoS attacks, ranging from trying to clog the network pipes to the system by generating a large volume of traffic from many machines (a Distributed Denial of Service - DDoS - attack) to sending crafted requests that cause a system to crash or take a disproportional amount of time to process.
The Regular expression Denial of Service (ReDoS) is a type of Denial of Service attack. Regular expressions are incredibly powerful, but they aren't very intuitive and can ultimately end up making it easy for attackers to take your site down.
Let’s take the following regular expression as an example:
regex = /A(B|C+)+D/
This regular expression accomplishes the following:
- A: The string must start with the letter 'A'.
- (B|C+)+: The string must then follow the letter A with either the letter 'B' or some number of occurrences of the letter 'C' (the + matches one or more times). The + at the end of this section states that we can look for one or more matches of this section.
- D: Finally, we ensure this section of the string ends with a 'D'.
The expression would match inputs such as ABBD, ABCCCCD, ABCBCCCD and ACCCCCD.
In most cases, it doesn't take very long for a regex engine to find a match:
$ time node -e '/A(B|C+)+D/.test("ACCCCCCCCCCCCCCCCCCCCCCCCCCCCD")'
0.04s user 0.01s system 95% cpu 0.052 total
$ time node -e '/A(B|C+)+D/.test("ACCCCCCCCCCCCCCCCCCCCCCCCCCCCX")'
1.79s user 0.02s system 99% cpu 1.812 total
The entire process of testing it against a 30 characters long string takes around ~52ms. But when given an invalid string, it takes nearly two seconds to complete the test, over ten times as long as it took to test a valid string. The dramatic difference is due to the way regular expressions get evaluated.
Most Regex engines will work very similarly (with minor differences). The engine will match the first possible way to accept the current character and proceed to the next one. If it then fails to match the next one, it will backtrack and see if there was another way to digest the previous character. If it goes too far down the rabbit hole only to find out the string doesn’t match in the end, and if many characters have multiple valid regex paths, the number of backtracking steps can become very large, resulting in what is known as catastrophic backtracking.
Let's look at how our expression runs into this problem, using a shorter string: "ACCCX". While it seems fairly straightforward, there are still four different ways that the engine could match those three C's:
- CCC
- CC+C
- C+CC
- C+C+C.
The engine has to try each of those combinations to see if any of them potentially match against the expression. When you combine that with the other steps the engine must take, we can use RegEx 101 debugger to see the engine has to take a total of 38 steps before it can determine the string doesn't match.
From there, the number of steps the engine must use to validate a string just continues to grow.
| String | Number of C's | Number of steps |
|---|---|---|
| ACCCX | 3 | 38 |
| ACCCCX | 4 | 71 |
| ACCCCCX | 5 | 136 |
| ACCCCCCCCCCCCCCX | 14 | 65,553 |
By the time the string includes 14 C's, the engine has to take over 65,000 steps just to see if the string is valid. These extreme situations can cause the regex engine to work very slowly (exponentially relative to input size, as shown above), allowing an attacker to exploit this and cause the service to consume excessive CPU, resulting in a Denial of Service.
Remediation
Upgrade semver to version 5.7.2, 6.3.1, 7.5.2 or higher.
References
high severity
- Vulnerable module: hawk
- Introduced through: botkit@0.2.2
Detailed paths
- Introduced through: imdb-bot@davidkelley/imdb-bot#1c4e5322502ff0403de63bd806b7bd0c1185f218 › botkit@0.2.2 › twilio@2.11.1 › request@2.74.0 › hawk@3.1.3
  Remediation: Upgrade to botkit@0.6.11.
Overview
hawk is a library for the HTTP Hawk Authentication Scheme.
Affected versions of this package are vulnerable to Regular Expression Denial of Service (ReDoS) in header parsing where each added character in the attacker's input increases the computation time exponentially.
Details
Denial of Service (DoS) describes a family of attacks, all aimed at making a system inaccessible to its original and legitimate users. There are many types of DoS attacks, ranging from trying to clog the network pipes to the system by generating a large volume of traffic from many machines (a Distributed Denial of Service - DDoS - attack) to sending crafted requests that cause a system to crash or take a disproportional amount of time to process.
The Regular expression Denial of Service (ReDoS) is a type of Denial of Service attack. Regular expressions are incredibly powerful, but they aren't very intuitive and can ultimately end up making it easy for attackers to take your site down.
Let’s take the following regular expression as an example:
regex = /A(B|C+)+D/
This regular expression accomplishes the following:
- A: The string must start with the letter 'A'.
- (B|C+)+: The string must then follow the letter A with either the letter 'B' or some number of occurrences of the letter 'C' (the + matches one or more times). The + at the end of this section states that we can look for one or more matches of this section.
- D: Finally, we ensure this section of the string ends with a 'D'.
The expression would match inputs such as ABBD, ABCCCCD, ABCBCCCD and ACCCCCD.
In most cases, it doesn't take very long for a regex engine to find a match:
$ time node -e '/A(B|C+)+D/.test("ACCCCCCCCCCCCCCCCCCCCCCCCCCCCD")'
0.04s user 0.01s system 95% cpu 0.052 total
$ time node -e '/A(B|C+)+D/.test("ACCCCCCCCCCCCCCCCCCCCCCCCCCCCX")'
1.79s user 0.02s system 99% cpu 1.812 total
The entire process of testing it against a 30 characters long string takes around ~52ms. But when given an invalid string, it takes nearly two seconds to complete the test, over ten times as long as it took to test a valid string. The dramatic difference is due to the way regular expressions get evaluated.
Most Regex engines will work very similarly (with minor differences). The engine will match the first possible way to accept the current character and proceed to the next one. If it then fails to match the next one, it will backtrack and see if there was another way to digest the previous character. If it goes too far down the rabbit hole only to find out the string doesn’t match in the end, and if many characters have multiple valid regex paths, the number of backtracking steps can become very large, resulting in what is known as catastrophic backtracking.
Let's look at how our expression runs into this problem, using a shorter string: "ACCCX". While it seems fairly straightforward, there are still four different ways that the engine could match those three C's:
- CCC
- CC+C
- C+CC
- C+C+C.
The engine has to try each of those combinations to see if any of them potentially match against the expression. When you combine that with the other steps the engine must take, we can use RegEx 101 debugger to see the engine has to take a total of 38 steps before it can determine the string doesn't match.
From there, the number of steps the engine must use to validate a string just continues to grow.
| String | Number of C's | Number of steps |
|---|---|---|
| ACCCX | 3 | 38 |
| ACCCCX | 4 | 71 |
| ACCCCCX | 5 | 136 |
| ACCCCCCCCCCCCCCX | 14 | 65,553 |
By the time the string includes 14 C's, the engine has to take over 65,000 steps just to see if the string is valid. These extreme situations can cause the regex engine to work very slowly (exponentially relative to input size, as shown above), allowing an attacker to exploit this and cause the service to consume excessive CPU, resulting in a Denial of Service.
Remediation
Upgrade hawk to version 9.0.1 or higher.
References
high severity
- Vulnerable module: deep-extend
- Introduced through: botkit@0.2.2
Detailed paths
- Introduced through: imdb-bot@davidkelley/imdb-bot#1c4e5322502ff0403de63bd806b7bd0c1185f218 › botkit@0.2.2 › command-line-args@2.1.6 › command-line-usage@2.0.5 › column-layout@2.1.4 › deep-extend@0.4.2
Overview
deep-extend is a library for recursive object extending.
Affected versions of this package are vulnerable to Prototype Pollution. Utility functions in all the listed modules can be tricked into modifying the prototype of Object when an attacker controls part of the structure passed to these functions. This can let an attacker add or modify properties that will exist on all objects.
PoC by HoLyVieR
var merge = require('deep-extend');
var malicious_payload = '{"__proto__":{"oops":"It works !"}}';
var a = {};
console.log("Before : " + a.oops);
merge({}, JSON.parse(malicious_payload));
console.log("After : " + a.oops);
Remediation
Upgrade deep-extend to version 0.5.1 or higher.
References
high severity
- Vulnerable module: follow-redirects
- Introduced through: botkit@0.2.2
Detailed paths
- Introduced through: imdb-bot@davidkelley/imdb-bot#1c4e5322502ff0403de63bd806b7bd0c1185f218 › botkit@0.2.2 › localtunnel@1.9.2 › axios@0.19.0 › follow-redirects@1.5.10
  Remediation: Upgrade to botkit@0.7.5.
Overview
Affected versions of this package are vulnerable to Improper Handling of Extra Parameters due to the improper handling of URLs by the url.parse() function. When new URL() throws an error, it can be manipulated to misinterpret the hostname. An attacker could exploit this weakness to redirect traffic to a malicious site, potentially leading to information disclosure, phishing attacks, or other security breaches.
PoC
const url = require('url');
// Case 1: Bypassing localhost restriction
let target = 'http://[localhost]/admin';
try {
  new URL(target); // ERROR: Invalid URL
} catch {
  url.parse(target); // -> http://localhost/admin
}
// Case 2: Bypassing domain restriction
target = 'http://attacker.domain*.allowed.domain:a';
try {
  new URL(target); // ERROR: Invalid URL
} catch {
  url.parse(target); // -> http://attacker.domain/*.allowed.domain:a
}
Remediation
Upgrade follow-redirects to version 1.15.4 or higher.
References
high severity
- Vulnerable module: axios
- Introduced through: botkit@0.2.2
Detailed paths
- Introduced through: imdb-bot@davidkelley/imdb-bot#1c4e5322502ff0403de63bd806b7bd0c1185f218 › botkit@0.2.2 › localtunnel@1.9.2 › axios@0.19.0
Overview
axios is a promise-based HTTP client for the browser and Node.js.
Affected versions of this package are vulnerable to Cross-site Request Forgery (CSRF): the X-XSRF-TOKEN header is inserted, using the secret XSRF-TOKEN cookie value, into all requests to any server whenever the XSRF-TOKEN cookie is available and the withCredentials setting is turned on. If a malicious user manages to obtain this value, it can potentially lead to a bypass of the XSRF defence mechanism.
Workaround
Users should change the default XSRF-TOKEN cookie name in the Axios configuration and manually include the corresponding header only in the specific places where it's necessary.
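A sketch of that workaround using axios's documented xsrfCookieName / xsrfHeaderName options (cookie and header names below are examples, not values your app necessarily uses):
const axios = require('axios');
// Point the client at a cookie name the server never sets, so no secret value
// is copied into request headers automatically.
const api = axios.create({
  withCredentials: true,
  xsrfCookieName: 'APP-XSRF-TOKEN-UNUSED',
  xsrfHeaderName: 'X-APP-XSRF-TOKEN',
});
// Attach the real token manually, and only for same-origin API calls.
function postToOwnApi(path, data, csrfToken) {
  return api.post(path, data, { headers: { 'X-XSRF-TOKEN': csrfToken } });
}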
Remediation
Upgrade axios to version 0.28.0, 1.6.0 or higher.
References
medium severity
- Vulnerable module: axios
- Introduced through: botkit@0.2.2
Detailed paths
- Introduced through: imdb-bot@davidkelley/imdb-bot#1c4e5322502ff0403de63bd806b7bd0c1185f218 › botkit@0.2.2 › localtunnel@1.9.2 › axios@0.19.0
Overview
axios is a promise-based HTTP client for the browser and Node.js.
Affected versions of this package are vulnerable to Allocation of Resources Without Limits or Throttling via the data: URL handler. An attacker can trigger a denial of service by crafting a data: URL with an excessive payload, causing allocation of memory for content decoding before verifying content size limits.
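Until the upgrade lands, one defensive option is to let only http(s) URLs reach axios and to cap buffered response sizes. The helper below is hypothetical:
const axios = require('axios');
// Hypothetical allow-list: only plain http(s) URLs reach axios, so oversized
// data: payloads are rejected before any decoding allocates memory.
function fetchIfHttp(userSuppliedUrl, options = {}) {
  const parsed = new URL(userSuppliedUrl);
  if (parsed.protocol !== 'http:' && parsed.protocol !== 'https:') {
    return Promise.reject(new Error('Blocked URL scheme: ' + parsed.protocol));
  }
  // Also cap how much response data axios will buffer.
  return axios.get(parsed.toString(), { maxContentLength: 10 * 1024 * 1024, ...options });
}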
Remediation
Upgrade axios to version 1.12.0 or higher.
References
medium severity
- Vulnerable module: jsonwebtoken
- Introduced through: botkit@0.2.2
Detailed paths
- Introduced through: imdb-bot@davidkelley/imdb-bot#1c4e5322502ff0403de63bd806b7bd0c1185f218 › botkit@0.2.2 › twilio@2.11.1 › jsonwebtoken@5.4.1
  Remediation: Upgrade to botkit@4.0.0.
Overview
jsonwebtoken is a JSON Web Token implementation (symmetric and asymmetric)
Affected versions of this package are vulnerable to Use of a Broken or Risky Cryptographic Algorithm such that the library can be misconfigured to use legacy, insecure key types for signature verification. For example, DSA keys could be used with the RS256 algorithm.
Exploitability
Users are affected when using an algorithm and a key type other than the combinations mentioned below:
- EC: ES256, ES384, ES512
- RSA: RS256, RS384, RS512, PS256, PS384, PS512
- RSA-PSS: PS256, PS384, PS512
And for Elliptic Curve algorithms:
- ES256: prime256v1
- ES384: secp384r1
- ES512: secp521r1
Workaround
Users who are unable to upgrade to the fixed version can set the allowInvalidAsymmetricKeyTypes option to true in the sign() and verify() functions to continue using invalid key type/algorithm combinations in 9.0.0 for legacy compatibility.
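A sketch of the safe configuration, pinning the verification algorithm to the key type actually in use (an RSA public key verified as RS256 here; the key path is illustrative):
const jwt = require('jsonwebtoken');
const fs = require('fs');
const publicKey = fs.readFileSync('./keys/rsa_public.pem'); // illustrative path
function verifyToken(token) {
  // Pin the algorithm to the key type: an RSA key is only ever used with RS256.
  return jwt.verify(token, publicKey, { algorithms: ['RS256'] });
}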
Remediation
Upgrade jsonwebtoken to version 9.0.0 or higher.
References
medium severity
- Vulnerable module: follow-redirects
- Introduced through: botkit@0.2.2
Detailed paths
- Introduced through: imdb-bot@davidkelley/imdb-bot#1c4e5322502ff0403de63bd806b7bd0c1185f218 › botkit@0.2.2 › localtunnel@1.9.2 › axios@0.19.0 › follow-redirects@1.5.10
  Remediation: Upgrade to botkit@0.7.5.
Overview
Affected versions of this package are vulnerable to Information Exposure due to the handling of the Proxy-Authorization header across hosts. The library only clears the Authorization header during cross-domain redirects, but allows the Proxy-Authorization header, which contains credentials, to persist. This behavior may lead to the unintended leakage of credentials if an attacker can trigger a cross-domain redirect and capture the persistent Proxy-Authorization header.
PoC
const axios = require('axios');
axios.get('http://127.0.0.1:10081/',{
headers: {
'AuThorization': 'Rear Test',
'ProXy-AuthoriZation': 'Rear Test',
'coOkie': 't=1'
}
}).then(function (response) {
console.log(response);
})
Remediation
Upgrade follow-redirects to version 1.15.6 or higher.
References
medium severity
- Vulnerable module: jsonwebtoken
- Introduced through: botkit@0.2.2
Detailed paths
- Introduced through: imdb-bot@davidkelley/imdb-bot#1c4e5322502ff0403de63bd806b7bd0c1185f218 › botkit@0.2.2 › twilio@2.11.1 › jsonwebtoken@5.4.1
  Remediation: Upgrade to botkit@4.0.0.
Overview
jsonwebtoken is a JSON Web Token implementation (symmetric and asymmetric)
Affected versions of this package are vulnerable to Improper Restriction of Security Token Assignment via the secretOrPublicKey argument due to misconfigurations of the key retrieval function jwt.verify(). Exploiting this vulnerability might result in incorrect verification of forged tokens when tokens signed with an asymmetric public key could be verified with a symmetric HS256 algorithm.
Note:
This vulnerability affects your application if it supports the usage of both symmetric and asymmetric keys in jwt.verify() implementation with the same key retrieval function.
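A sketch of avoiding that confusion by giving each token type its own verify call with pinned algorithms, so an asymmetric public key can never be treated as an HS256 shared secret (key variables are illustrative):
const jwt = require('jsonwebtoken');
// Each token type gets its own verify call with pinned algorithms, so an RSA
// public key can never be accepted as an HS256 shared secret.
function verifyServiceToken(token, hmacSecret) {
  return jwt.verify(token, hmacSecret, { algorithms: ['HS256'] });
}
function verifyUserToken(token, rsaPublicKey) {
  return jwt.verify(token, rsaPublicKey, { algorithms: ['RS256'] });
}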
Remediation
Upgrade jsonwebtoken to version 9.0.0 or higher.
References
medium severity
- Vulnerable module: request
- Introduced through: botkit@0.2.2
Detailed paths
- Introduced through: imdb-bot@davidkelley/imdb-bot#1c4e5322502ff0403de63bd806b7bd0c1185f218 › botkit@0.2.2 › request@2.88.2
- Introduced through: imdb-bot@davidkelley/imdb-bot#1c4e5322502ff0403de63bd806b7bd0c1185f218 › botkit@0.2.2 › twilio@2.11.1 › request@2.74.0
Overview
request is a simplified http request client.
Affected versions of this package are vulnerable to Server-side Request Forgery (SSRF) due to insufficient checks in the lib/redirect.js file, allowing insecure redirects in the default configuration via an attacker-controlled server that performs a cross-protocol redirect (HTTP to HTTPS, or HTTPS to HTTP).
NOTE: request package has been deprecated, so a fix is not expected. See https://github.com/request/request/issues/3142.
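Callers that must keep using the deprecated package can at least disable automatic redirects via its documented followRedirect option and vet any Location target themselves. A sketch (the endpoint is an example):
const request = require('request');
request(
  { url: 'https://api.example.com/resource', followRedirect: false },
  (err, res, body) => {
    if (err) throw err;
    if (res.statusCode >= 300 && res.statusCode < 400) {
      // Decide explicitly whether the Location target is acceptable.
      console.log('Refusing to follow redirect to', res.headers.location);
      return;
    }
    console.log(body);
  }
);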
Remediation
A fix was pushed into the master branch but not yet published.
References
medium severity
- Vulnerable module: tough-cookie
- Introduced through: botkit@0.2.2
Detailed paths
- Introduced through: imdb-bot@davidkelley/imdb-bot#1c4e5322502ff0403de63bd806b7bd0c1185f218 › botkit@0.2.2 › request@2.88.2 › tough-cookie@2.5.0
- Introduced through: imdb-bot@davidkelley/imdb-bot#1c4e5322502ff0403de63bd806b7bd0c1185f218 › botkit@0.2.2 › twilio@2.11.1 › request@2.74.0 › tough-cookie@2.3.4
Overview
tough-cookie is an RFC 6265 Cookies and CookieJar module for Node.js.
Affected versions of this package are vulnerable to Prototype Pollution due to improper handling of Cookies when using CookieJar in rejectPublicSuffixes=false mode. Due to an issue with the manner in which the objects are initialized, an attacker can expose or modify a limited amount of property information on those objects. There is no impact to availability.
PoC
// PoC.js
async function main(){
var tough = require("tough-cookie");
var cookiejar = new tough.CookieJar(undefined,{rejectPublicSuffixes:false});
// Exploit cookie
await cookiejar.setCookie(
"Slonser=polluted; Domain=__proto__; Path=/notauth",
"https://__proto__/admin"
);
// normal cookie
var cookie = await cookiejar.setCookie(
"Auth=Lol; Domain=google.com; Path=/notauth",
"https://google.com/"
);
//Exploit cookie
var a = {};
console.log(a["/notauth"]["Slonser"])
}
main();
Details
Prototype Pollution is a vulnerability affecting JavaScript. Prototype Pollution refers to the ability to inject properties into existing JavaScript language construct prototypes, such as objects. JavaScript allows all Object attributes to be altered, including their magical attributes such as __proto__, constructor and prototype. An attacker manipulates these attributes to overwrite, or pollute, a JavaScript application object prototype of the base object by injecting other values. Properties on the Object.prototype are then inherited by all the JavaScript objects through the prototype chain. When that happens, this leads to either denial of service by triggering JavaScript exceptions, or it tampers with the application source code to force the code path that the attacker injects, thereby leading to remote code execution.
There are two main ways in which the pollution of prototypes occurs:
- Unsafe Object recursive merge
- Property definition by path
Unsafe Object recursive merge
The logic of a vulnerable recursive merge function follows the following high-level model:
merge (target, source)
  foreach property of source
    if property exists and is an object on both the target and the source
      merge(target[property], source[property])
    else
      target[property] = source[property]
When the source object contains a property named __proto__ defined with Object.defineProperty() , the condition that checks if the property exists and is an object on both the target and the source passes and the merge recurses with the target, being the prototype of Object and the source of Object as defined by the attacker. Properties are then copied on the Object prototype.
Clone operations are a special sub-class of unsafe recursive merges, which occur when a recursive merge is conducted on an empty object: merge({},source).
lodash and Hoek are examples of libraries susceptible to recursive merge attacks.
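For contrast, the guard that patched libraries typically add is to skip __proto__ (and related keys) while merging. The safeMerge helper below is a hypothetical illustration, not any particular library's implementation:
const BLOCKED_KEYS = new Set(['__proto__', 'constructor', 'prototype']);
function safeMerge(target, source) {
  for (const key of Object.keys(source)) {
    if (BLOCKED_KEYS.has(key)) continue; // never walk or assign dangerous keys
    const value = source[key];
    if (value && typeof value === 'object' && target[key] && typeof target[key] === 'object') {
      safeMerge(target[key], value);
    } else {
      target[key] = value;
    }
  }
  return target;
}
safeMerge({}, JSON.parse('{"__proto__":{"oops":"It works !"}}'));
console.log({}.oops); // undefined -- Object.prototype was not touched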
Property definition by path
There are a few JavaScript libraries that use an API to define property values on an object based on a given path. The function that is generally affected contains this signature: theFunction(object, path, value)
If the attacker can control the value of “path”, they can set this value to __proto__.myValue. myValue is then assigned to the prototype of the class of the object.
Types of attacks
There are a few methods by which Prototype Pollution can be manipulated:
| Type | Origin | Short description |
|---|---|---|
| Denial of service (DoS) | Client | This is the most likely attack. DoS occurs when Object holds generic functions that are implicitly called for various operations (for example, toString and valueOf). The attacker pollutes Object.prototype.someattr and alters its state to an unexpected value such as Int or Object. In this case, the code fails and is likely to cause a denial of service. For example: if an attacker pollutes Object.prototype.toString by defining it as an integer, if the codebase at any point was reliant on someobject.toString() it would fail. |
| Remote Code Execution | Client | Remote code execution is generally only possible in cases where the codebase evaluates a specific attribute of an object, and then executes that evaluation. For example: eval(someobject.someattr). In this case, if the attacker pollutes Object.prototype.someattr they are likely to be able to leverage this in order to execute code. |
| Property Injection | Client | The attacker pollutes properties that the codebase relies on for their informative value, including security properties such as cookies or tokens. For example: if a codebase checks privileges for someuser.isAdmin, then when the attacker pollutes Object.prototype.isAdmin and sets it to equal true, they can then achieve admin privileges. |
Affected environments
The following environments are susceptible to a Prototype Pollution attack:
- Application server
- Web server
- Web browser
How to prevent
- Freeze the prototype: use Object.freeze(Object.prototype).
- Require schema validation of JSON input.
- Avoid using unsafe recursive merge functions.
- Consider using objects without prototypes (for example, Object.create(null)), breaking the prototype chain and preventing pollution.
- As a best practice use Map instead of Object.
For more information on this vulnerability type:
Arteau, Olivier. “JavaScript prototype pollution attack in NodeJS application.” GitHub, 26 May 2018
Remediation
Upgrade tough-cookie to version 4.1.3 or higher.
References
medium severity
- Vulnerable module: jsonwebtoken
- Introduced through: botkit@0.2.2
Detailed paths
- Introduced through: imdb-bot@davidkelley/imdb-bot#1c4e5322502ff0403de63bd806b7bd0c1185f218 › botkit@0.2.2 › twilio@2.11.1 › jsonwebtoken@5.4.1
  Remediation: Upgrade to botkit@4.0.0.
Overview
jsonwebtoken is a JSON Web Token implementation (symmetric and asymmetric)
Affected versions of this package are vulnerable to Improper Authentication such that the lack of algorithm definition in the jwt.verify() function can lead to signature validation bypass due to defaulting to the none algorithm for signature verification.
Exploitability
Users are affected only if all of the following conditions are true for the jwt.verify() function:
- A token with no signature is received.
- No algorithms are specified.
- A falsy (e.g., null, false, undefined) secret or key is passed.
Remediation
Upgrade jsonwebtoken to version 9.0.0 or higher.
References
medium severity
- Vulnerable module: hoek
- Introduced through: botkit@0.2.2
Detailed paths
- Introduced through: imdb-bot@davidkelley/imdb-bot#1c4e5322502ff0403de63bd806b7bd0c1185f218 › botkit@0.2.2 › twilio@2.11.1 › request@2.74.0 › hawk@3.1.3 › hoek@2.16.3
  Remediation: Upgrade to botkit@0.6.11.
- Introduced through: imdb-bot@davidkelley/imdb-bot#1c4e5322502ff0403de63bd806b7bd0c1185f218 › botkit@0.2.2 › twilio@2.11.1 › request@2.74.0 › hawk@3.1.3 › boom@2.10.1 › hoek@2.16.3
  Remediation: Upgrade to botkit@0.6.11.
- Introduced through: imdb-bot@davidkelley/imdb-bot#1c4e5322502ff0403de63bd806b7bd0c1185f218 › botkit@0.2.2 › twilio@2.11.1 › request@2.74.0 › hawk@3.1.3 › sntp@1.0.9 › hoek@2.16.3
  Remediation: Upgrade to botkit@0.6.11.
- Introduced through: imdb-bot@davidkelley/imdb-bot#1c4e5322502ff0403de63bd806b7bd0c1185f218 › botkit@0.2.2 › twilio@2.11.1 › request@2.74.0 › hawk@3.1.3 › cryptiles@2.0.5 › boom@2.10.1 › hoek@2.16.3
  Remediation: Upgrade to botkit@0.6.11.
Overview
hoek is a utility library for the hapi ecosystem.
Affected versions of this package are vulnerable to Prototype Pollution. The utility functions allow modification of the Object prototype. If an attacker can control part of the structure passed to these functions, they could add or modify an existing property.
PoC by Olivier Arteau (HoLyVieR)
var Hoek = require('hoek');
var malicious_payload = '{"__proto__":{"oops":"It works !"}}';
var a = {};
console.log("Before : " + a.oops);
Hoek.merge({}, JSON.parse(malicious_payload));
console.log("After : " + a.oops);
Remediation
Upgrade hoek to version 4.2.1, 5.0.3 or higher.
References
medium severity
- Vulnerable module: axios
- Introduced through: botkit@0.2.2
Detailed paths
- Introduced through: imdb-bot@davidkelley/imdb-bot#1c4e5322502ff0403de63bd806b7bd0c1185f218 › botkit@0.2.2 › localtunnel@1.9.2 › axios@0.19.0
Overview
axios is a promise-based HTTP client for the browser and Node.js.
Affected versions of this package are vulnerable to Server-side Request Forgery (SSRF) due to the allowAbsoluteUrls attribute being ignored in the call to the buildFullPath function from the HTTP adapter. An attacker could launch SSRF attacks or exfiltrate sensitive data by tricking applications into sending requests to malicious endpoints.
PoC
const axios = require('axios');
const client = axios.create({baseURL: 'http://example.com/', allowAbsoluteUrls: false});
client.get('http://evil.com');
Remediation
Upgrade axios to version 0.30.0, 1.8.2 or higher.
References
medium severity
- Vulnerable module: axios
- Introduced through: botkit@0.2.2
Detailed paths
- Introduced through: imdb-bot@davidkelley/imdb-bot#1c4e5322502ff0403de63bd806b7bd0c1185f218 › botkit@0.2.2 › localtunnel@1.9.2 › axios@0.19.0
Overview
axios is a promise-based HTTP client for the browser and Node.js.
Affected versions of this package are vulnerable to Server-side Request Forgery (SSRF) due to not setting allowAbsoluteUrls to false by default when processing a requested URL in buildFullPath(). It may not be obvious that this value is being used with the less safe default, and URLs that are expected to be blocked may be accepted. This is a bypass of the fix for the vulnerability described in CVE-2025-27152.
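For axios releases that support the option, it is safer to pin allowAbsoluteUrls explicitly rather than rely on the default; treat the option as unavailable on older versions. A sketch:
const axios = require('axios');
const internalApi = axios.create({
  baseURL: 'https://internal.example.com/api/', // example base URL
  allowAbsoluteUrls: false, // do not let absolute URLs escape the baseURL
});
// Resolves under the baseURL instead of escaping to an attacker-chosen host.
internalApi.get('/users/42').catch(() => { /* network errors ignored in this sketch */ });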
Remediation
Upgrade axios to version 0.30.0, 1.8.3 or higher.
References
medium severity
- Vulnerable module: https-proxy-agent
- Introduced through: botkit@0.2.2
Detailed paths
- Introduced through: imdb-bot@davidkelley/imdb-bot#1c4e5322502ff0403de63bd806b7bd0c1185f218 › botkit@0.2.2 › https-proxy-agent@1.0.0
  Remediation: Upgrade to botkit@0.5.5.
Overview
https-proxy-agent is a module that provides an http.Agent implementation that connects to a specified HTTP or HTTPS proxy server, and can be used with the built-in https module.
Affected versions of this package are vulnerable to Man-in-the-Middle (MitM) attacks. When targeting an HTTP proxy, https-proxy-agent opens a socket to the proxy and sends the proxy server a CONNECT request. If the proxy server responds with anything other than an HTTP 200 response, https-proxy-agent incorrectly returns the socket without any TLS upgrade. The request data, which may contain basic auth credentials or other secrets, is then sent over an unencrypted connection. A suitably positioned attacker could steal these secrets and impersonate the client.
PoC by Kris Adler
var url = require('url');
var https = require('https');
var HttpsProxyAgent = require('https-proxy-agent');
var proxyOpts = url.parse('http://127.0.0.1:80');
var opts = url.parse('https://www.google.com');
var agent = new HttpsProxyAgent(proxyOpts);
opts.agent = agent;
opts.auth = 'username:password';
https.get(opts);
Remediation
Upgrade https-proxy-agent to version 2.2.3 or higher.
References
medium severity
- Vulnerable module: axios
- Introduced through: botkit@0.2.2
Detailed paths
- Introduced through: imdb-bot@davidkelley/imdb-bot#1c4e5322502ff0403de63bd806b7bd0c1185f218 › botkit@0.2.2 › localtunnel@1.9.2 › axios@0.19.0
  Remediation: Upgrade to botkit@0.7.5.
Overview
axios is a promise-based HTTP client for the browser and Node.js.
Affected versions of this package are vulnerable to Server-Side Request Forgery (SSRF). An attacker is able to bypass a proxy by providing a URL that responds with a redirect to a restricted host or IP address.
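One mitigation pattern for these older axios releases is to stop following redirects automatically (maxRedirects: 0) and vet any Location target before issuing a follow-up request. A sketch:
const axios = require('axios');
async function fetchNoRedirect(url) {
  const res = await axios.get(url, {
    maxRedirects: 0,                          // surface redirects to the caller
    validateStatus: (status) => status < 400, // treat 3xx as a normal response
  });
  if (res.status >= 300 && res.headers.location) {
    // Decide explicitly (allow-list, same-host check) whether to follow.
    throw new Error('Blocked redirect to ' + res.headers.location);
  }
  return res.data;
}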
Remediation
Upgrade axios to version 0.21.1 or higher.
References
medium severity
- Vulnerable module: yargs-parser
- Introduced through: botkit@0.2.2
Detailed paths
- Introduced through: imdb-bot@davidkelley/imdb-bot#1c4e5322502ff0403de63bd806b7bd0c1185f218 › botkit@0.2.2 › localtunnel@1.9.2 › yargs@6.6.0 › yargs-parser@4.2.1
  Remediation: Upgrade to botkit@0.7.5.
Overview
yargs-parser is a mighty option parser used by yargs.
Affected versions of this package are vulnerable to Prototype Pollution. The library could be tricked into adding or modifying properties of Object.prototype using a __proto__ payload.
Our research team checked several attack vectors to verify this vulnerability:
- It could be used for privilege escalation.
- The library could be used to parse user input received from different sources:
  - terminal emulators
  - system calls from other code bases
  - CLI RPC servers
PoC by Snyk
const parser = require("yargs-parser");
console.log(parser('--foo.__proto__.bar baz'));
console.log(({}).bar);
Details
Prototype Pollution is a vulnerability affecting JavaScript. Prototype Pollution refers to the ability to inject properties into existing JavaScript language construct prototypes, such as objects. JavaScript allows all Object attributes to be altered, including their magical attributes such as __proto__, constructor and prototype. An attacker manipulates these attributes to overwrite, or pollute, a JavaScript application object prototype of the base object by injecting other values. Properties on the Object.prototype are then inherited by all the JavaScript objects through the prototype chain. When that happens, this leads to either denial of service by triggering JavaScript exceptions, or it tampers with the application source code to force the code path that the attacker injects, thereby leading to remote code execution.
There are two main ways in which the pollution of prototypes occurs:
- Unsafe Object recursive merge
- Property definition by path
Unsafe Object recursive merge
The logic of a vulnerable recursive merge function follows the following high-level model:
merge (target, source)
  foreach property of source
    if property exists and is an object on both the target and the source
      merge(target[property], source[property])
    else
      target[property] = source[property]
When the source object contains a property named __proto__ defined with Object.defineProperty() , the condition that checks if the property exists and is an object on both the target and the source passes and the merge recurses with the target, being the prototype of Object and the source of Object as defined by the attacker. Properties are then copied on the Object prototype.
Clone operations are a special sub-class of unsafe recursive merges, which occur when a recursive merge is conducted on an empty object: merge({},source).
lodash and Hoek are examples of libraries susceptible to recursive merge attacks.
Property definition by path
There are a few JavaScript libraries that use an API to define property values on an object based on a given path. The function that is generally affected contains this signature: theFunction(object, path, value)
If the attacker can control the value of “path”, they can set this value to __proto__.myValue. myValue is then assigned to the prototype of the class of the object.
Types of attacks
There are a few methods by which Prototype Pollution can be manipulated:
| Type | Origin | Short description |
|---|---|---|
| Denial of service (DoS) | Client | This is the most likely attack. DoS occurs when Object holds generic functions that are implicitly called for various operations (for example, toString and valueOf). The attacker pollutes Object.prototype.someattr and alters its state to an unexpected value such as Int or Object. In this case, the code fails and is likely to cause a denial of service. For example: if an attacker pollutes Object.prototype.toString by defining it as an integer, if the codebase at any point was reliant on someobject.toString() it would fail. |
| Remote Code Execution | Client | Remote code execution is generally only possible in cases where the codebase evaluates a specific attribute of an object, and then executes that evaluation. For example: eval(someobject.someattr). In this case, if the attacker pollutes Object.prototype.someattr they are likely to be able to leverage this in order to execute code. |
| Property Injection | Client | The attacker pollutes properties that the codebase relies on for their informative value, including security properties such as cookies or tokens. For example: if a codebase checks privileges for someuser.isAdmin, then when the attacker pollutes Object.prototype.isAdmin and sets it to equal true, they can then achieve admin privileges. |
Affected environments
The following environments are susceptible to a Prototype Pollution attack:
- Application server
- Web server
- Web browser
How to prevent
- Freeze the prototype: use Object.freeze(Object.prototype).
- Require schema validation of JSON input.
- Avoid using unsafe recursive merge functions.
- Consider using objects without prototypes (for example, Object.create(null)), breaking the prototype chain and preventing pollution.
- As a best practice use Map instead of Object.
For more information on this vulnerability type:
Arteau, Olivier. “JavaScript prototype pollution attack in NodeJS application.” GitHub, 26 May 2018
Remediation
Upgrade yargs-parser to version 5.0.1, 13.1.2, 15.0.1, 18.1.1 or higher.
References
medium severity
- Vulnerable module: axios
- Introduced through: botkit@0.2.2
Detailed paths
- Introduced through: imdb-bot@davidkelley/imdb-bot#1c4e5322502ff0403de63bd806b7bd0c1185f218 › botkit@0.2.2 › localtunnel@1.9.2 › axios@0.19.0
Overview
axios is a promise-based HTTP client for the browser and Node.js.
Affected versions of this package are vulnerable to Regular Expression Denial of Service (ReDoS). An attacker can deplete system resources by providing a manipulated string as input to the format method, causing the regular expression to exhibit a time complexity of O(n^2). This leaves the server unable to provide normal service due to the excessive cost and time spent processing vulnerable regular expressions.
PoC
const axios = require('axios');
console.time('t1');
axios.defaults.baseURL = '/'.repeat(10000) + 'a/';
axios.get('/a').then(()=>{}).catch(()=>{});
console.timeEnd('t1');
console.time('t2');
axios.defaults.baseURL = '/'.repeat(100000) + 'a/';
axios.get('/a').then(()=>{}).catch(()=>{});
console.timeEnd('t2');
/* stdout
t1: 60.826ms
t2: 5.826s
*/
Details
Denial of Service (DoS) describes a family of attacks, all aimed at making a system inaccessible to its original and legitimate users. There are many types of DoS attacks, ranging from trying to clog the network pipes to the system by generating a large volume of traffic from many machines (a Distributed Denial of Service - DDoS - attack) to sending crafted requests that cause a system to crash or take a disproportional amount of time to process.
The Regular expression Denial of Service (ReDoS) is a type of Denial of Service attack. Regular expressions are incredibly powerful, but they aren't very intuitive and can ultimately end up making it easy for attackers to take your site down.
Let’s take the following regular expression as an example:
regex = /A(B|C+)+D/
This regular expression accomplishes the following:
- A: The string must start with the letter 'A'.
- (B|C+)+: The string must then follow the letter A with either the letter 'B' or some number of occurrences of the letter 'C' (the + matches one or more times). The + at the end of this section states that we can look for one or more matches of this section.
- D: Finally, we ensure this section of the string ends with a 'D'.
The expression would match inputs such as ABBD, ABCCCCD, ABCBCCCD and ACCCCCD.
In most cases, it doesn't take very long for a regex engine to find a match:
$ time node -e '/A(B|C+)+D/.test("ACCCCCCCCCCCCCCCCCCCCCCCCCCCCD")'
0.04s user 0.01s system 95% cpu 0.052 total
$ time node -e '/A(B|C+)+D/.test("ACCCCCCCCCCCCCCCCCCCCCCCCCCCCX")'
1.79s user 0.02s system 99% cpu 1.812 total
The entire process of testing it against a 30 characters long string takes around ~52ms. But when given an invalid string, it takes nearly two seconds to complete the test, over ten times as long as it took to test a valid string. The dramatic difference is due to the way regular expressions get evaluated.
Most Regex engines will work very similarly (with minor differences). The engine will match the first possible way to accept the current character and proceed to the next one. If it then fails to match the next one, it will backtrack and see if there was another way to digest the previous character. If it goes too far down the rabbit hole only to find out the string doesn’t match in the end, and if many characters have multiple valid regex paths, the number of backtracking steps can become very large, resulting in what is known as catastrophic backtracking.
Let's look at how our expression runs into this problem, using a shorter string: "ACCCX". While it seems fairly straightforward, there are still four different ways that the engine could match those three C's:
- CCC
- CC+C
- C+CC
- C+C+C.
The engine has to try each of those combinations to see if any of them potentially match against the expression. When you combine that with the other steps the engine must take, we can use RegEx 101 debugger to see the engine has to take a total of 38 steps before it can determine the string doesn't match.
From there, the number of steps the engine must use to validate a string just continues to grow.
| String | Number of C's | Number of steps |
|---|---|---|
| ACCCX | 3 | 38 |
| ACCCCX | 4 | 71 |
| ACCCCCX | 5 | 136 |
| ACCCCCCCCCCCCCCX | 14 | 65,553 |
By the time the string includes 14 C's, the engine has to take over 65,000 steps just to see if the string is valid. These extreme situations can cause the regex engine to work very slowly (exponentially relative to input size, as shown above), allowing an attacker to exploit this and cause the service to consume excessive CPU, resulting in a Denial of Service.
Remediation
Upgrade axios to version 0.29.0, 1.6.3 or higher.
References
medium severity
- Vulnerable module: botkit
- Introduced through: botkit@0.2.2
Detailed paths
- Introduced through: imdb-bot@davidkelley/imdb-bot#1c4e5322502ff0403de63bd806b7bd0c1185f218 › botkit@0.2.2
  Remediation: Upgrade to botkit@0.4.4.
Overview
botkit provides building blocks for building bots.
Affected versions of the package are vulnerable to Denial of Service (DoS) attacks. An attacker may send huge arrays of requests or impersonate Facebook or users, causing the server to take an extremely long time to process these requests.
Remediation
Upgrade botkit to version 0.4.4 or higher.
References
medium severity
- Vulnerable module: follow-redirects
- Introduced through: botkit@0.2.2
Detailed paths
- Introduced through: imdb-bot@davidkelley/imdb-bot#1c4e5322502ff0403de63bd806b7bd0c1185f218 › botkit@0.2.2 › localtunnel@1.9.2 › axios@0.19.0 › follow-redirects@1.5.10
  Remediation: Upgrade to botkit@0.7.5.
Overview
Affected versions of this package are vulnerable to Information Exposure by leaking the cookie header to a third party site in the process of fetching a remote URL with the cookie in the request body. If the response contains a location header, it will follow the redirect to another URL of a potentially malicious actor, to which the cookie would be exposed.
Remediation
Upgrade follow-redirects to version 1.14.7 or higher.
References
medium severity
- Vulnerable module: ws
- Introduced through: botkit@0.2.2
Detailed paths
-
Introduced through: imdb-bot@davidkelley/imdb-bot#1c4e5322502ff0403de63bd806b7bd0c1185f218 › botkit@0.2.2 › ws@1.1.5
Remediation: Upgrade to botkit@0.6.14.
Overview
ws is a simple-to-use WebSocket client, server and console for Node.js.
Affected versions of this package are vulnerable to Regular Expression Denial of Service (ReDoS). A specially crafted value of the Sec-WebSocket-Protocol header can be used to significantly slow down a ws server.
PoC
// Splitting a crafted Sec-WebSocket-Protocol value with the vulnerable
// / *, */ pattern takes dramatically longer as the padding grows.
for (const length of [1000, 2000, 4000, 8000, 16000, 32000]) {
  const value = 'b' + ' '.repeat(length) + 'x';
  const start = process.hrtime.bigint();
  value.trim().split(/ *, */);
  const end = process.hrtime.bigint();
  console.log('length = %d, time = %f ns', length, end - start);
}
Details
Denial of Service (DoS) describes a family of attacks, all aimed at making a system inaccessible to its original and legitimate users. There are many types of DoS attacks, ranging from trying to clog the network pipes to the system by generating a large volume of traffic from many machines (a Distributed Denial of Service - DDoS - attack) to sending crafted requests that cause a system to crash or take a disproportional amount of time to process.
The Regular expression Denial of Service (ReDoS) is a type of Denial of Service attack. Regular expressions are incredibly powerful, but they aren't very intuitive and can ultimately end up making it easy for attackers to take your site down.
Let’s take the following regular expression as an example:
regex = /A(B|C+)+D/
This regular expression accomplishes the following:
- A - The string must start with the letter 'A'.
- (B|C+)+ - The string must then follow the letter A with either the letter 'B' or some number of occurrences of the letter 'C' (the + matches one or more times). The + at the end of this group states that we can look for one or more matches of the group.
- D - Finally, we ensure this section of the string ends with a 'D'.
The expression would match inputs such as ABBD, ABCCCCD, ABCBCCCD and ACCCCCD
In most cases, it doesn't take very long for a regex engine to find a match:
$ time node -e '/A(B|C+)+D/.test("ACCCCCCCCCCCCCCCCCCCCCCCCCCCCD")'
0.04s user 0.01s system 95% cpu 0.052 total
$ time node -e '/A(B|C+)+D/.test("ACCCCCCCCCCCCCCCCCCCCCCCCCCCCX")'
1.79s user 0.02s system 99% cpu 1.812 total
The entire process of testing it against a 30-character string takes around 52ms. But when given an invalid string, it takes nearly two seconds to complete the test, over thirty times as long as it took to test the valid string. The dramatic difference is due to the way regular expressions get evaluated.
Most Regex engines will work very similarly (with minor differences). The engine will match the first possible way to accept the current character and proceed to the next one. If it then fails to match the next one, it will backtrack and see if there was another way to digest the previous character. If it goes too far down the rabbit hole only to find out the string doesn’t match in the end, and if many characters have multiple valid regex paths, the number of backtracking steps can become very large, resulting in what is known as catastrophic backtracking.
Let's look at how our expression runs into this problem, using a shorter string: "ACCCX". While it seems fairly straightforward, there are still four different ways that the engine could match those three C's:
- CCC
- CC+C
- C+CC
- C+C+C
The engine has to try each of those combinations to see if any of them potentially match against the expression. When you combine that with the other steps the engine must take, we can use the RegEx 101 debugger to see that the engine takes a total of 38 steps before it can determine the string doesn't match.
From there, the number of steps the engine must use to validate a string just continues to grow.
| String | Number of C's | Number of steps |
|---|---|---|
| ACCCX | 3 | 38 |
| ACCCCX | 4 | 71 |
| ACCCCCX | 5 | 136 |
| ACCCCCCCCCCCCCCX | 14 | 65,553 |
By the time the string includes 14 C's, the engine has to take over 65,000 steps just to see if the string is valid. These extreme situations cause the regex engine to work very slowly (the number of steps grows exponentially with input size, as shown above), allowing an attacker to exploit this behavior and make the service consume excessive CPU, resulting in a Denial of Service.
Remediation
Upgrade ws to version 7.4.6, 6.2.2, 5.2.3 or higher.
References
medium severity
- Vulnerable module: tunnel-agent
- Introduced through: botkit@0.2.2
Detailed paths
-
Introduced through: imdb-bot@davidkelley/imdb-bot#1c4e5322502ff0403de63bd806b7bd0c1185f218 › botkit@0.2.2 › twilio@2.11.1 › request@2.74.0 › tunnel-agent@0.4.3
Remediation: Upgrade to botkit@0.6.11.
Overview
tunnel-agent is an HTTP proxy tunneling agent. Affected versions of the package are vulnerable to Uninitialized Memory Exposure.
A possible memory disclosure vulnerability exists when a value of type number is used to set the proxy.auth option of a request, resulting in possible uninitialized memory exposure in the request body.
This is a result of unchecked use of the Buffer constructor, whose insecure default constructor increases the odds of memory leakage.
Details
Constructing a Buffer class with integer N creates a Buffer of length N with raw (not "zero-ed") memory.
In the following example, the first call would allocate 100 bytes of memory, while the second example will allocate the memory needed for the string "100":
// uninitialized Buffer of length 100
x = new Buffer(100);
// initialized Buffer with value of '100'
x = new Buffer('100');
tunnel-agent's request construction uses the default Buffer constructor as-is, making it easy to append uninitialized memory to an existing list. If the value of the buffer list is exposed to users, it may expose raw server side memory, potentially holding secrets, private data and code. This is a similar vulnerability to the infamous Heartbleed flaw in OpenSSL.
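For comparison, Node's newer allocation APIs (Buffer.alloc, Buffer.from, Buffer.allocUnsafe, backported to Node 4.5) make the zero-filling behavior explicit; a minimal sketch:
// Safe alternatives to the deprecated Buffer(arg) constructor.
const zeroed = Buffer.alloc(100);       // 100 bytes, explicitly zero-filled
const fromString = Buffer.from('100');  // the ASCII bytes of the string '100'
const raw = Buffer.allocUnsafe(100);    // uninitialized memory, like Buffer(100)
raw.fill(0);                            // must be overwritten or zeroed before use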
Proof of concept by ChALkeR
// PoC: a numeric proxy.auth value reaches the Buffer constructor inside
// tunnel-agent, so uninitialized memory can end up in the proxied request.
require('request')({
  method: 'GET',
  uri: 'http://www.example.com',
  tunnel: true,
  proxy: {
    protocol: 'http:',
    host: '127.0.0.1',
    port: 8080,
    auth: 80 // a number as 'auth'
  }
});
You can read more about the insecure Buffer behavior on our blog.
Similar vulnerabilities were discovered in request, mongoose, ws and sequelize.
Remediation
Upgrade tunnel-agent to version 0.6.0 or higher.
Note: This is vulnerable only for Node <= 4.
References
low severity
- Vulnerable module: debug
- Introduced through: botkit@0.2.2
Detailed paths
-
Introduced through: imdb-bot@davidkelley/imdb-bot#1c4e5322502ff0403de63bd806b7bd0c1185f218 › botkit@0.2.2 › localtunnel@1.9.2 › debug@4.1.1
Remediation: Upgrade to botkit@0.7.5.
Overview
debug is a small debugging utility.
Affected versions of this package are vulnerable to Regular Expression Denial of Service (ReDoS) in the function useColors via manipulation of the str argument.
The impact is low: about 2 seconds of matching time for input around 50,000 characters long.
Note: CVE-2017-20165 is a duplicate of this vulnerability.
PoC
Use the following regex in the %o formatter.
/\s*\n\s*/
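As a rough, standalone illustration of why this pattern is slow (this is not the debug code path itself; the 50,000-character input mirrors the figure quoted above), collapsing whitespace with it on a long run of spaces that contains no newline forces quadratic backtracking:
// With no '\n' present, the engine retries \s* at every position, so the
// time spent grows roughly quadratically with the length of the input.
const payload = ' '.repeat(50000) + 'x';
const start = process.hrtime.bigint();
payload.replace(/\s*\n\s*/g, ' ');
const elapsedMs = Number(process.hrtime.bigint() - start) / 1e6;
console.log(`50,000 characters: ${elapsedMs.toFixed(0)} ms`);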
Details
See the Regular Expression Denial of Service (ReDoS) explanation in the ws advisory above.
Remediation
Upgrade debug to version 2.6.9, 3.1.0, 3.2.7, 4.3.1 or higher.
References
low severity
- Vulnerable module: ms
- Introduced through: botkit@0.2.2
Detailed paths
-
Introduced through: imdb-bot@davidkelley/imdb-bot#1c4e5322502ff0403de63bd806b7bd0c1185f218 › botkit@0.2.2 › twilio@2.11.1 › jsonwebtoken@5.4.1 › ms@0.7.3
Remediation: Upgrade to botkit@0.6.11.
Overview
ms is a tiny millisecond conversion utility.
Affected versions of this package are vulnerable to Regular Expression Denial of Service (ReDoS) due to an incomplete fix for the previously reported vulnerability npm:ms:20151024. That fix limited the length of accepted input strings to 10,000 characters, which turned out to be insufficient: a specially crafted string passed to the ms() function can still block the event loop for about 0.3 seconds (on a typical laptop).
Proof of concept
const ms = require('ms');
ms('1'.repeat(9998) + 'Q'); // takes about 0.3s
Note: Snyk's patch for this vulnerability limits input length to 100 characters. This new limit was deemed a breaking change by the author. Based on user feedback, we believe the risk of breakage is very low, while the value to your security is much greater, and therefore opted to still capture this change in a patch for earlier versions as well. Whenever patching security issues, we always suggest running tests on your code to validate that nothing has been broken.
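For applications that cannot upgrade or apply the patch, a similar length guard can be enforced at the call site; a minimal sketch (safeMs is a hypothetical wrapper, and the 100-character limit mirrors the patch described above):
const ms = require('ms');
// Hypothetical wrapper: reject overly long, attacker-influenced strings
// before they reach the vulnerable ms() parser.
function safeMs(value, maxLength = 100) {
  if (typeof value === 'string' && value.length > maxLength) {
    throw new RangeError(`ms() input longer than ${maxLength} characters`);
  }
  return ms(value);
}
safeMs('2 days');                   // parses normally (172800000)
// safeMs('1'.repeat(9998) + 'Q');  // would throw instead of blocking the event loop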
For more information on Regular Expression Denial of Service (ReDoS) attacks, go to our blog.
Disclosure Timeline
- Feb 9th, 2017 - Reported the issue to package owner.
- Feb 11th, 2017 - Issue acknowledged by package owner.
- April 12th, 2017 - Fix PR opened by Snyk Security Team.
- May 15th, 2017 - Vulnerability published.
- May 16th, 2017 - Issue fixed and version 2.0.0 released.
- May 21st, 2017 - Patches released for versions >=0.7.1, <=1.0.0.
Details
See the Regular Expression Denial of Service (ReDoS) explanation in the ws advisory above.
Remediation
Upgrade ms to version 2.0.0 or higher.
References
low severity
- Vulnerable module: follow-redirects
- Introduced through: botkit@0.2.2
Detailed paths
-
Introduced through: imdb-bot@davidkelley/imdb-bot#1c4e5322502ff0403de63bd806b7bd0c1185f218 › botkit@0.2.2 › localtunnel@1.9.2 › axios@0.19.0 › follow-redirects@1.5.10
Remediation: Upgrade to botkit@0.7.5.
Overview
Affected versions of this package are vulnerable to Information Exposure due to a leakage of the Authorization header to the same hostname during an HTTPS-to-HTTP redirect. An attacker who can listen in on the wire (or perform a MITM attack) is able to receive the Authorization header because the redirected request is sent over the insecure HTTP protocol, which neither encrypts the request nor authenticates the host it is sent to.
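A hedged sketch of the scenario (the hostname and token are made up; this is not from the advisory): a request that carries an Authorization header to an HTTPS endpoint which redirects to an http:// URL on the same host will, in affected versions, resend the header over cleartext:
const axios = require('axios'); // axios delegates redirect handling to follow-redirects
// Illustration only: if https://api.example.test/login redirects to
// http://api.example.test/login, affected versions forward this header over
// plain HTTP, where a network eavesdropper can read it.
axios.get('https://api.example.test/login', {
  headers: { Authorization: 'Bearer made-up-token' },
}).catch(() => { /* the response is irrelevant to the illustration */ });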
Remediation
Upgrade follow-redirects to version 1.14.8 or higher.