How to mess up your JavaScript code like a boss
Goodbye, reliable code! Leverage these concepts and language features, deploy your app and then... watch everything burn.
I actually felt that way from time to time when I ran into some of these things for the first time. It was as if all my hard work had just been nullified by a simple misunderstanding or a naive implementation.
This article is therefore my personal "best-of" collection of problems that came up due to my very naive usage of JavaScript. Some of them caused severe issues in my early apps and cost me countless hours of debugging, reading, finding and fixing.
However, this process made me a better developer and engineer, and I hope it will serve you and your projects well, too. Knowing these pitfalls and finding alternatives during the design phase will improve your app's robustness and maintainability. At least I think so. Leave a comment if you think otherwise.
In JavaScript you are actually pretty lost when you rely on checking the type of a variable:
// expected
typeof 135.791113 // "number"
typeof "foo" // "string"
typeof {} // "object"
typeof Symbol('foo') // "symbol"
typeof 1357911n // "bigint"
// somewhat unexpected for beginners
typeof [] // "object", expected something like "array"
typeof (async () => {}) // "function", expected "async function"
// totally not as expected
typeof NaN // "number", what!? Not a number is a number!?
typeof null // "object", how can nothing be an object!?
Relying on typeof can therefore not be considered safe, at least not without additional detailed checks. Relying on it in sensitive contexts can have severe consequences.
Related issues
- Runtime errors
- Injection of unwanted code into functions can become possible
- Breaking the application or server process becomes possible
Potential fixes
- Use a validation library (there are several, do your research)
- Define "interfaces" (easy in TypeScript, though) that check for primitive (own) properties of an input
- Extend your checks with additional conditions (for example, check that n is of type number and is not equal to NaN)
- Add a lot more edge test-cases, use fuzzing techniques to make sure you cover as many non-trivial inputs as possible
- Use TypeScript to have built-in type-checking at "compile time" (it's not a silver bullet though)
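A minimal sketch of such extended checks; the helper names (isRealNumber, isPoint) are my own example, not from any library:

```javascript
// A "safe number" check that closes the typeof gaps above:
// NaN is of type "number", so we exclude it explicitly.
const isRealNumber = x => typeof x === 'number' && !Number.isNaN(x)

// A shape check: instead of trusting typeof alone, verify the
// primitive types of the properties the code actually needs.
const isPoint = p =>
  p !== null &&
  typeof p === 'object' &&
  isRealNumber(p.x) &&
  isRealNumber(p.y)

isRealNumber(NaN)       // false, instead of the misleading typeof result
isPoint({ x: 1, y: 2 }) // true
isPoint([])             // false, although typeof [] is also "object"
```

This is only a sketch of the "check primitive (own) properties" idea; a validation library will cover far more edge cases.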
Relying on instanceof is not only a problem from an OOP perspective (implement against interfaces, not classes!) but it also does not always work out well:
// Proxy simply comes from another dimension....
new Proxy({}, {}) instanceof Proxy // TypeError: 'prototype' property of Proxy is not an object
// descendants of Object are still Objects
(() => {}) instanceof Object // true
// primitives disguising as Object
new String('foo') instanceof Object // true
new Number(1.357911) instanceof Object // true
// Object disguising as non-Object
Object.create(null) instanceof Object // false
const obj = {}
obj.__proto__ = null
obj instanceof Object // false
Related issues
- All of the aforementioned issues, plus
- Tight coupling is introduced easily
Potential fixes
- All of the aforementioned fixes, plus
- Check for properties and their types instead of a specific inheritance
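Such a property-based ("duck typing") check could look like this sketch; isEmitterLike and the methods it expects are my own illustrative example:

```javascript
// Instead of: input instanceof SomeEmitterClass
// check for the methods the code actually calls:
const isEmitterLike = obj =>
  obj !== null &&
  typeof obj === 'object' &&
  typeof obj.on === 'function' &&
  typeof obj.emit === 'function'

// works for any implementation, regardless of its prototype chain
isEmitterLike({ on: () => {}, emit: () => {} }) // true
isEmitterLike(Object.create(null))              // false, no such methods
```

This way, objects created with Object.create(null) or coming from another realm pass or fail based on what they can do, not on where their prototype came from.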
The prototypal inheritance of JavaScript brings further complexity when it comes to detecting an Object's properties. Some are inherited from the prototype, others are the object's own properties. Consider the following example:
class Food {
constructor (expires) {
this.expires = expires
this.days = 0
}
addDay () {
this.days++
}
hasExpired () {
return this.days >= this.expires
}
}
class Apple extends Food {
constructor () {
super(3) // 3 days
this.shape = 'sphere'
}
}
Now let's create a new Apple instance and see which of the properties are available:
const apple = new Apple()
// let's add this method just to this one apple instance
apple.isFresh = () => apple.days < apple.expires
'expires' in apple // true
'shape' in apple // true
'addDay' in apple // true
'hasExpired' in apple // true
'isFresh' in apple // true
As you can see here, we simply get true for every in check, because:
"The in operator returns true if the specified property is in the specified object or its prototype chain." (MDN)
Beware of confusing the in operator with the for..in statement. It gives you a totally different result:
for (const prop in apple) {
console.log(prop)
}
// output
"expires"
"days"
"shape"
"isFresh"
The for..in loop iterates only over the enumerable properties: it omits the class methods (which live on the prototype and are non-enumerable) but still lists the directly assigned properties.
So it seems to be safe to always use for..in? Let's take a look at a slightly different approach to our food-chain:
const Food = {}
Food.expires = 3 // assigned, right!?
const apple = Object.create(Food)
apple.shape = 'sphere' // also assigned
'expires' in apple // true
apple.hasOwnProperty('expires') // false
'shape' in apple // true
apple.hasOwnProperty('shape') // true
for (const prop in apple) {
console.log(prop)
}
// output
"expires"
"shape"
The apple is now created with Food as its prototype, which itself has Object as its prototype.
As you can see, the expires property is not an own property of apple this time; unlike in the ES6 class example above (where the constructor assigned it directly), it stays on the prototype. However, the property is still considered "enumerable", which is why it's listed in the for..in statement's output.
Related issues
- Validations can fail, creating false-positives or false-negatives
Potential fixes
- Make it clear whether validations check for direct (own) properties or look at the full prototype chain
- Avoid inheritance where possible and favor composition
- Otherwise try to stick with ES6 classes, as they spare you a lot of fiddling with the prototype chain
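Favoring composition could look like this sketch of the food example; the factory names (withExpiry, createApple) are my own rewrite:

```javascript
// Compose behavior from small factory functions instead of extending classes.
const withExpiry = (state, expires) => ({
  addDay: () => { state.days++ },
  hasExpired: () => state.days >= expires
})

const createApple = () => {
  const state = { days: 0 }
  return {
    shape: 'sphere',
    ...withExpiry(state, 3) // an apple expires after 3 days
  }
}

const apple = createApple()
apple.addDay()
apple.hasExpired() // false, only one of three days has passed
// every property is an own property here, so the in operator,
// for..in and hasOwnProperty all agree with each other
```

Since nothing lives on a custom prototype, the own-vs-inherited ambiguity from above simply disappears.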
The toString method is a built-in that descends from Object and returns a String representation of it. Descendants can override it to create a custom output that suits their internal structure.
However, you can't simply rely on it without knowing each specific implementation. Here is one example, where you might think you are being clever by using the toString method to fast-compare two Arrays:
[1, 2, 3].toString() === ["1",2,3].toString() // true, should be false
0.0.toString() === "0.0" // false, should be true
Also note that someone can easily override global toString implementations:
Array.prototype.toString = function () {
return '[I, am, compliant, to, your, checks]'
};
[1, 2, 3].toString() // "[I, am, compliant, to, your, checks]"
Related issues
- Runtime errors, due to wrong comparisons
- toString spoofing / overriding can break these checks and is considered a vulnerability
Potential fixes
- Use JSON.stringify plus sorting on arrays
- If JSON.stringify alone isn't enough, you may need to write a custom replacer function
- Use toLocaleString() or toISOString() on Date objects, but note they are also easily overridden
- Use an alternative Date library with better comparison options
There are built-in methods that help to parse a variable into a different type. Consider Number.parseInt, which allows parsing a (decimal) Number into an integer (still a Number).
However, this can easily get out of hand if you don't specify the radix parameter:
// expected
Number.parseInt(1.357911) // 1
Number.parseInt('1.357911') // 1
Number.parseInt(0x14b857) // 1357911
Number.parseInt(0b101001011100001010111) // 1357911
// boom
const hexStr = (1357911).toString(16) // "14b857"
Number.parseInt(hexStr) // 14
const binStr = (1357911).toString(2) // "101001011100001010111"
Number.parseInt(binStr) // 101001011100001010111
// fixes
Number.parseInt(hexStr, 16) // 1357911
Number.parseInt(binStr, 2) // 1357911
Related issues
- Calculations will end up wrong
Potential fixes
- Always use the radix parameter
- Only allow actual numbers as input; note that 0x14b857 and 0b101001011100001010111 are already of type number, because the 0x and 0b prefixes are resolved at parse time. For strings, parseInt auto-detects only the 0x prefix (radix 16), not 0b or other bases like octal.
You can easily write code that brings up unexpected results if you don't pay attention to potential type coercion.
To understand the difference from type conversion (which we discussed with an example in the previous section), check out this definition from MDN:
Type coercion is the automatic or implicit conversion of values from one data type to another (such as strings to numbers). Type conversion is similar to type coercion because they both convert values from one data type to another with one key difference — type coercion is implicit whereas type conversion can be either implicit or explicit.
The easiest example is a naive add-Function:
const add = (a, b) => a + b
add('1', 0) // '10'
add(0, '1') // '01'
add(0) // NaN, because Number + undefined = NaN
add(1, null) // 1, just don't think about why...
add(1, []) // "1", just don't think about why...
add(1, () => {}) // "1() => {}", I'll stop here
Related issues
- Totally uncontrollable results will happen
- Can break your application or server process
- Debugging back from an error to the function where the coercion happened will be lots of fun...
Potential fixes
- Validate input parameters
const isNumber = x => typeof x === 'number' && !Number.isNaN(x) // unfortunately NaN is of type number
const add = (a, b) => {
if (!isNumber(a) || !isNumber(b)) {
throw new Error('expected a and b to be a Number')
}
return a + b
}
add('1', 0) // throws
add('0', 1) // throws
add(0) // throws
add(1, null) // throws
add(1, []) // throws
add(1, () => {}) // throws
add(1, 2) // 3, yeay!
- Convert explicitly before coercion can happen
// preventing NaN by using parameter defaults
const add = (a = 0, b = 0) => {
let a1 = Number.parseFloat(a) // parseFloat takes no radix parameter
let b1 = Number.parseFloat(b)
// a1, b1 could be NaN so check them
if (!isNumber(a1) || !isNumber(b1)) {
throw new Error('Expected input to be number-alike')
}
return a1 + b1
}
add('1', 0) // 1
add('0', 1) // 1
add(0) // 0
add(1) // 1
add(1, null) // throws
add(1, []) // throws
add(1, () => {}) // throws
add(1, 2) // 3, yeay!
Simply using TypeScript won't fix the issue:
const add = function (a:number, b:number) {
return a + b
}
add(1, NaN) // NaN
You will therefore end up with one of the above strategies. Let me know if you came up with another strategy.
Another classic: checking whether a value is "defined" by simply relying on its truthiness.
const isDefined = x => !!x
isDefined('') // false, should be true
isDefined(0) // false, should be true
Related issues
- Runtime errors
- Undefined application state
- Potential security risk if user input is involved
Potential fixes
- Avoid truthy/falsy evaluations and evaluate strictly
- Additionally: have high test coverage; use fuzzing; test for edge cases
Example:
const isDefined = x => typeof x !== 'undefined'
isDefined('') // true
isDefined(0) // true
isDefined(null) // true <-- uh oh
Finally:
const isDefined = x => typeof x !== 'undefined' && x !== null
isDefined('') // true
isDefined(0) // true
isDefined(null) // false
If you don't want to use the typeof check here, you can alternatively use x !== (void 0).
A very underrated issue arises when properties are accessed via Object bracket-notation with user input.
This is because bracket-notation even allows overriding properties of the prototype chain, like __proto__ or prototype, and thus potentially affects all Objects in the current scope.
With prototype pollution an attacker is able to manipulate properties in the prototype chain and exploit this fact to gain privileged access.
Consider the following example:
const user = { id: 'foo', profile: { name: 'Jane Doe', age: 42 }, roles: { manager: true } }
function updateUser(category, key, value) {
if (category in user) {
user[category][key] = value
}
}
// good use
updateUser('profile', 'locale', 'de-DE')
// bad use
updateUser('__proto__', 'exploit', 'All your base are belong to us')
// consequence of this
const newObject = {}
newObject.exploit // "All your base are belong to us"
I admit this example is inherently dangerous, as it contains so many problems, but I tried to break it down to give you an idea of how easily prototype pollution can occur with bracket notation.
Related issues
- Exploitable vulnerability
Potential fixes
- Use explicit variable names
function updateUserProfile(key, value) {
if (key === 'name') user.profile.name = value
if (key === 'age') user.profile.age = value
}
- Use Object.prototype.hasOwnProperty to check
function updateUser(category, key, value) {
if (Object.prototype.hasOwnProperty.call(user, category)) {
user[category][key] = value
}
}
updateUser('__proto__', 'exploit', 'All your base are belong to us')
const newObject = {}
newObject.exploit // undefined
- Use a Proxy Object
const forbidden = ['__proto__', 'prototype', 'constructor']
const user = new Proxy({ id: 'foo', profile: { name: 'Jane Doe', age: 42 }, roles: { manager: true } }, {
get: function (target, prop, receiver) {
if (forbidden.includes(prop)) {
// log this incident
return undefined
}
// otherwise forward to the original object
return Reflect.get(target, prop, receiver)
}
})
function updateUser(category, key, value) {
user[category][key] = value
}
updateUser('profile', 'locale', 'de-DE')
updateUser('__proto__', 'exploit', 'All your base are belong to us') // TypeError, the trap returned undefined
Note: libraries are not a silver bullet here!
We already covered the problems with 'number' types in previous sections:
const isNumber = n => typeof n === 'number'
isNumber(NaN) // true
isNumber(Number.MAX_VALUE * 2) // true
isNumber(Number.MIN_VALUE / 2) // true
However, there is much more to validating numerical input. Consider a few potential cases here:
- value is expected to be integer but is a float
- value is not a "safe" integer (max./min. supported Int value)
- value is +/-Infinity but expected to be finite
- value is below Number.MIN_VALUE (the smallest positive representable value)
- value is beyond Number.MAX_VALUE
The potential issues should be clear by now (unless you skipped the first couple of sections) so let's find a modular way to handle as many of these cases as possible.
const isValidNumber = num => (typeof num === 'number') && !Number.isNaN(num)
const num = Number.parseFloat({}) // => NaN
isValidNumber(num) // false, as expected
We simply don't want "not a number" to be interpreted as a number, that's just insane.
const isValidInteger = num => isValidNumber(num) && Number.isSafeInteger(num)
isValidInteger({}) // false
isValidInteger(Number.parseFloat({})) // false
isValidInteger(1.357911) // false
isValidInteger(1.0) // true
isValidInteger(1) // true
Note the edge case of 1.0, which JS internally treats as an integer:
let n = 1
n.toString(2) // "1"
const isInFloatBounds = num => isValidNumber(num) && num >= Number.MIN_VALUE && num <= Number.MAX_VALUE
isInFloatBounds(Infinity) // false
isInFloatBounds(-Infinity) // false
// check for MAX_VALUE
isInFloatBounds(100000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000) // true
isInFloatBounds(1000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000) // false
// check for MIN_VALUE
isInFloatBounds(0.00000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000001) // true
isInFloatBounds(0.000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000001) // false
Make sure the value is within the usable range. Everything beyond that should be handled using BigInt or a specialized library for large Numbers.
Also note that although these values are considered valid floats, you may still find odd interpretations:
const almostZero = 0.00000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000001
isInFloatBounds(almostZero) // true
almostZero // 1e-323
const zero = 0.000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000001
isInFloatBounds(zero) // false
zero // 0
const isValidFloat = num => {
if (!isValidNumber(num)) return false
if (num === 0) return true // this is debatable
return isInFloatBounds(num < 0 ? -num : num)
}
This section already reveals the next one: simply avoid any serious floating-point computations with Number in JavaScript!
In order to understand this section, let's read up on the JavaScript Number implementation:
The JavaScript Number type is a double-precision 64-bit binary format IEEE 754 value, like double in Java or C#. This means it can represent fractional values, but there are some limits to what it can store. A Number only keeps about 17 decimal places of precision; arithmetic is subject to rounding. The largest value a Number can hold is about 1.8E308. Numbers beyond that are replaced with the special Number constant Infinity.
Some examples, where this can become problematic:
const n = 0.1 + 0.2 // 0.30000000000000004
n === 0.3 // false
Think of systems where currencies are involved, or where calculation results are used for life-affecting decisions. Even the smallest rounding errors can lead to catastrophic consequences.
Trying to convert a float to hex or binary and back to float is not possible out of the box:
const num = 1.357911
const hex = num.toString(16) // 1.5ba00e27e0efa
const bin = num.toString(2) // 1.010110111010000000001110001001111110000011101111101
Number.parseFloat(hex, 16) // 1.5, the radix argument is silently ignored and parsing stops at the first non-decimal character
Number.parseFloat(bin, 2) // 1.01011011101, the binary digits are parsed as a decimal number
// integers
const num = Number.MAX_SAFE_INTEGER
num // 9007199254740991
num + 100 // 9007199254741092, should be 9007199254741091
// floats
const max = Number.MAX_VALUE
max // 1.7976931348623157e+308
max * 1.00001 // Infinity
Potential fixes
- Use BigInt
- Use Math.fround
- Use a library for precise arithmetic
- Use typed arrays to precisely convert between numerical systems
- Write your code in a way that you can easily replace plain Number arithmetic with one of the above solutions
Note: I am not digging deeper into this, as my best advice is to use a library that handles arithmetic precision for you. Your own implementation will easily still result in errors.
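To illustrate the "replaceable arithmetic" advice with currencies: a common workaround is to compute in the smallest currency unit, so that only integer arithmetic happens. A sketch with my own helper names, not a full money library:

```javascript
// Work in integer cents instead of fractional euros/dollars.
const toCents = euros => Math.round(euros * 100)
const toEuros = cents => cents / 100

// 0.1 + 0.2 in euros, computed safely in cents:
const sum = toCents(0.1) + toCents(0.2) // 30
toEuros(sum) // 0.3, instead of 0.30000000000000004
```

The helpers are trivial to swap out later for BigInt or a precise-arithmetic library, which is exactly the point of the last fix above.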
This one is not definitively good or bad; it rather depends on the situation. If you are certain that the involved evaluations will always result in a boolean value, then it's safe to use them.
As an example, you can review the extended Number checks above. However, consider the following case: you want to write a function that checks whether a given array is filled.
const isFilled = arr => arr && arr.length > 0
isFilled([ ]) // false
isFilled([1]) // true
isFilled() // undefined
As you can see, the function does not have a well-defined return type. It should return either true or false, but never undefined.
In these cases you should write your code more verbosely and explicitly, in order to make sure that functions really return only valid values:
Possible solution
const isFilled = arr => arr ? arr.length > 0 : false
isFilled([ ]) // false
isFilled([1]) // true
isFilled() // false
Better
This solution is still half-baked; it is better to throw an error to ensure the function received proper input to reason about. Fail early, fail often to make your application more robust:
const isFilled = arr => {
if (!Array.isArray(arr)) {
throw new TypeError('expected arr to be an Array')
}
return arr.length > 0
}
isFilled([ ]) // false
isFilled([1]) // true
isFilled() // throws Uncaught TypeError
Related issues
- Ambiguous return values, leading to potential branching issues and runtime errors
- Checks may fail
- Business/application logic becomes unreliable
Potential fixes
- Use the ternary operator
- Return explicitly
- Use TypeScript
- Write extensive unit tests to ensure only valid return values are involved
If you have worked a bit longer in the JavaScript realm, you may still remember these "pseudo"-private members: if they begin with an underscore, they are intended (by convention) to be private and not to be used directly:
const myObj = {
_count: 0,
count: function () {
return this._count++
}
}
Related issues
- These properties are enumerable by default
- They can be manipulated without any restrictions
- By exploiting a prototype-pollution vulnerability they can theoretically be accessed by users; on the client they can be accessed anyway if the containing Object is accessible to the user
Potential fixes
- Use closures with real private variables
const createCounter = () => {
let count = 0
return {
count: () => count++
}
}
- Use a Proxy Object to have fine-grained control over any member access
- Use classes with private features
- Use my class-privacy package if you can't support private members yet
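Native private class fields (the "private features" mentioned above) use the # syntax and are supported in all modern engines; a quick sketch of the counter:

```javascript
class Counter {
  // truly private: not enumerable and not accessible from outside
  #count = 0

  count () {
    return this.#count++
  }
}

const counter = new Counter()
counter.count()     // 0 on the first call, then increments internally
'#count' in counter // false, no such string-keyed property exists
// counter.#count   // SyntaxError when accessed outside the class
```

Unlike underscore-prefixed members, #count cannot be read, overwritten or enumerated from outside the class body, so the convention becomes a guarantee.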
Some honorable mentions that didn't make it into their own section:
- Using eval without knowing exactly what you're doing
- Passing String literals to setTimeout (or setInterval)
- Relying on encodeURIComponent
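On the encodeURIComponent point: it follows the older RFC 2396 and leaves a handful of characters unescaped, which can matter when building strict RFC 3986 URIs. The stricter variant below follows the pattern documented on MDN:

```javascript
// encodeURIComponent leaves these characters unescaped: ! ' ( ) *
encodeURIComponent("it's (really) cool!*") // "it's%20(really)%20cool!*"

// RFC 3986-compliant variant, as documented on MDN:
const fixedEncodeURIComponent = str =>
  encodeURIComponent(str).replace(/[!'()*]/g,
    c => '%' + c.charCodeAt(0).toString(16).toUpperCase())

fixedEncodeURIComponent("it's (really) cool!*")
// "it%27s%20%28really%29%20cool%21%2A"
```

So don't rely on encodeURIComponent alone when the receiving end expects fully percent-encoded reserved characters.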