JavaScript vs. WASM: Monte-Carlo Simulation

February 18th, 2024

I've always been fascinated by WebAssembly (WASM). I often get asked what it is about WASM that excites me. My answer is that it's the unlikely intersection of web development and systems engineering - two very different ends of the spectrum. Taking code typically written in C or C++, often containing system-level calls expecting POSIX-compliant interfaces, and compiling it to a binary format that can run natively in web browsers, is a crazy proposition - and that's exactly why I love WASM so much. If you've read my post advocating for generalist software engineers, you will totally see why this makes sense in my world.

Today's edition is going to be a bit more in the weeds, so buckle up!

Why am I here?

Excellent question. You're here to help me with a little experiment. But before we dig deeper, let me tell you why you're not here - we won't be discussing anything related to WebAssembly System Interface (WASI) today. If you were hoping to read more about that, I'm sorry to disappoint you, but I promise I'll write a post about WASI in the future, because that's indeed an exciting space where most of the innovation is happening!


Since its introduction in 2017, WASM has exploded in popularity. I'm sure you already know what WASM is/does, but if not, here is a short summary: WASM is a binary instruction format for a virtual ISA, designed for portability. Similar to how Java compiles to bytecode which runs on the JVM, languages such as C, C++, Rust and Go can compile to WASM, and that code can then run on various WASM runtimes (VMs). There are many runtimes for WASM, with modern web browsers being the most prominent example. There is also a growing number of server-based runtimes for WASM.

The original intention behind WASM was to open up web browsers to the possibility of running intensive calculations natively, and with predictable performance. You see, the V8 JavaScript engine is so well optimized that it should, in theory, have performance parity with WASM, ceteris paribus. However, in practice, things are not so clear. Everything I am about to explain here is covered in glorious detail by Surma in this talk, so I encourage you to watch it if this interests you!

When a user runs JavaScript code in the browser, the entry point for that code in the V8 engine is the Ignition interpreter. Ignition collects data about the code as it runs, and at some point, after enough data has been collected, it hands parts of the code off to an optimizing compiler called TurboFan. TurboFan optimizes the code and emits machine code, which yields a performance gain. But that entire process comes with a big caveat. As Ignition observes the running JavaScript code, it operates on a set of assumptions which determine which code gets sent to TurboFan for further optimization. Due to the dynamic nature of JavaScript, these assumptions can, and do, change as the code runs. And when that happens, the code optimized by TurboFan gets chucked out, and the process restarts. This is called a deopt. In other words, peak performance in JavaScript is not deterministic.
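To make the deopt mechanics concrete, here is a minimal, hypothetical sketch (not from any benchmark; `magnitude` and its call pattern are invented for illustration) of the kind of object-shape change that can invalidate TurboFan's assumptions. Whether a deopt actually fires depends on the engine version and its heuristics:

```javascript
// Hypothetical illustration: a function V8 can specialize while every
// argument shares the same "shape" ({x, y}), and which may be deoptimized
// once a differently-shaped object shows up.
function magnitude(point) {
  return Math.sqrt(point.x * point.x + point.y * point.y);
}

// Warm-up: many calls with one consistent shape let the optimizing
// compiler specialize the property accesses.
let total = 0;
for (let i = 0; i < 100000; i++) {
  total += magnitude({ x: i, y: i + 1 });
}

// A new shape (different property order, an extra property) can invalidate
// the assumptions baked into the optimized code and trigger a deopt.
total += magnitude({ y: 3, x: 4, label: "outlier" });
console.log(total > 0); // still correct - just potentially slower for a while
```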

So how does it work with WASM, you ask? WASM's entry point is not Ignition (WASM is a binary format which is compiled, not interpreted), but a baseline compiler called Liftoff. Liftoff quickly generates baseline machine code so execution can start, while TurboFan optimizes the same functions in the background. Once TurboFan is done optimizing the code, that code continues running throughout the life of the application, and no deopts happen. In other words, peak performance in WebAssembly is deterministic.

And that's really the crux of the matter here. Whilst JavaScript can run at peak performance thanks to TurboFan optimizations, deopts will happen due to JS' dynamic nature. This means that, on average, JavaScript code will not run at peak performance. WebAssembly code, however, is guaranteed to run at peak performance all the time.

Today we are going to test how much of a performance delta we can find between JS and WASM by measuring how long two functionally identical code snippets - one written in C and compiled to WASM, the other written in JavaScript - take to complete execution. And of course we will use option pricing code for the test, because it just makes this entire exercise more exciting!

Monte Carlo

It's a city. But in this post's context, it's a method for numerically estimating the possible outcomes of an uncertain event (named after that city). In our case, the uncertain event is the evolution of an asset price into the future. If I asked you where, given a set of parameters, the share price of Netflix Inc. would end up in a year's time, you would agree there are many possible outcomes. Let's build our intuition further:

You own 10 shares of Netflix today and you plan on holding them for another year. I send you a text asking if you could sell me (I buy from you) the right, but not the obligation, to buy - in exactly one year from today - 10 Shares of Netflix at a predetermined price of 100 US Dollars per share.

How would you determine the value of that contract? Intuitively, you would want to have a good idea of where the Netflix share price could be one year from today. You would also want to know the likelihood of the price I offered to pay per share ending up lower or higher than the actual Netflix share price a year from today. The contract I described above is a European call option. European, in this context, bears absolutely no geographic meaning - it simply means that the buyer of the contract can exercise the right (but does not have the obligation) to buy 10 shares of Netflix at 100 per share exactly at the expiry date - not before. Monte Carlo simulation allows us to simulate the share price of Netflix into the future, given a set of parameters:

  1. The share price of Netflix today ($S$).
  2. The price I agreed to pay per Netflix share in the future - the strike ($K$).
  3. The date on which I must decide whether to exercise my right (but not the obligation) to buy at the predetermined price - the expiry ($T$).
  4. The expected fluctuation in Netflix's share price between today and the expiry date ($\sigma$).
  5. The current risk-free interest rate ($r$).

Given those parameters, we can simulate a path for Netflix's share price between today and the "horizon" - the expiry date of the European call option contract. Each time we repeat this simulation, we get another data point as to where Netflix's share price could be in the future. Repeat this experiment enough times, and you end up with a distribution of the evolution of Netflix's share price - very useful data to have.

The MC Formula

$$ S_{t+1} = S_t e^{\left( r - \frac{1}{2}\sigma^2 \right)\Delta t + \sigma\sqrt{\Delta t} \varepsilon} $$

This formula represents the price evolution of the Netflix (NFLX) stock price into the future, but it does so one step at a time. $S_{t+1}$ is the value of NFLX tomorrow, and it's calculated by taking the value of NFLX today, $S_t$, and multiplying it by a factor which simulates the randomness of asset movements from one time period to the next. This calculation is applied iteratively on an asset price for a number of steps, until we reach the final asset price, projected into the future.
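As a sketch of how that iteration might look in code (a hypothetical helper, not the benchmark code that appears later; the Gaussian sampler is passed in as a function):

```javascript
// Hypothetical sketch: apply S_{t+1} = S_t * exp((r - 0.5*sigma^2)*dt +
// sigma*sqrt(dt)*eps) for N steps, recording the whole path.
function simulatePath(S0, r, sigma, T, N, gaussian) {
  const dt = T / N;                             // size of one time step
  const drift = (r - 0.5 * sigma * sigma) * dt; // deterministic part
  const shock = sigma * Math.sqrt(dt);          // scale of the random part
  const path = [S0];
  let S = S0;
  for (let i = 0; i < N; i++) {
    S *= Math.exp(drift + shock * gaussian()); // one random step forward
    path.push(S);
  }
  return path;
}
```

Feeding it a standard normal sampler (like the Box-Muller draw implemented later in this post) produces one simulated path of, say, 252 daily prices.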

When I studied this material, earlier in my career, the math didn't always illustrate the concept clearly enough. And so I often found myself tinkering with Excel (and later Python), trying to bring these dynamics to life by adding visualization with an element of interactivity. Doing so helped me tremendously to develop an intuition for these dynamics. With that in mind, below you will find an interactive Monte Carlo simulator with full visuals of the path. Let's go over the details:

  1. The share price as of the pricing date, $S_0$, is set to 100.
  2. The expected fluctuation, or volatility ($\sigma$), of the asset price, for the next one calendar year, is set to 20%.
  3. We are simulating the asset one year forward, or 252 trading days. This is the number you'll see on the x-axis.
  4. The risk-free rate we're using here is 5% ($r$).
  5. The asset price for each day is displayed on the y-axis.

Click on the button to add more simulations to the chart. You'll see that each path is completely different. After adding a few paths, you'll start developing an appreciation for where this asset price will be in the future. You will also see that it's very hard to arrive at a single number, and it may be better to consider a range of outcomes instead.

Black-Scholes again

If quant finance is not your thing, then I promise we're almost done! It would just be a wasted opportunity not to explain the relationship between Black-Scholes (BS) and Monte Carlo simulation. If you recall from an earlier post, Black-Scholes is a closed-form solution for pricing European options. BS uses stochastic calculus to model the same dynamics you saw above, but in a formula you can just plug values into - no numerical simulation. Take a moment to appreciate how special this is! Myron Scholes and Robert Merton (the third, often forgotten name in the Black-Scholes-Merton formula) won a Nobel Prize in 1997 for this work. Sadly, Fischer Black passed away in 1995.

The BS formula is much quicker and more efficient than Monte Carlo simulation for calculating European option prices. Note that this is not the case for other types of options, such as barrier, exotic and American options. You are probably asking yourself:

Well, why bother with Monte Carlo simulation, then?

Excellent question! Because Monte Carlo simulation is computationally intensive, it is an excellent vehicle to drive our experiment of testing whether JavaScript is faster than WebAssembly, or vice-versa. And because we also have Black-Scholes, we can use it to sanity-check our calculations, given that our MC simulations - with enough paths and time steps - should converge on the BS price.
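As a taste of that convergence, here is a standalone sketch (assumed parameters, not the benchmark code): for S = 100, K = 100, r = 5%, sigma = 20% and T = 1 year, the closed-form Black-Scholes call price is roughly 10.45, and a Monte Carlo estimate should land close to it as the path count grows:

```javascript
// Standalone convergence sketch (assumed parameters, not the benchmark code).
function gaussian() {
  // Box-Muller: turn two uniform draws into one standard normal draw.
  let u1 = 0, u2 = 0;
  while (u1 === 0) u1 = Math.random();
  while (u2 === 0) u2 = Math.random();
  return Math.sqrt(-2 * Math.log(u1)) * Math.cos(2 * Math.PI * u2);
}

function mcCall(S, K, r, sigma, T, M) {
  let sum = 0;
  for (let i = 0; i < M; i++) {
    // Terminal price in a single step, then the call payoff for this path.
    const ST = S * Math.exp((r - 0.5 * sigma * sigma) * T + sigma * Math.sqrt(T) * gaussian());
    sum += Math.max(ST - K, 0);
  }
  return Math.exp(-r * T) * (sum / M); // discounted average payoff
}

// The closed-form Black-Scholes price for these inputs is ~10.45.
console.log(mcCall(100, 100, 0.05, 0.2, 1, 500000).toFixed(2));
```

With 500,000 paths the estimate typically lands within a few cents of the closed-form value; the standard error shrinks with the square root of the path count.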

The Game Plan

We write two implementations for the Monte-Carlo-based formula above, one in JavaScript, and the other in C, which is then compiled to WASM. We could have used any language for the WASM part, and C is a personal choice. I could (should I? Let me know!) write a separate post about my love for C and its simplicity, but that's something for another day 😁.

We will try to keep the two implementations as close as possible to each other, so that the performance comparison is as accurate as it can be. But what I want to stress is that this is not a scientific experiment. There will be some elements of the code which we will not implement from scratch. For example, for random number generation, we use Math.random() in JS, and rand() in C. Both of these follow a similar logic in their implementations, but they are not identical. I also didn't bother with a seed for the pseudo-random generators, as this is intended to be a "real world" scenario with different results each time. The erf function in C and JS also follows the same logic - but with different implementations. This means it's likely that some of the performance difference we capture in our tests can be attributed to these differences - and that's okay!
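If you ever do want reproducible runs on the JS side, one common option (my suggestion here, not something the benchmark uses) is a tiny seedable generator such as mulberry32:

```javascript
// mulberry32: a small, widely used seedable PRNG. Not used in the benchmark;
// shown only as an option for reproducible simulation runs.
function mulberry32(seed) {
  let a = seed >>> 0;
  return function () {
    a = (a + 0x6D2B79F5) >>> 0;
    let t = a;
    t = Math.imul(t ^ (t >>> 15), t | 1);
    t ^= t + Math.imul(t ^ (t >>> 7), t | 61);
    return ((t ^ (t >>> 14)) >>> 0) / 4294967296; // uniform in [0, 1)
  };
}

const rngA = mulberry32(42);
const rngB = mulberry32(42);
console.log(rngA() === rngB()); // true - same seed, same sequence
```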

Think of this experiment as an applied one - a hypothetical scenario where an engineer at ACME Tech Valley is trusted with finding the optimal solution for fast MC simulations in the browser. This is the spirit in which this experiment is conducted.

Let's Dive In

Recall the MC formula:

$$ S_{t+1} = S_t e^{\left( r - \frac{1}{2}\sigma^2 \right)\Delta t + \sigma\sqrt{\Delta t} \varepsilon} $$

This formula describes the evolution of the asset price from one time step to the next. This is enough for projecting a price into the future. But it's not sufficient for computing the payoff of a European call option. Because this is a European option, we only care about the price at expiry. For a strike price of $K$ and a final asset price (post-simulation) $S_T$, the payoff formula for a long (we buy) European call option at expiry is as follows:

$$ C(T) = \max(S_{T} - K, 0) $$

In layman's terms:

The value of the call option at time $T$ is the price of the asset at expiry ($S_{T}$) minus the price at which we agreed to buy the asset at expiry ($K$), or zero - whichever of the two is greater. Basically, if we agreed to buy at a lower price than the actual price of the asset at option expiry, that's what the option is worth at expiry. Otherwise it's worthless.
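The payoff formula translates to a single line of code; a quick sketch (the numbers are illustrative):

```javascript
// max(S_T - K, 0): the value of a long European call at expiry.
const callPayoff = (ST, K) => Math.max(ST - K, 0);

console.log(callPayoff(130, 100)); // 30 - we buy at 100 what's worth 130
console.log(callPayoff(85, 100));  // 0 - the right to buy at 100 is worthless
```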

From a code standpoint, most of the MC formula is quite simple to implement using basic variables with arrays and other built-in functions such as sqrt. There is one component which is less straightforward to implement, and that's the epsilon ($\varepsilon$) symbol. This symbol denotes a random number drawn from a Gaussian (normal) distribution. We implement the same logic for drawing that number in both JS and C. The algorithm we'll use is Box-Muller.

Box-Muller

C implementation
double box_muller() {
  double u1, u2;
  // Draw two uniforms in (0, 1); reject zeros so log(u1) stays finite.
  do {
      u1 = (double) rand() / (RAND_MAX + 1.0);
      u2 = (double) rand() / (RAND_MAX + 1.0);
  } while (u1 <= 0.0 || u2 <= 0.0);
  return sqrt(-2 * log(u1)) * cos(2 * M_PI * u2);
}

RAND_MAX and M_PI are both macros which expand to numbers. The former is defined in the stdlib.h header, and the latter in math.h (strictly speaking, M_PI is a POSIX extension rather than standard C). For the function calls, rand() is declared in stdlib.h, and sqrt, log and cos are all declared in math.h.

JS implementation
function boxMullerJS() {
  let u1 = 0,
    u2 = 0;
  // Math.random() returns values in [0, 1); re-draw zeros so Math.log(u1) stays finite.
  while (u1 === 0) u1 = Math.random();
  while (u2 === 0) u2 = Math.random();

  return Math.sqrt(-2.0 * Math.log(u1)) * Math.cos(2.0 * Math.PI * u2);
}

MC Code

We are introducing a small optimization in our MC implementation. In each iteration in our loop, we calculate an entire path. Usually, we would have an $N \cdot M$ matrix, where $M$ is the number of paths, and $N$ is the number of steps in each path, where the step size is calculated as $T \div N$. This is useful for simulations where we care about the path each simulation is taking, and not just the final price. Barrier options and other exotic options are a good example of cases where we care about the path and not just the final price. Because we're pricing a European option, we only care about the final price. This means we can get a final price for the asset by using a single time step, $T$, where $S_{T}$ is the price at expiry. That allows us to treat each iteration in the loop as a full path.
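For the curious, here is why the single big step is legitimate for this model: the per-step factors multiply, so their exponents add, and the sum of $N$ independent Gaussian terms, each with variance $\Delta t$, is itself Gaussian with variance $N \Delta t = T$:

$$ S_T = S_0 \prod_{i=1}^{N} e^{\left( r - \frac{1}{2}\sigma^2 \right)\Delta t + \sigma\sqrt{\Delta t}\, \varepsilon_i} = S_0\, e^{\left( r - \frac{1}{2}\sigma^2 \right)T + \sigma\sqrt{T}\, \varepsilon} $$

where $\varepsilon$ is a single standard normal draw. This is exact for this model - not an approximation - so nothing is lost by collapsing the path into one step for a European payoff.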

We calculate a payoff for each path, and the option price is the average of all path payoffs, discounted back to today. The exp(-rT) factor performs that discounting at the risk-free rate (a time value of money concept: a dollar today is worth more than a dollar a year from today).

C implementation
double monte_carlo_call(double S, double K, double r, double sigma, double T, int M) {
  srand(time(NULL)); // seed rand() from the clock so each run differs
  double sum = 0;
  for (int i = 0; i < M; i++) {
    double z = box_muller();
    // Terminal price in a single step, then the call payoff for this path.
    double ST = S * exp((r - 0.5 * sigma * sigma) * T + sigma * sqrt(T) * z);
    double payoff = fmax(ST - K, 0);
    sum += payoff;
  }
  // Discount the average payoff back to present value.
  return exp(-r * T) * (sum / M);
}

There is a little quirk with the srand function call here. The rand() function used in the Box-Muller algorithm would otherwise use a default seed of 1, which means the same sequence of numbers being generated each time we call the function. We therefore need to call srand() to set a custom seed. And our call to time(NULL) takes the value of the computer's internal clock - which changes continuously, and between calls - as the seed value. This means we set a different seed each time we run the simulation, deliberately giving up reproducibility so that each run produces fresh random numbers.

JS implementation
const monteCarloJS = (S, K, r, sigma, T, M) => {
  let sum = 0;
  for (let i = 0; i < M; i++) {
    let rand = boxMullerJS();
    // Terminal price in a single step, then the call payoff for this path.
    let stockPrice = S * Math.exp((r - 0.5 * sigma * sigma) * T + sigma * Math.sqrt(T) * rand);
    let payoff = Math.max(stockPrice - K, 0);
    sum += payoff;
  }
  // Discount the average payoff back to present value.
  return (sum / M) * Math.exp(-r * T);
};

Black-Scholes code

The BS implementations are there as a sanity check to make sure our MC numbers are not completely out of whack. This benchmarking technique is quite common in derivative valuation shops and finance in general. You always want a benchmark or reference point to check against your calculations.

C implementation
double cdf(double x) {
  // Standard normal CDF expressed via the error function.
  return 0.5 * (1 + erf(x / sqrt(2)));
}

double black_scholes_call(double S, double K, double r, double sigma, double T) {
  double d1 = (log(S / K) + (r + 0.5 * sigma * sigma) * T) / (sigma * sqrt(T));
  double d2 = d1 - sigma * sqrt(T);
  return S * cdf(d1) - K * exp(-r * T) * cdf(d2);
}

The erf function is declared in math.h.

JS implementation
const blackScholesJS = (S, K, r, vol, T) => {
  function cdf(x) {
    // Standard normal CDF via math.js' erf implementation.
    return 0.5 * (1 + math.erf(x / Math.sqrt(2)));
  }

  const d1 =
    (Math.log(S / K) + (r + (vol * vol) / 2) * T) / (vol * Math.sqrt(T));
  const d2 = d1 - vol * Math.sqrt(T);
  return S * cdf(d1) - K * Math.exp(-r * T) * cdf(d2);
};

To stay as close as possible to the C implementation, I needed an implementation of the erf function, as it's not available in JavaScript's built-in Math object. erf is quite complex to implement, so I used the math.js implementation, which is quite solid. We access it via the math.erf() call.
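If you'd rather avoid the math.js dependency, a classic alternative (my suggestion, not what this post's benchmark uses) is the Abramowitz & Stegun polynomial approximation of erf (formula 7.1.26, maximum absolute error around 1.5e-7 - plenty for a sanity check):

```javascript
// Polynomial approximation of erf (Abramowitz & Stegun 7.1.26).
// Shown as a dependency-free alternative to math.erf; not this post's code.
function erfApprox(x) {
  const sign = x < 0 ? -1 : 1;
  const ax = Math.abs(x);
  const t = 1 / (1 + 0.3275911 * ax);
  const poly =
    ((((1.061405429 * t - 1.453152027) * t + 1.421413741) * t -
      0.284496736) * t + 0.254829592) * t;
  return sign * (1 - poly * Math.exp(-ax * ax)); // erf is an odd function
}
```

Swapping math.erf(x) for erfApprox(x) in cdf changes the result only far beyond any precision that matters for this comparison.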

Have you made it this far? I thank you for the perseverance 🤗. It's time for the fun part - testing things out! Because we're testing everything in a browser environment, it means you can run the test, right now, in your browser!

Try it yourself!

You can tweak the parameters of the MC pricer. I'd encourage you to test different volatility, expiry and path values. Click on Run each time you want to run a simulation. When you do, a web worker will run MC simulations and calculate BS prices using both JS and WASM. You will see a result matrix showing you the diff between all those calculations, as well as the time the MC simulations took on both JS and WASM.
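If you want to reproduce the measurement outside this page, the timing side can be as simple as wrapping each pricer in performance.now() calls. A sketch of the approach (the actual page uses a web worker, and the workload below is just a stand-in):

```javascript
// Minimal timing harness sketch: performance.now() is available in both
// browsers and modern Node. The workload below is a stand-in, not the pricer.
function timeIt(label, fn) {
  const start = performance.now();
  const result = fn();
  const elapsedMs = performance.now() - start;
  console.log(`${label}: ${elapsedMs.toFixed(1)}ms`);
  return { result, elapsedMs };
}

// Example workload: sum a million Math.random() calls.
timeIt("random burst", () => {
  let sum = 0;
  for (let i = 0; i < 1e6; i++) sum += Math.random();
  return sum;
});
```

In the real comparison you would pass monteCarloJS and the WASM-exported monte_carlo_call through the same harness, ideally several times each, and compare the averages.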

The Results

This is one of the most interesting experiments I have ever conducted. To start with, the first run yielded similar results for both calculations. The performance delta described below only showed up from the second run onwards.

For the default parameters, I see the JS simulation taking an average of ~230ms longer than the WASM one on my machine (~330ms WASM vs. ~560ms JS). This difference seems to scale with the number of paths! When I change the number of paths to 100000000 (adding one zero) whilst keeping all other parameters the same, the JS calculation takes 3.2 seconds longer than the WASM one. For 200000000 simulations (double the previous number), I get a time difference of 6.3 seconds.

Another interesting observation: if I have the Dev Tools open (I'm using Edge. Yes, from Microsoft 😂), the WASM performance drops significantly - almost to parity with JS. Granted, this is not a common scenario, but I must say I didn't expect that.

So what's the conclusion?

In this particular case, WASM is faster, and it looks like the difference in performance scales linearly with the number of paths. Indeed, we can see that WASM was able to consistently perform at peak performance, whereas JS could not. It is important to highlight that this holds true on my machine, with my configuration, but could be different on other machines.

Here are a few interesting things to try if you are interested in the quant finance side of things:

  1. Increasing the volatility and time to expiry to really high numbers.
  2. Reducing the number of steps to a really small number.
  3. Reducing the volatility to a really small number.
  4. A combination of the above!

What results did you see on your machine? Let me know! I do expect the numbers to change between different browser engines, CPU architectures and operating systems.

And if you made it this far - thank you 🙏