Unit economics in AI

When I think of exciting technology innovations, AI is the first that comes to mind.

Last week, I wrote about defensibility in AI applications. In doing so, I became interested in a different topic: what is the cost of developing and running AI? Will it fundamentally change the margin profile of software businesses?

Margins are always important. The last few years have made us all comfortable, but get ready to get uncomfortable. According to this report by Battery Ventures, companies with strong growth commanded a 25% valuation premium last year; now the fastest-growing companies are being penalised with a 40% discount.

[Chart: valuation premium vs. discount for high-growth software companies, via Battery Ventures]

SaaS margins

Investors love software as a service because revenue is recurring and margins are high. For example, the top 10 publicly listed enterprise SaaS companies have a median gross margin of 74%.

Gross margin is revenue minus the cost of producing your product (COGS, or cost of goods sold), expressed as a percentage of revenue. In SaaS, COGS typically consists of the following:

  1. Hosting costs (e.g. AWS)
  2. Costs for your technical support team
  3. Costs for your DevOps team
  4. Any costs to implement your software
  5. Customer success, under certain circumstances
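
To make this concrete, here is a quick back-of-envelope sketch. Every revenue and cost figure is made up purely to illustrate the mechanics: the same business at the same revenue, with hosting costs inflated to stand in for GPU-heavy AI infrastructure.

```python
# Illustrative gross margin calculation; all figures are hypothetical.

def gross_margin(revenue, cogs):
    """Gross margin as a fraction of revenue."""
    return (revenue - cogs) / revenue

revenue = 10_000_000  # annual revenue ($)

# Hypothetical COGS breakdown for a traditional SaaS business
traditional_cogs = {
    "hosting": 800_000,
    "technical_support": 700_000,
    "devops": 500_000,
    "implementation": 300_000,
    "customer_success": 300_000,
}

# Same business, but inference runs on GPUs, so hosting costs are far higher
ai_first_cogs = dict(traditional_cogs, hosting=3_000_000)

print(f"Traditional SaaS gross margin: {gross_margin(revenue, sum(traditional_cogs.values())):.0%}")  # 74%
print(f"AI-first SaaS gross margin: {gross_margin(revenue, sum(ai_first_cogs.values())):.0%}")        # 52%
```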

What to include under COGS is heavily debated, but that’s a topic for another day. Here, we’re going to talk about the cost of developing and operating AI, and whether it is going to change the margin profile of software businesses.

I’m going to focus on “AI-first” software products: products that require AI to deliver most of their value. Software products that offer an AI feature, but where AI is not the core of the product, are out of scope. Copy.AI is included because you need AI every single time you use the product. Tableau, despite incorporating some AI, is not included because it is primarily an analytics platform.

The cost of AI

At the risk of over-simplifying, there are two stages to every AI model. First, you train the model on sample data; this process is unsurprisingly called “training”. Second, you apply the trained model to new data to produce outputs; this process is called “inference”. The work doesn’t stop at inference, either: models need to be continuously monitored and tuned. Otherwise, you might end up losing millions like Zillow did (their house price prediction model overestimated housing prices).
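
To make the two stages concrete, here is a toy sketch in Python. It uses scikit-learn purely for illustration; AI-first products train and serve far larger models, typically on GPUs.

```python
# Toy illustration of the two stages of an AI model: training vs. inference.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Training: fit the model on labelled sample data (the expensive, up-front step)
X_train = np.random.rand(1000, 20)           # 1,000 samples, 20 features
y_train = (X_train[:, 0] > 0.5).astype(int)  # toy labels
model = LogisticRegression().fit(X_train, y_train)

# Inference: apply the trained model to new data, every time a user makes a request
X_new = np.random.rand(5, 20)
print(model.predict(X_new))

# In production the work doesn't stop here: inputs and predictions need to be
# monitored for drift, and the model periodically retrained (see the Zillow example).
```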

Let’s look at how the cost of training and inference has changed.

Training costs have fallen dramatically.

ImageNet is a publicly available database of over 14 million images across 20,000 categories, widely used by researchers working on image classification problems. The cost to train an image classification model on ImageNet has fallen dramatically over the past few years.

We’ve also seen this play out in practice: DeepMind allegedly spent $35 million training AlphaGo, OpenAI spent $12 million on a single GPT-3 training run and Stability AI spent $600K training its Stable Diffusion model.

Inference costs have also fallen dramatically.

The cost to perform inference on a billion images has fallen, as the graph below shows.

[Chart: cost to run inference on a billion images over time]

The cost of AI vs. SaaS

The cost of training and inference has fallen, but AI-first SaaS companies have lower margins compared to traditional SaaS companies.

AI requires far more processing power than traditional software. Training and inference typically run on GPUs (graphics processing units), whereas traditional software runs mostly on CPUs (central processing units).

Comparison of cloud computing costs for CPUs

[Table: cloud CPU instance pricing across providers]

Comparison of cloud computing costs for GPUs

[Table: cloud GPU instance pricing across providers]

The choice of CPU or GPU depends on your application. But the tables above show that GPU instances cost roughly 10x more than CPU instances. Whatever method you use to enable AI in your application, it’s going to be more expensive.

Consider a product that helps people generate images for social media using AI. You have three options:

  1. Use an API like OpenAI’s DALL-E (sketched below)
  2. Host your own model, leveraging open-source software like Stable Diffusion
  3. Build your own AI model
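
For a sense of what option 1 looks like, here is a minimal sketch using the OpenAI Python client. Model names, parameters and pricing are the provider’s to define, so treat this as illustrative rather than a recipe.

```python
# Minimal sketch of option 1: call a hosted image-generation API.
# Assumes the OpenAI Python client; check the provider's docs for current models and pricing.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.images.generate(
    prompt="A minimalist illustration of a rocket for a social media post",
    n=1,
    size="1024x1024",
)
print(response.data[0].url)  # you pay the provider for every image generated
```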

Unless your software product targets a very specific niche, (3) is unlikely to make sense. The cost to train a model from scratch is simply not worth it. You are better off leveraging an existing model and fine-tuning it for your use case.

If you host your own model (option 2), you will incur the GPU costs referenced in the tables above. If you use an API (option 1), you will incur a higher cost in steady state, because the API provider needs to make a margin. The advantage of the API is that it requires less maintenance and is easier to set up.
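
A rough way to frame that trade-off is cost per generated image. The sketch below is a back-of-envelope model; every number in it is a placeholder, so substitute the provider’s actual price list, your cloud’s GPU rates and your measured throughput.

```python
# Back-of-envelope unit economics; every figure here is a hypothetical placeholder.

# Option 1: pay the API provider per image
api_price_per_image = 0.02   # $ per image (check the provider's price list)

# Option 2: self-host on a cloud GPU instance
gpu_hourly_rate = 1.50       # $ per GPU-hour (varies widely by instance type)
images_per_hour = 300        # throughput of your model on that GPU at full load
utilisation = 0.4            # share of each hour the GPU is actually busy

self_hosted_per_image = gpu_hourly_rate / (images_per_hour * utilisation)

print(f"API:         ${api_price_per_image:.3f} per image")
print(f"Self-hosted: ${self_hosted_per_image:.3f} per image")

# At low volume or low utilisation the API usually wins; at high, steady volume
# the self-hosted cost per image falls and hosting your own model starts to pay off.
```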

Today, infrastructure costs for AI-first SaaS applications are substantially higher than for traditional SaaS applications. I’ve seen this play out anecdotally as well:

Pieter Levels launched a project for AI-generated avatars. It’s selling like wildfire, but the costs of running the product are substantial.

How will the cost of GPUs change?

According to Moore’s law, the number of transistors on a microchip doubles roughly every two years. For decades, people have used this to argue that compute will keep getting cheaper. And it has.

[Chart: the falling cost of compute over time]

The price of compute depends on the cost of the hardware and the cost of operating that hardware. The price of physical GPUs has fallen recently because demand has fallen: crypto mining relies on GPUs, and when crypto prices crashed and Ethereum moved to proof-of-stake, demand for mining equipment fell. Meanwhile, the cost of operating GPUs has increased because of rising energy prices.

On the supply side, prices aren’t following the same trajectory. Nvidia, the leader in GPUs, said a month ago that Moore’s law is dead and that people should expect GPU prices to go up. On the demand side, models will only become more complex and data-hungry over time, which means more computing power is required to run them.

If we look past the short-term fluctuations, compute will become cheaper. The real question is: will it become cheap enough to offset growing demand and model complexity?

To close

AI is one of the most exciting technology trends at the moment. The cost of training and deploying AI models has fallen dramatically. However, the cost of running AI is still substantially higher than the cost of running a traditional software business. Over time, we should expect the cost of operating AI to come down, but be prepared for that to take time. If you’re building an AI-first business, factor the cost of operation into your pricing, at least for the near term.