Apple, Nvidia

There were 2 specific announcements in AI that really caught my eye this week:

  1. NVIDIA announced that it’s entering the LLM race
  2. OpenAI announced plugins for ChatGPT

These announcements also prompted me to think about Apple, which has been very quiet on the AI front. Despite their silence, I believe they have big things brewing.

Here are my thoughts on these three things:

NVIDIA has entered the party

NVIDIA announced the following plans for LLMs this week.

  • Foundation model as a service: imagine ChatGPT, but built specifically for your company. The LLM is trained only on your company’s data, and questions and responses are stored on your servers and never leave your database. This is a big deal for enterprises that may not want their data entering a general-purpose LLM like GPT-4.
  • Multi-modal: The LLM will be able to take text, images, videos and 3D models as inputs. The variety of inputs could be a big deal. Imagine manufacturers who have a tonne of information in the form of drawings.
  • Focus on biology: They plan to focus heavily on drug discovery and life sciences research. Life sciences is a great example of an industry that will benefit from a custom model tuned for a specific use case. In fact, NVIDIA plans to do exactly this by leveraging AlphaFold (an open-source model developed by DeepMind that can predict the structure of proteins).

NVIDIA is first and foremost a hardware company. In the world of AI, they are known for making GPUs (graphics processing units). These devices were originally built for gaming, but people quickly realised they were incredible computing machines. They’ve been used for everything from crypto mining to training large language models. To give you some sense of how popular NVIDIA chips are, see the graph below on chip usage in AI research (taken from the excellent 2022 State of AI Report):


I’m always intrigued when a company chooses to move up the stack. NVIDIA now has the hardware and the software, which means they are directly competing with the companies they supply chips to (OpenAI uses NVIDIA’s chips for its own models). If there is ever a shortage of GPUs, NVIDIA will be in a very powerful position.

Perhaps the foundation model companies don’t need to worry that much. Last year, NVIDIA made $27 billion in GPU sales, and they aren’t going to do anything that jeopardises this revenue stream. They’re likely betting that some subset of companies (e.g. in life sciences) will want to train their own models. And if they do, NVIDIA can now say: we’ll give you everything you need, both the underlying model and the compute required to train it.


ChatGPT Plugins

OpenAI announced plugins for ChatGPT this week. The objective of plugins is to enable three things:

  • Access to real-time info: get information on weather, traffic or flights
  • Access to internal knowledge: answer questions using books or academic papers
  • Ability to take actions: reserve a table at a restaurant or book a flight ticket

Here are some of the plugins from OpenAI’s announcement:


This launch has the potential to become as big as the App Store. A quick example from the OpenAI demo to illustrate: imagine you want to do meal planning for the week. The OpenAI team responds to this request using three plugins: OpenTable (to make reservations), Instacart (to order groceries) and WolframAlpha (to calculate calories).

Plugins will unlock use cases that did not exist before. If you’re interested, I highly recommend going through the announcement from OpenAI to see what plugins might unlock.
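To give a feel for how a plugin is wired up: a developer hosts a small manifest file describing the plugin, plus an OpenAPI spec for its endpoints, and ChatGPT reads the model-facing description to decide when to call it. Here’s a sketch of such a manifest; the names and URLs are illustrative, and the exact schema is defined in OpenAI’s plugin documentation, so treat this as an approximation rather than a reference:

```json
{
  "schema_version": "v1",
  "name_for_human": "Table Booker",
  "name_for_model": "table_booker",
  "description_for_human": "Find and reserve restaurant tables.",
  "description_for_model": "Use this when the user wants to find or book a restaurant table.",
  "auth": { "type": "none" },
  "api": {
    "type": "openapi",
    "url": "https://example.com/openapi.yaml"
  },
  "logo_url": "https://example.com/logo.png",
  "contact_email": "support@example.com",
  "legal_info_url": "https://example.com/legal"
}
```

Note that `description_for_model` is what ChatGPT uses to decide whether a plugin is relevant to a given question, which matters for the plugin-selection dynamic discussed below.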

To use a plugin, users must first activate it within ChatGPT. This is an important point for plugin developers: you still need to acquire each customer.

We will likely see a dynamic similar to mobile app stores: everyone installs lots and lots of apps, discovers they use only a few, and either deletes the others or lets them sit idle.

In the case of ChatGPT, this dynamic might be worse. Every time you enter a question, ChatGPT considers whether it should use one of your activated plugins. I can imagine it being quite annoying if it pulls in a plugin when you don’t want it to. If so, users will want to limit the set of plugins they activate, which makes customer acquisition even harder.

Another way to think about plugins is public vs. private data. Plugins are really only suited to public data, because anyone can install a plugin and access the data behind it. Tools that help businesses with private data will continue to use APIs to achieve what they need.

Apple: the sleeping giant

Apple hasn’t really made any big AI announcements to date, while Google, Meta and Microsoft all have. I think Apple is the sleeping giant in the AI race. Here’s why.

Let’s look back at the iPhone:


BlackBerry and Nokia were the leaders in the smartphone market. They did a lot of work to educate the market on what a mobile phone was capable of doing. Then Apple launched the iPhone, and the rest is history. The product was 10x better, and we might see the same dynamic play out with AI.

Today, when you use ChatGPT, your input is sent to an LLM hosted in the cloud. This means that your question and the response are stored by OpenAI in their data centres. Now, imagine all of this magic happening on your phone: the experience is exactly the same as ChatGPT, but your question and the response never leave your device. Apple has always marketed themselves as privacy-first and could make this happen.

You need ridiculously good hardware to run LLMs locally. I’m writing this essay on a MacBook Pro that uses a processor designed by Apple (the M1), and I can tell you this laptop is simply incredible: it already has the power to run an LLM locally. Apple didn’t always make their own chips. Previously, they used Intel and NVIDIA parts, but for the last decade or so, they’ve been investing heavily in chip design.

The real magic happens when we get this running on the iPhone. And you know what? From a technological point of view, I don’t think we’re far away at all. Alpaca (an LLM from Stanford, fine-tuned from Meta’s LLaMA model) is openly available, and one developer took it, optimised it and is running it locally on their iPhone. The demo is a bit slow and crude, and there is a lot more to do. But when this does happen, you can have your own LLM, the data never leaves your phone, and you don’t need the internet for ChatGPT-like capabilities.
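As a rough sanity check on feasibility, here’s some back-of-the-envelope arithmetic (mine, not from the demo) on whether a 7-billion-parameter model’s weights fit in phone memory, assuming the 4-bit quantization these optimised ports typically use:

```python
# Rough memory estimate for storing a 7B-parameter model's weights.
# Ignores activation memory, the KV cache and runtime overhead.
params = 7_000_000_000

bytes_fp16 = params * 2    # 16-bit floats: 2 bytes per weight
bytes_q4 = params * 0.5    # 4-bit quantized: half a byte per weight

print(f"fp16:  {bytes_fp16 / 1e9:.1f} GB")  # 14.0 GB -> too big for a phone
print(f"4-bit: {bytes_q4 / 1e9:.1f} GB")    # 3.5 GB -> fits in ~6 GB of iPhone RAM
```

So quantization alone takes a 7B model from clearly impossible to merely tight on a recent iPhone, which is why these crude-but-working demos are appearing now.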

So, your move, Apple. We’re all watching.