Why Dust?


In this article, I'll lay out the core motivations behind building Dust, how we believe the generative AI ecosystem will evolve, and why we are building products at the intersection of models, humans, and productivity (instead of training models, despite all the investor market pull to do so).

There are numerous parallels between scientific pursuits and startup building. Both are about forming hypotheses and verifying them. That's why I'll share the major hypotheses we've developed to date, when they were made, and whether they've been validated or not.

Product locks and behavioral shifts

The main hypothesis that motivated my departure from OpenAI and the proto-incubation of Dust 18 months ago (Dust was incorporated much later, in February 2023) is simple:

Hypothesis 1 [Sep 2022, status: VERIFIED]
Models are already extremely powerful and potentially economically valuable, yet they remain severely under-deployed. While research remains critical to the deployment of AI, massive locks at the product layer are preventing their adoption.

The overall goal was (and remains): let's find these locks and knock them wide open.

This was pre-ChatGPT, and ChatGPT's launch confirmed the hypothesis emphatically. What was ChatGPT in retrospect? Mostly two things:

  • Aligned models that follow instructions (we had already had that for months, if not years; text-davinci-002 being the prototypical model).
  • A nice product with minimal friction: a decent UI and free access.

There was no substantial difference between the models powering the first versions of ChatGPT and the models available through OpenAI's API at the time. This highlights the missing ingredient: products enabling individuals to harness these models.

When it comes to team productivity (vs. individual productivity), we are still where we were in the pre-ChatGPT days. We are on the verge of a massive shift in which team workflows and productivity will be reinvented. This is Dust's "raison d'être".

Hypothesis 2 [Feb 2023, status: PENDING VERIFICATION]
Models are powerful enough for team productivity use-cases. Just as ChatGPT unlocked consumer adoption, the use of generative AI in the workplace will be unlocked by building the required product layer at the interface of humans, company data, and existing models.

If we assume Hypothesis 2, then the fastest way to unlock massive value for the enterprise is to focus on product, not on building models. Beyond our initial conviction, customer feedback gathered to date has reinforced our belief: customers do not care about implementation details such as model training techniques or which particular model powers a feature. Humans don't want models; they want to solve problems and reduce friction. Models packaged in effective ways can do that for them.

That's our core focus.

No PMF, no GPUs

While training models is flashy and exciting, and attracts visibility and capital, under Hypothesis 2 it appears to be a dangerous distraction for a startup. Training models (in the broad sense of fine-tuning or aligning) is trajectory-defining for pre-PMF companies. I fine-tuned 10k+ models while at OpenAI, using probably in excess of 10M A100-hours. Fine-tuning models is complicated. Fine-tuning models is more complicated than pre-training them. We don't understand fine-tuning as well as we understand pre-training. We understand fine-tuning so poorly that we don't even know how to do online learning.

Progress in that space is achieved through experimentation, trial and error. This is research. And startups are not meant to do research.

Hypothesis 3 [Feb 2023, status: PARTIALLY VERIFIED]
It is detrimental to train models as an early-stage startup. Focusing on product is the most direct way to shift behaviors and create early value. Only after scale is achieved should a startup integrate vertically and train its own models.

We introduced that motto early on: No PMF, no GPUs.

It was a clear fork in the road. We could have gone down the route of training our own models. We had the skills and the investor appetite to do so. We saw other talented founders and promising companies take that road. Because we were operating under Hypothesis 2, we purposefully decided to go down the product road instead.

We raised less money than these other startups. We have more usage and users than these other startups. We make more money than these other startups. We will likely blow these other companies out of the water the day we decide to train models, because we will have captured the interface between humans doing productive work and models, where a lot of value will accrue, and we will have the resources it really takes to do AI research.

Training a model is like building a rock that will be washed over by future generations of models. Building a great product is, on the contrary, akin to building a surfboard that can be used to ride the waves rippling out from the emergence of those same future models. Either you're going to be building the best models, or you're better off building yourself something that floats.

Cheaper tokens, yes. But better models?

Token generation is commoditised. The cost per token at iso-model-capability has decreased by two orders of magnitude since I started working on Dust in September 2022, driven by better hardware, fierce competition between model providers, large troves of capital, and OpenAI's desire to remain the category leader.

Hypothesis 4 [Jan 2024, status: PARTIALLY VERIFIED]
The iso-capability cost of token generation will continue to decrease drastically.
Hypothesis 5 [Jan 2024, status: PENDING VERIFICATION]
With each new model, the incremental value perceptible by humans will plateau. GPT-(n+1) will be hardly distinguishable from GPT-n (for rather small n, even if the loss will obviously continue to improve predictably).

This is maybe the most controversial hypothesis. One sign pointing in this direction is the fact that nothing substantially better has come out since GPT-4, which is... soon to be 2 years old. In fact, the trend has been the opposite: getting smaller models to perform equivalently. We desperately hope that future models will be much better, because that would be an incredibly strong tailwind for us. Stating this hypothesis is a way to acknowledge that we are also preparing ourselves for a future where we're stuck with current model capabilities for some time.

If that future were to materialize, the commoditisation of token generation would be even more drastic, driving its cost to virtually zero in a matter of single-digit years. This is all great for us: it means we can always focus on quality over cost. There is no limit to what we can do with a model for our users. Even if a single interaction costs $10 today, if it creates value we will build it, because that cost will plummet. When you train models, you sit on the other side of that prism. This reinforces Hypothesis 3 even further.
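To make the arithmetic concrete, here is a minimal back-of-the-envelope sketch in Python. The numbers are illustrative assumptions derived from the figures above (two orders of magnitude over roughly two years, a $10 interaction), not measured data:

```python
# Back-of-the-envelope sketch of Hypotheses 4/5 (illustrative assumptions only).
# If iso-capability token costs fell ~100x between Sep 2022 and today
# (~2 years), that is roughly a 10x decline per year.

def projected_cost(cost_today: float, yearly_decline: float, years: int) -> float:
    """Cost of the same interaction after `years` of compounding decline."""
    return cost_today / (yearly_decline ** years)

cost_today = 10.0      # assumed cost of one expensive interaction, in dollars
yearly_decline = 10.0  # assumed iso-capability cost decline factor per year

for years in range(4):
    print(f"year +{years}: ${projected_cost(cost_today, yearly_decline, years):.2f}")
# year +0: $10.00 | year +1: $1.00 | year +2: $0.10 | year +3: $0.01
# Under these assumptions, an interaction that looks expensive today rounds
# to zero within single-digit years -- hence quality over cost.
```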

Horizontal, not vertical

We're making the risky bet of building a horizontal product that follows its users wherever they work. It's risky because cemeteries are full of horizontal products: the value proposition is harder to explain, the blank-page syndrome is a blocker to activation, and so on. Yet we're making that bet.

Hypothesis 6 [March 2024, status: PENDING VERIFICATION]
There is space for, and tremendous value to be captured by, a horizontal AI assistant platform.

So far we have enough confirmation of this hypothesis to continue in that direction. In particular:

  • Assistants built on Dust have been able to outperform some vertical-specific assistants. How? Dust assistants are powered by your company's entire data and aren't subject to information silos.
  • The most valuable enterprise workflows are fundamentally cross-application (e.g. take information from an email in Gmail and update Salesforce). Systems of record are deeply entrenched in the enterprise. LLMs are unique in that they can circumvent these walled gardens to power cross-application workflows (e.g. take information from a bunch of Notion pages, process it, and update a Google Sheet). Supporting workflows that aren't app-specific enables us to truly supercharge knowledge workers.
  • While effective, vertical products ignore the fact that knowledge workers do not operate in a vacuum. Workplaces are fundamentally multi-player and need to accommodate a diverse set of stakeholders. Generative AI in the workplace is fundamentally a multi-player problem.

We'll build AGI

AGI may pop up, in fast take-off mode, from anywhere. But in light of the hypotheses made above, taking into account the limits of pure scale, and believing that we're only scratching the surface of what we can do with current models through more advanced outer-model orchestration, it becomes sensible to bet that transformative AI will happen at the interface with humans, not ex nihilo.

Hypothesis 7 [Jan 2024, status: PENDING VERIFICATION]
The most advanced AI capabilities will emerge at the interface with humans.

We strongly believe in augmenting humans, not replacing them. Additionally, we are convinced that pairing closely with humans is how we'll build systems that are vastly more efficient and intelligent than humans or machines alone. If that's true, then we'll get to build AGI.

In 2022, I bet that the first billion-dollar solo company (one human at the helm) had already been started (yet to be verified) and that its emergence would be the leading indicator for AGI. Dust will act as that company's backbone. And that's why your company should adopt it too.