If you've been following the progress of OpenAI, the company run by Sam Altman whose neural networks can now write original text and create original images with astonishing ease and speed, you can just skip this part.
If, on the other hand, you've only paid vague attention to the company's progress and to the traction that other so-called "generative" AI companies are suddenly gaining, and you want to better understand why, you might benefit from this interview with James Currier, a five-time founder and now venture investor who co-founded the firm NFX five years ago with several of his serial-founder friends.
Currier falls into the camp of people who monitor that progress closely, so closely that NFX has made numerous related investments in "generative tech," as he describes it, and it's attracting more of the team's attention every month. In fact, Currier thinks the buzz around this new wrinkle on AI isn't so much hype as a recognition that the wider startup world is suddenly seeing a very big opportunity for the first time in a long time. "Every 14 years," Currier says, "we get one of these Cambrian explosions. We had one on the internet in '94. We had one in 2008 around cell phones. Now we'll have another in 2022."
In retrospect, I wish I had asked sharper questions, but I'm also learning here. Excerpts from our chat follow, edited for length and clarity. You can listen to our longer conversation here.
TC: There’s a lot of confusion about generative AI, including exactly how new it is, or whether it’s just become the latest buzzword.
JC: I think what happened in the AI world broadly is that we felt we could have deterministic AI, which would help us identify the truth of something. For example, is that a broken part on the production line? Is that an appropriate meeting to have? It's where you use AI to determine something the same way a human would determine it. That's largely what AI has been for the last 10 to 15 years.
The other set of algorithms in AI were more like diffusion algorithms, which were meant to look at massive amounts of content and then generate something new from it, saying, 'Here are 10,000 examples. Can we create the 10,001st example that is similar?'
They were pretty fragile, pretty brittle, until about a year and a half ago. [Now] the algorithms have gotten better, but more importantly, the content we've been training them on has gotten bigger because we just have more processing power. So what has happened is that these algorithms ride Moore's law – [with vastly improved] storage, bandwidth, computation speed – and have suddenly become able to produce something very similar to what a human would produce. That means the face value of the text they write and the face value of the drawings they draw is very similar to what a human would do. And all of that has happened in the past two years. So it's not a new idea, but it has newly crossed that threshold. That's why everyone looks at this and says, "Wow, that's magic."
So it was computing power that suddenly changed the game, not a previously missing piece of tech infrastructure?
It didn't change suddenly; it changed gradually, until the quality of the generation got to the point where it was meaningful to us. So the answer is generally no, the algorithms are very similar. The diffusion algorithms have gotten a little better, but really, it's about processing power. Then, about two years ago, [the powerful language model] GPT came out, which was an on-premises kind of computation, and then GPT-3 came out, where [the AI company OpenAI] would do [the computation] for you in the cloud; because the data models were so much bigger, they had to do it on their own servers. You just can't afford to do it [on your own]. And that's when things really took off.
We know because we've invested in a company doing AI-based generative games, including "AI Dungeon," and I think the vast majority of all GPT-3 computation was coming through "AI Dungeon" at one point.
Does "AI Dungeon" maybe need a smaller team than another game maker would?
That's one of the big advantages, absolutely. They don't have to spend all that money to house all that data, and a small group of people can produce dozens of gaming experiences that all benefit from it. [In fact] the idea is that you're going to add generative AI to old games so that your non-player characters can actually say something more interesting than they do today, though you'll also get fundamentally different gaming experiences coming out of AI in gaming, versus AI being added to existing games.
So a big change is in the quality? Will this technology reach a plateau at some point?
No, it will keep getting better, step by step. It's just that the increments of improvement will get smaller over time, because the models are already getting pretty good.
But the other big change is that OpenAI wasn't really open. They made this amazing thing, but then it wasn't open, and it was very expensive. So groups like Stability AI and others got together and said, 'Let's just make open-source versions of this.' And at that point, the costs dropped 100x, just in the last two or three months.
These aren't offshoots of OpenAI?
Not all of this generative tech will be built on the OpenAI GPT-3 model; that was just the first one. The open-source community has now replicated a lot of their work, and they're probably six or eight months behind in terms of quality. But it will come. And because the open-source versions are one-third or one-fifth or one-twentieth the cost of OpenAI, you're going to see a lot of price competition, and you're going to see a proliferation of these models competing with OpenAI. You'll probably end up with five, or six, or eight, or maybe 100 of them.
On top of that, unique AI models get built. So you might have an AI model that really focuses on how to create poetry, or AI models that really focus on how to create visual images of dogs and dog hair, or one that really specializes in writing sales emails. You get a whole layer of these specialized AI models that are purpose-built. Then on top of that, you have all the generative tech, which is: how do you get people to use the product? How do you get people to pay for the product? How do you make sure people log in? How do you get people to share it? How do you create network effects?
Who makes money here?
The application layer, where people go after distribution and network effects, is where the money is going to be made.
What about large companies that can integrate this technology into their networks? Won't it be very difficult for a company that doesn't have that advantage to come out of nowhere and make money?
I think you're looking at something like Twitch, where YouTube could have integrated that into its model but didn't. Twitch created a new platform, a valuable new piece of culture, and value for its investors and founders, even though it was difficult. So you'll have great founders who use this technology to give themselves an advantage, and that creates a seam in the market. And while the big guys are doing other things, those founders can build multi-billion-dollar companies.
The New York Times ran a piece recently featuring a handful of creatives who said the generative AI apps they use in their respective fields are just tools in a broader toolbox. Are these people being naive? Are they at risk of being replaced by this technology? As you mentioned, the team working on "AI Dungeon" is smaller. That's good for the company, but potentially bad for the developers who would otherwise have worked on the game.
I think with most technologies there's some kind of uneasiness people feel about, [for example,] robots replacing jobs at a car factory. When the internet came along, a lot of people working in direct mail felt threatened that companies would be able to sell directly and stop using their paper advertising services. But [after] they embraced digital marketing, or digital communication via email, they probably had huge career bumps; their productivity went up, and the speed and efficiency went up. The same thing happened with credit cards online. We didn't feel comfortable putting credit cards online until maybe 2002, but those who embraced [that wave from] 2000 to 2003 fared better.
I think that's what's happening now. The writers, designers, and architects who think ahead and embrace these tools to give themselves a 2x or 3x or 5x boost in productivity will do incredibly well. I think the whole world will see a productivity increase over the next 10 years. It's a huge opportunity for 90% of people to just do more, be more, create more, connect more.
Do you think it was a mistake for OpenAI not to [open source] what it was building, given what has sprung up around it?
The leader ends up behaving differently from the followers. I don't know; I'm not inside the company, so I don't really know. What I do know is that there's going to be a large ecosystem of AI models, and it's not clear to me how an AI model stays differentiated, since they're all asymptotically approaching the same quality and it just becomes a pricing game. It seems to me the ones who win are Google Cloud and AWS, because we're all going to be generating stuff like crazy.
It could be that OpenAI eventually moves up or down the stack. Maybe it becomes an AWS itself, or maybe it starts making specialized AIs that it sells into certain verticals. I think everyone in this space has a chance to do well if they navigate it well; they'll just have to be smart about it.
NFX has a lot more about generative AI on its site that's worth reading, by the way; you can find it here.