For the first episode of the Newcomer podcast, I sat down with Reid Hoffman — the PayPal mafia member, LinkedIn co-founder, Greylock partner, and Microsoft board member. He had just stepped off OpenAI’s board of directors. Hoffman traced his interest in artificial intelligence back to a conversation with Elon Musk.
“This kicked off, actually, in fact, with a dinner with Elon Musk years ago,” Hoffman said.
In conversations about a decade ago, Musk told Hoffman that he needed to dive into artificial intelligence. “This is part of how I operate,” Hoffman remembers. “Smart people from my network tell me things, and I go and do things. And so I dug into it and I’m like, ‘Oh, yes, we have another wave coming.’”
This episode of Newcomer is brought to you by Vanta
Security is no longer a cost center — it’s a strategic growth engine that sets your business apart. That means it’s more important than ever to prove you handle customer data with the utmost integrity.
But demonstrating your security and compliance can be time-consuming, tedious, and expensive. Until you use Vanta.
Vanta’s enterprise-ready Trust Management Platform empowers you to:
* Centralize and scale your security program
* Automate compliance for the most sought-after frameworks, including SOC 2, ISO 27001, and GDPR
* Earn and maintain the trust of customers and vendors alike
With Vanta, you can save up to 400 hours and 85% of costs. Win more deals and enable growth quickly, easily, and without breaking the bank.
For a limited time, Newcomer listeners get $1,000 off Vanta. Go to vanta.com/newcomer to get started.
Why I Wanted to Talk to Reid Hoffman & What I Took Away
Hoffman is a social network personified. Even his journey to something as wonky as artificial intelligence is told through his connections with people. In a world of algorithms and code, Hoffman is upfront about the extent to which human connections decide Silicon Valley’s trajectory. (Of course they are paired with profound technological developments that are far larger than any one person or network.)
When it comes to the rapidly developing future powered by large language models, a big question in my mind is who exactly decides how these language models work?
Sydney appeared in Microsoft Bing and then disappeared. Microsoft executives can dispatch our favorite hallucinations without public input. Meanwhile, masses of images can be gobbled up without asking their creators and then the resulting image generation tools can be open-sourced to the world.
It feels like AI superpowers come and go with little notice.
It’s a world full of contradictions. There’s constant talk of utopias and dystopias and yet startups are raising conventional venture capital financing.
The most prominent player in artificial intelligence — OpenAI — is a non-profit that raised money from Tiger Global. It celebrates its openness in its name and yet competes with companies whose technology is actually open-sourced. OpenAI’s governance structure and priorities largely remain a mystery.
Finally, unlike tech’s conservative billionaires who throw their money into politics, Hoffman is a tech overlord I seem to mostly agree with politically. I wanted to know what that would be like. Is it just good marketing? And where exactly are his heart and political head at right now?
I thought he delivered. I didn’t feel like he was dodging my questions, even in a world where maintaining such a wide network requires diplomacy. Hoffman seemed eager and open — even if he started to bristle at what he called my “edgy words.”
Some Favorite Quotes
We covered a lot of ground in our conversation. We talked about AI sentience and humans’ failures to identify consciousness within non-human beings. We talked about the coming rise in AI cloud compute spending and how Microsoft, Google, and Amazon are positioned in the AI race.
Hoffman said he had one major condition for getting involved in OpenAI back in the early days when Musk was still on board.
“My price for participation was to ask Elon to stop saying the word ‘robocalypse,’” Hoffman told me. “Because I thought that the problem was it’s very catchy and it evokes fear.”
I asked Hoffman why he thought Musk got involved in artificial intelligence in the first place when Musk seems so worried about how it might develop.
Why get the ball rolling down the hill at all, I wondered?
Hoffman replied that many people in the field of artificial intelligence had “messiah complexes.”
“It’s the ‘I am the one who must bring this’ — Prometheus, the fire to humanity,” Hoffman said. “And you’re like, ‘Okay, I kind of think it should be us versus an individual.’”
He went on, “Now, us can’t be 8 billion people — us is a small group. But I think, more or less, you see the folks who are steering with a moral compass try to say, how do I get at least 10 to 15 people beyond myself with their hands on the steering wheel in deep conversations in order to make sure you get there? And then let’s make sure that we’re having the conversations with the right communities.”
I raised the possibility that this merely suggested oligarchic control of artificial intelligence rather than dictatorial control.
We also discussed Hoffman’s politics, including his thoughts on Joe Biden and “woke” politics. I asked him about the state of his friendship with fellow PayPal mafia member Peter Thiel.
“I basically am sympathetic to people as long as they are legitimately and earnestly committed to the dialogue and discussion of truth between them and not committed otherwise,” Hoffman said. “There are folks from the PayPal years that I don’t really spend much time talking to. There are others that I do continue because that conversation about discovering who we are and who we should be is really important. And you can’t allow your own position to be the definer.”
I suggested that Thiel’s public views sometimes seemed insincere.
“Oh, that’s totally corrosive,” Hoffman said. “And as much as that’s happening, it’s terrible. And that’s one of the things that in conversations I have, I push people, including Peter, on a lot.”
Give it a listen.
Find the Podcast
Read the Transcript
Eric: Reid, thank you so much for coming on the show. I'm very excited for this conversation. You know, I'm getting ready for my own AI conference at the end of this month, so hopefully this is sort of a prep — by the end of this conversation, we'll all be super smart and ready for that. I feel like there've been so many rounds of AI as sort of the buzzword of the day.
This clearly seems the hottest. When did you get into this moment of it? I mean, obviously you just stepped off the OpenAI board. You were on that board. Like, how, when did you start to see this movement that we're experiencing right now coming?
Reid: Well, it's funny because my undergraduate major was artificial intelligence and cognitive science. So I've, I've been around the hoop for multiple waves for a long time and I think this kicked off actually, in fact, with a dinner with Elon Musk years ago. You know, 10-ish years ago, Elon and I would have dinner about once a quarter and he's like, well, are you paying attention to this AI stuff?
And I'm like, well, I majored in it and you know, I know about this stuff. He's like, no, you need to get back involved. And I was like, all right.
This is part of how I operate: smart people from my network tell me things, and I go and do things. And so I dug into it and I went, oh yes, we have another wave coming.
And this was probably about seven or eight years ago, when I, when I saw the beginning of the wave or the seismic event. Maybe it was a seismic event out at sea and I was like, okay, there's gonna be a tsunami here and we should start getting ready cause the tsunami is actually gonna be amazingly great and interesting.
Eric: And that—is that the beginning of OpenAI?
Reid: OpenAI is later. What I did is I went and made connections with kind of the heads of every AI lab and major company, because I concluded that the AI revolution would be primarily driven by large companies initially, because of the scale compute requirements.
And so, you know, I talked to Demis Hassabis, met Mustafa Suleyman, talked to Yann LeCun, talked to Jeff Dean, you know, all these kind of folks, and kind of, you know, built all that. And then it was later, in conversations with Sam and Elon, that I said, look, we need to do something that's a pro-humanity effort, not just a commercial effort.
And my price for participation, cause I thought it was a great idea, but my price for participation was to ask Elon to stop saying the word robocalypse. Because I thought that the problem was that it's very catchy and it evokes fear. And actually, in fact, one of the things I think about this whole area is that it's so much more interesting and has so much amazing opportunity for humanity.
A little bit like, I don't know if you saw the Atlantic article I wrote that we evolve ourselves through technology and I'm, you know, going to be doing some writings around describing AI as augmented intelligence versus artificial intelligence. And I wanted to kind of build that positive, optimistic case that I think is the higher probability that I think we can shape towards and so forth.
So it's like, okay, I'm in, but no more Robocalypse.
Eric: I appreciate that you, the ultimate sort of network person, tell the story through people. I always appreciate when the origin stories of technology actually come through the human beings. With Elon in particular, I'm sort of confused by his position, because it seems like he's very afraid of AI.
And if that's the case, why would you want to, like, do anything to sort of get the ball rolling down the hill? Like, isn't there a sort of just like, stay away from it, man, if you think it's so bad. How do you see his thinking? And I'm sure it's evolved.
Reid: Well, I think his instinct for the good and the challenging of this is he tends to think AI will only be good if "I'm the one who's in control."
Eric: Sort of, yeah.
Reid: Yeah. And this is actually somewhat replete within the modern AI field. Not everybody, but this. And Elon is a public enough figure that I think, you know, making this comment of him is not talking out of school.
For other people, it would be. There's a surprising number of Messiah complexes in the field of AI, and it's the, I am the one who must bring this, you know, Prometheus, you know, the fire to humanity. And you're like, okay, I kind of think it should be us, right? Versus an individual. Now, us can't be 8 billion people — us is a small group. But I think, more or less, you see the folks who are steering with a moral compass try to say, how do I get at least 10 to 15 people beyond myself with their hands on the steering wheel in deep conversations in order to make sure you get there, and then let's make sure that we're having the conversations with the right communities.
Like, if you say, well, is this going to, you know, institutionalize ongoing, um, you know, power structures or racial bias or something else? Well, we're talking to the people to make sure that we're going to minimize that, especially over time, and navigate it as a real issue. And so those are the — like, that's the kind of anti-Messiah complex, which is more or less the efforts that I tend to get involved in.
Eric: Right. At least sort of an oligarchy of AI control instead of just a dictatorship of it.
Reid: Well, yeah, and it depends a little bit, even on oligarchy. Look, things are built by small numbers of people. It's just a fact, right? Like, there aren't more than, you know, a couple of founders, maybe maximum five, in any particular thing. There's, you know, there's reasons why. When you have a construction project, you have a head of construction, right?
Et cetera. The important thing is to make sure — that's why you have a CEO, you have a board of directors. That's why you, you know, say, well, do we have the right thing where a person is accountable to a broader group? And that broader group takes its governance responsibility seriously.
So oligarchy is a—
Eric: a charged
Reid: is a charged word. And I,
Eric: There’s a logic to it. I'm not, I'm not using it to say it doesn't make sense that you want the people to really understand it around, around it. Um, I mean, specifically with Open AI, I mean, you, you just stepped off the board. You're also on the board of Microsoft, which is obviously a very significant player.
In this future, I mean, it's hard to be open. I get a little frustrated with the “open” in “Open AI” because I feel like there's a lot that I don't understand. I'm like, maybe they should change the name a little bit, but is it still a charity in your mind? I mean, it's obviously raised from Tiger Global, the ultimate prophet maker.
Like, how should we think about the sort of core ambitions of OpenAI?
Reid: Well, um, one, the board I was on was a fine one, and they've been very diligent about making sure that all of the controls, including for the subsidiary company, are from the 501(c)(3) and diligent to its mission, which is staffed by people on the 501(c)(3) board with the responsibilities of being on a 501(c)(3) board — which is being in service of the mission, not doing, you know, private inurement and other kinds of things.
And so I actually think it is fundamentally still a 501(c)(3). The challenge is if you kind of look at this and say, well, in order to be a successful player in modern scale AI, you need to have billions of dollars of compute. Where do you get those billions of dollars? Because, you know, the foundations and the philanthropy industry are generally speaking bad at tech, and bad at anything other than little tiny checks in tech.
And so you said, well, it's really important to do this. So part of what, I think, you know, Sam and that group of folks came up with is this kind of clever thing to say: well, look, we're about beneficial AI, we're about AI for humanity — I'll make a comment on “open” in a second — but we are gonna generate some commercially valuable things.
What if we struck a commercial deal? So you can have the commercial things or you can share the commercial things. You invest in us in order to do this, and then we make sure that the AI has the right characteristics. And then the “open”, you know, all short names have, you know, some simplicities to them.
The idea is open to the world in terms of being able to use it and benefit from it. It doesn't mean the same thing as open source, because AI is actually one of those things where, um, if you do open source, you could actually be creating something dangerous.
As a modern example, last year, OpenAI deliberately… DALL·E 2 was ready four months before it went out. I know cause I was playing with it. They did the four months to do safety training, and the kind of safety training is, well, let's make sure that individuals can't be libeled. Let's make sure, as best we can, you can't create child sexual material. Let's make sure you can't do revenge porn, and we'll serve it through the API, and we'll make it unchangeable on that.
And then the open source people come out and they go do whatever you want and then wow, you get all this crazy, terrible stuff.
So “open” is openness of availability, but still with safety and still with, kind of call it the pro-human controls. And that's part of what OpenAI means in this.
Eric: I wrote sort of a mini essay in the newsletter about, like, tech fatalism, and it fits into the sort of messiah complex that you're talking about. If I'm a young or new startup entrepreneur, it's like, this is my moment, and if I hold back, you know, there's a sense that somebody else is gonna do it too. This isn't necessarily research. Some of the tools are findable, so I need to do it.
If somebody's going to, it's easy if you're using your own personhood to say, I'm better than that guy! Even if I have questions about it, I should do it.
So that, I think we see that over and over again. Obviously the stakes with AI, I think we both agree are much larger.
On the other hand, with AI, there's actually, in my view, been a little bit more restraint. I mean, Google has been a little slower. Facebook seems a little worried, like, I don't know.
Do you agree with that sort of view of tech fatalism? Is there anything to be done about it, or is it just sort of — if it's possible, it's gonna happen, so the best guy, the best team should do it?
Or how do you think about that sense of inevitability — that if it's possible, it'll be built?
Reid: Well, one thing is you like edgy words. So what you describe as tech fatalism, I might say is something more like tech inevitability or tech destiny. And part of it is, I guess what I would say is, for example, we are now in an AI moment and era. There's global competition for it. It's scale compute.
It's not something that even somebody like a Google or someone else can kind of have any kind of real ball control on. But the way I look at it is, hey, look, there's utopic outcomes and dystopic outcomes, and it's within our control to steer it. Um, and even to steer it at speed, even under competition.
For example, obviously the general discourse within media is, oh my God, what's happening with the data and what's gonna happen with the bias and what's gonna happen with the crazy conversations, with Bing Chat and all the rest of this stuff. And you're like, well, what am I obsessed about? I'm obsessed about the fact that I have line of sight to an AI tutor and an AI doctor on every cell phone.
And think about if you delay that, whatever number of years you delay that, what your human cost is of delaying that, right? And it's like, how do we get that? And for example, people say, wow, the real issue is that Bing chat model is gonna go off the rails and have a drunken cocktail party conversation because it's provoked to do so and can't run away from the person who's provoking it.
Uh, and you say, well, is that the real issue? Or is the real issue, let's make sure that as many people as we can have access to that AI doctor, have access to that AI tutor? Because obviously technology, cause it's expensive initially, benefits elites and people who are rich.
And by the way, that's a natural way of how our capitalist system and all the rest works. But let's try to get it to everyone else as quickly as possible, right?
Eric: I a hundred percent agree with that. So I don't want any of my sort of cynical take, like, oh my God, this version…
I'd also extend it, you know — I think you're sort of referencing maybe the Sydney situation, where you have Kevin Roose in The New York Times, you know, communicating with Bing's version of ChatGPT and sort of finding this character who sort of goes by Sydney from the origin story.
And Ben Thompson sort of had a similar experience. And I would almost say it's sad for the world to be deprived of that too. You know, there's like a certain paranoia, and it's like, oh, I wanna meet this sort of seemingly intelligent character. I don't know. What do you make of that whole episode? I mean, people really — I mean, Ben Thompson, smart tech writers, really latched onto this as something that they found moving.
I don't know. Is there anything you take away from that saga, and do you think we'll see those sort of, I don't know, intelligent characters again?
Reid: Well, for sure. I think 2023 will be at least the first year of the so-called chatbot, not just because of ChatGPT. And I think that we will have a bunch of different chatbots. I think we'll have chatbots that are there to be, you know, entertainment companions, witty dialogue participants.
I think we'll have chatbots that are there to be information — like insta-Wikipedia kind of things. I think we'll have chatbots that are there to just have someone to talk to. So I think there'll be a whole, whole range of things. And I think we will have all that experience.
And I think part of the thing is to say, look, what are the parameters by which you should say the bots should absolutely not do X. And it's fine if these people want a bot that's like, you know, smack talking and these people want something that you know, goes, oh heck. Right?
You know, like, what's the range of that? And obviously children get in the mix, and the questions around things that we already encounter a lot with search, which is, like, could a chatbot enable self-harm in a way that would be really bad?
Let's really try to make sure that someone who's depressed doesn't figure out a way to harm themselves either with search or with chat bots.
Eric: Is there a psychologically persuasive element? So it's not just the information provided, it's the sense that they might be, like, walking you towards something less serious.
Reid: And they are! This is the thing that's amazing, and it's part of the reason why, like, everyone should have some interaction with these in some emotional, tangible way. We are really passing the Turing test. This is the thing that I had visibility on a few years ago, because I was like, okay, we kind of judge, you know, intelligence and sentience like that — that Google engineer, like, he asked if it was conscious and it said it was, because we use language as a way of doing that. And you're like, well, but look, that tells you that your language use is not quite fully there. And part of what's really amazing about “hallucinations” — and I'm probably gonna do a fireside chat with the Greymatter thing on hallucinations, maybe later this week — where the hallucination is, on one hand it says this amazingly accurate, wonderful thing, very persuasively, and then it says this other thing really persuasively that's total fiction, right? And you're like, wow, you sound very persuasive in both cases. But that one's true and that one's fiction.
And that's part of the reason why I kind of go back to the augmented intelligence, and all the things that I see going on in 2023 are much less replacement and much more augmentation. It's not zero replacement, but it's much more augmentation in terms of how this plays. And that is super exciting.
Eric: Yeah. I mean, to some degree it reflects sort of the weakness in human beings’ own abilities to read what's happening. Ahead of this interview, I was talking to the publicly available ChatGPT. I don't know if you saw but I was asking it for questions and I felt like it delivered a very reasonable set of questions. You know, you've written about Blitzscaling, so [ChatGPT] is like, let's ask about that. It's, you know, ask in the context of Microsoft.
But when I was like, have you [ChatGPT] ever watched Joe Rogan? Have you ever been on a podcast? Sometimes maybe you should have a long sort of — you should have a statement like I'm doing right now, where I sort of have some things I'm saying.
Then I ask a question. Other times it should be short and sweet. Sometimes it, you know, annoys you and says "oligarchy" — like, I was explaining this to the chatbot.
[In an interview, a journalist] can't just ask a list of like, straightforward questions and it felt like it didn't really even get that. And I get that there's some sort of, we're, we're starting to have a conversation now with companies like Jasper, where it's almost like the language prompting itself.
I think Sam Altman was maybe saying it's almost a form of plain-language coding, because you have to figure out how to get what you want out of them. And maybe it was just my failure to explain it, but for replacing a journalist's questions, I didn't find the current model of ChatGPT really capable of that.
Reid: No, that's actually one of the things I find with ChatGPT: like, for example, you ask what questions to ask Reid Hoffman in a podcast interview, and you'll get some generic ones. It'll say, like, well, what's going on with new technologies like AI, and what's going on in Silicon Valley? And, you know, you're like, okay, sure.
But those aren't the really interesting questions. That's not what makes a great journalist, which is kind of a lens to something that people can learn from, and that will evolve and change, that'll get better. But that's, again, one of the reasons why I think it's a people-plus-machine thing.
Because for example, if I were to say, hey, what should I ask Eric about? Or what should I talk to Eric about and go to? Yeah, gimme some generic stuff. Now if I said, oh, give me a briefing on, um, call it, um, UN governance systems as they apply to AI, because I want to be able to talk about this. I didn't do this, but it would give me a kind of a quick Wikipedia briefing and that would make my conversation more interesting and I might be able to ask a question about the governance system or something, you know, as a way of doing it.
And that's, I think, why the combo is so great. Um, and anyway, so that's what we should be aiming towards. It isn't to say, by the way — sometimes replacement is a good thing. For example, you go to autonomous vehicles and say, hey, look, if we could wave a wand and every car on the road today would be an autonomous vehicle, we'd probably go from 40,000 deaths in the US per, you know, year to, you know, maybe a thousand or 2,000. You're saving 38,000 lives a year in doing this. It's a good thing. And, you know, it will have a positive vector on gridlock and on climate change and all the rest of the stuff.
And you go, okay, that replacement, yes, we have to navigate truck jobs and all the rest, but that replacement's good. But I think a lot of it is going to end up being, you know, kind of, various forms of amplification. Like if you get to journalists, you go, oh, it'll help me ask, figure out which interesting questions to add.
Not because it'll just go here, here's your script to ask questions. But you can get better information to prep your thinking on it.
Eric: Yeah. I'm glad you brought up the self-driving car case. And, you know — are you still on the board of Aurora?
Reid: I am.
Eric: I've, you know, I covered Uber, so I was in their self-driving cars very early, and they made a lot of promises. Lyft made a lot of promises.
I mean, I feel like part of my excitement about this sort of generative AI movement is that it feels like it doesn't require completeness in the same way that self-driving cars do, you know? And that's been a barrier to self-driving cars.
On the flip side, you know, sometimes we sort of wave away the inaccuracy and then we say, you know, we sort of manage it.
I think that's what we were sort of talking about earlier — you imagine it in some of the completeness that could come. So I guess the question here is just: do you think what I'm calling the completeness problem — just the idea that it needs to be sort of fully capable — will be an issue with the large language models? Or do you have this sort of augmented model, where it could sort of stop now and still be extremely useful to much of society?
Reid: I think it could stop now and be extremely useful.
I've got line of sight on current technology for a tutor, for a doctor, for a bunch of other stuff. One of the things my partner and I wrote last year was that within five years, there's gonna be a co-pilot for every profession.
The way to think about that is what professionals do: they process information, they take some kind of action. Sometimes that's generating other information, just like you see with Microsoft's Copilot product for engineers. And what you can see happening with DALL·E and other image generation for graphic designers — you'll see this for every professional: there will be a co-pilot on today's technology that can be built.
That's really amazing. I do think that as you continue to make progress, you can potentially make them even more amazing, because part of what happened when you move from, you know, GPT-3 to 3.5 is, all of a sudden, it can write sonnets. Right? You didn't really know that it was gonna be able to write sonnets.
That's giving people superpowers. Most people, including myself—I mean, look, I could write a sonnet if you gave me a couple of days and a lot of coffee and a lot of attempts to really try.
Eric: But you wouldn't.
Reid: You wouldn't. Yeah. But now I can go, oh, you know, I'd like to, to, um, write a sonnet about my friend Sam Altman.
And I can go down and I can sit there and I can kind of type, you know, duh da, and I can generate, well, I don't like that one. Oh, but then I like this one, you know, and da da da.
And that, that gives you superpowers. I mean, think about what you can do for writing, a whole variety of things, with that. And the more and more completeness — the word you were using — is, I think, also a powerful thing. Even though what we have right now is amazing.
Eric: Is GPT-4 a big improvement over what we have? I assume you've seen a fair bit of unreleased stuff. Like, how hyped should we be about the improvement level?
Reid: I have. I'm not really allowed to say very much about it cause, you know, part of the responsibilities of former board members and confidentiality. But I do think that it will be a nice—I think people will look at it and go, Ooh, that's cool.
And it will be another iteration, another thing as amazing as ChatGPT — which obviously, in the last few months, has kind of taken the world by storm, opening up this vista of imagination and so forth.
I think GPT-4 will be another step forward where people will go, ooh, that's another cool thing. I think that's — I can't be more specific than that, but watch this space cause it'll be cool.
Eric: Throughout this conversation, we've danced around this sort of artificial general intelligence question, starting with the discussion of Elon and the creation, eventually, of OpenAI. I'm curious how close you think we are to AGI and this idea of — I mean, people define it so many different ways, you know, it's more sophisticated than humans in some tasks, you know, mini tasks, whatever.
How far do you think we are from that? Or how, how do you see that playing out?
Reid: Personally, amongst a lot of the people who are in the field, I'm probably on the we're-much-further-than-we-think side. Now, some of that's because I've lived through this before with my undergraduate degree, and, you know, the pattern generally is, oh my God, we've gotten this computer to do this amazing thing that we thought was formerly the province of only these cognitive human beings.
And it could do that. So then by the way, in 10 years it'll be solving new science problems like fusion and all the rest. And if you go back to the seventies, you saw that same dialogue. I mean, it, it's, it's an ongoing thing. Now we do have a more amazing set of cognitive capabilities than we did before, and there are some reasons to argue that it could be in a decade or two.
Because you say, well, these large language models can enable coding, and that coding can then be self-reflective and generative, and that can then make something go. But when I look at the coding and how that works right now, it doesn't generate the kind of code that's like, oh, that's amazing new code.
It helps with the, oh, I want to do a parser for quicksort, right? You know, like that kind of stuff. And it's like, okay, that's great. Or a systems-integration use of an API, or calling in an API for a spellchecker or whatever. Like, it's really helpful stuff for engineers, but it's not like, oh my God, it's now inventing new kinds of techniques for training large-scale models.
And so I think even some of the great optimists — the great, like, believers that it'll be soon — will tell you there's one major invention to go. And the thing is, once you get to one major invention, is that one major invention? Is that three major inventions? Is it 10 major inventions?
Like I think we are some number of major inventions away. I don't, I certainly don't think it's impossible to get there.
Eric: Sorry — the major inventions are us human beings building things into the system, or…?
Reid: Yeah. Like, for example — a classic critique of a lot of large language models is, can it do common-sense reasoning?
Eric: Gary Marcus is very…
Reid: Exactly. Right. Exactly. And you know, the short answer right now is the large language models are approximating common sense reasoning.
Now, they're doing it in a powerful and interesting enough way that you're like, well, that's pretty useful, it's pretty helpful in what it's doing, but I agree that it's not yet doing all of that. And also you get problems like, you know, what's called one-shot learning. Can you learn from one instance of it?
Cause currently the training requires lots and lots of compute processing over days, or in self-play. Can you have an accurate memory store that you update? Like, for example, you say, now fact X has happened — update your entire world based on fact X. Look, there's a bunch of this stuff still to go.
And the question is, is that one major invention, is that, you know, five major inventions — and, by the way, major inventions are major inventions, even beyond all the amazing stuff we've done over the last five to 10 years. Major inventions on major inventions. So I myself tend to be two things on the AGI one.
I tend to think it's further than most people think. And I don't know if that further is 10 years versus five, or 20 years versus 10, or 50 years versus 20. I don't, I don't really know.
Eric: In your lifetime, do you think?
Reid: It's possible, although I don't know. But let me give two other lenses, I think, on the AGI question. Cause the other thing that people tend to do is they tend to go, there's like this AI, which is technique machine learning, and that's totally just great — it's augmented intelligence — and then there's AGI, and who knows what happens with AGI.
And you say, well, first, AGI is a whole range of possible things. Like, what if you said, hey, I can build something that's the equivalent of a decent engineer or decent doctor, but to run it costs me $200 an hour — and I have AGI? But it's $200 an hour. And you're like, okay, well, that's cool, and that means we can get as many of them as we need. But it's expensive.
And so it isn't like, all of a sudden, you know, Terminator, or, you know, inventing fusion or something like that, is AGI, or a potential version of AGI. So what is AGI is the squishy thing that people then go, magic. The second thing is, the way that I've looked at the progress in the last five to eight years is we're building a set of iteratively better savants, right?
Just like the chess player was a savant. Um, and the savants are interestingly different now. When does a savant become a general intelligence, and when might a savant become a general superintelligence? I don't know. It's obviously a superintelligence already in some ways. Like, for example, I wouldn't want to try to play Go against it and win — try to win.
It's a superintelligence when it comes to that, right? But, like, okay, that's great, cause from our perspective, having some savants like this that are superintelligences is really helpful to us. So the whole AGI discussion, I think, tends to go a little bit Hollywood-esque. You know, it's not Terminator.
Eric: I mean, there there is, there's a sort of argument that could be made. I mean, you know, humans are very human-centric about our beliefs and our intelligence, right? We don't have a theory of mind for other animals. It's very hard for us to prove that other species, you know, have some experience of consciousness like qualia or whatever.
Reid: Very philosophically good use of a term by the way.
Eric: Thank you. Um, I studied philosophy though. I've forgotten more than I remember. But, um, you know, I mean…
Reid: Someday we'll figure out what it's like to be a bat. Probably not this time.
Eric: Right, right, exactly. Is that, that's Nagel.
If the machine's better than me at chess and Go, there's a level of — you know, here I am saying it doesn't have an experience, but it's so much smarter than me in certain domains.
I don't — the question is just, like, it seems like humans are not capable of seeing what it's like to be a bat. So will we ever really be able to sort of convince ourselves that there's something that it's like to be, um, an AGI system?
Reid: Well, I think the answer is, um, yes, but it will require a bunch of sophistication. Like, one of the things I think is really interesting — as we anthropomorphize the world a little bit, and I think some of this machine intelligence stuff will enable us to do that — is, well, what does it mean to understand X, or know X, or experience X, or have qualia, or whatever else?
And right now what we do is we say, well, it's some kind of shadowy image from being human. So we tend to undercount, like, animals' intelligence. And people tend to be surprised — like, look, you know, some animals mate for life and everything else; they clearly have a theory of the world, and it's clearly stuff we're doing.
We go, ah, they don't have the same kind of consciousness we do. And you're like, well, they certainly don't have the same kind of consciousness, but we're not doing a very good job of studying where it's similar and where it's different. And I think we're gonna need to broaden that out to start saying, well, when you compare us and an eagle or a dolphin or a whale or a chimpanzee or a lion, you know, what are the similarities and differences?
And how this works. And, um, I think that will also then be, well, what happens when it's a silicon substrate? You know, do we think that consciousness requires a biological substrate? If so, why? Um, and, you know, part of how, of course, we get to understand each other's consciousness is we get this depth of experience — where I realize it isn't that you're just a puppet.
Eric: [laughs] I am, I am just a puppet.
Reid: Well, we're, we're talking to each other through Riverside, so, you know, who knows, right. You know, deep fakes and all that.
Eric: The AI's already ahead of you, you know. I'm just — it's already… no.
Reid: Yeah. I think we're gonna have to get more sophisticated on that question now.
I think it's too trivial to say that because it can mimic language in particularly interesting ways, and it says, yes, I'm conscious, that that makes it conscious. Like, that's not what we use as an instance. And part of it is, like — part of how we've come to understand each other's consciousness is we realize that we experience things in similar ways.
We feel joy in similar ways, we feel pain in similar ways, and that kinda stuff. And that's part of how we begin to understand. And I think it'll be really good that this may kick off us being slightly less, kind of, call it narcissistically anthropocentric in this, and taking a broader concept as we look at this.
Eric: You know, I was talking to my therapist the other day, and I was saying, you know, oh, I did this, like, kind gesture, but I didn't feel some profound — I don't know, it just seemed like the right thing to do. I did it. It felt like I did the right thing. Should — you know, shouldn't I feel, like, more around it?
And, you know, her perspective was much more like, oh, what matters is doing the thing, not sort of your internal states about it. Which, to me, would go to the — if the machine can do all the things we expect from sort of a caring-type machine, like, why do we need to spend all this time on it, when we don't even expect that of humans, to always feel the right feelings?
Reid: I totally agree with you. Look, I think the real question is what you do. Now that being said, part of how we predict what you do is that, you know, um, you may not have like at that moment gone, haha, I think of myself as really good cause I've done this kind thing. Which by the way, might be a better human thing as opposed to like, I'm doing this cause I'm better than most people.
Eric: Right.
Reid: Yeah, but it's the pattern in which you engage in these things, and part of the feelings and so forth is, cause that creates a kind of reliability of pattern: do you see other people? Do you have the aspiration to have not just yourself but the people around you leading better and improving lives?
And obviously, if that's the behavior that we're seeing from these things, then that's a lot of it. And the only question is, what's that forward-looking momentum on it? And I think amongst humans that comes to an intention, a model of the world, and so forth. You know, amongst machines, that may just be the, no, no, we're aligned.
Well, like, we've done a really good alignment with human progress.
Eric: Do you think there will be a point in time where it's, like, an ethical problem to unplug it? Like, I think of, like, a bear, right? Like, a bear is dangerous. You know, there are circumstances where we're pretty comfortable killing the bear.
But if the bear, like, hasn't actually done anything, and we've taken it under our care — like, we don't just shoot bears at zoos, you know? And it costs us money to sustain the bear at a zoo. Do you think there are cases where we might say, oh man, now there's an ethical question around unplugging it?
Reid: I think it's a when, not an if.
Eric: Yeah.
Reid: Right? I mean, it may be a when, once again, just like AGI, that's a fair ways out. But it's a when, not an if. And, by the way, I think that's again part of the progress that we make, because we think about, like, how should we be treating it? Because, you know, like, for example, if you go back a hundred, 150 years, the whole concept of animal rights doesn't exist in humans.
You know, it's like, hey, you want to torture animal X to death? You know, like, you're queer, but you're allowed to do that. That's an odd thing for you to do, and maybe it's kind of, like, distasteful, grungy, bad in some way, but, you know, it's like, okay. Whereas now you're like, oh, that person is, like, going out to try to torture animals! We should get them in an institution, right? Like, that's not okay.
You know, what is that further progress for rights and lives? And I think it will ultimately come — when it gets to, kind of, like, things that have their own agency and have their own consciousness and sets of existence.
We should be including all of that in some grand or elevated, you know, kind of rights conceptions.
Eric: All right, so back to my listeners who, you know, wanna know where to invest and make money off this, and, you know…
Reid: [laughs] It isn't from qualia and consciousness. Oh, wait.
Eric: Who do you think are the key players? The key players in the models. Then obviously there are more sort of, I don't know if we're calling them vertical solutions or product oriented or whatever, however you think about them.
But starting with the models — who do you see as sort of the real players right now? Are you counting out a Google, or do you think they'll still, you know, sort of show up?
Reid: Oh, no. I think Google will show up. And obviously, you know, OpenAI — Microsoft has done a ton of stuff. I co-founded Inflection last year with Mustafa Suleyman. We have a just amazing team, and I do see a lot of teams, so I'm—
Eric: And that's to build sort of the foundational…
Reid: Yeah, they're gonna, well, they're building their own models and they're gonna build some things off those models.
We haven't really said what they are yet, but that's obviously going to be kind of new models. Adept, another Greylock investment, is building its own models. Character is building its own models. Anthropic is building its own models, and Anthropic is, you know, Dario and the crew — smart folks from OpenAI — doing stuff within a kind of a similar research program to what OpenAI is doing.
And so I think those are the ones that I probably most track.
Eric: Character's an interesting case, and, you know, we're still learning more about that company. You know, I was first to report they're looking to raise $250 million. My understanding is that what's interesting is they're building the models, but then for a particular use case, right?
Or, like, it's really a question of leverage: do people need to build the models to be competitive, or do you think there will be… can you build a great business on top of Stability or OpenAI, or do you need to do it yourself?
Reid: I think you can, but the way you do it is you can't say it's cause I have unique access to the model. It has to be, you know, I have a business that has network effects or I'm well integrated in enterprise, or I have another deep stack of technology that I'm bringing into it. It can't just be, I'm a lightweight front end to it because then other people can be the lightweight front end.
So you can build great businesses, I think, with it. I do think that people will both build businesses off, you know, things like the OpenAI APIs, and I think people will also train models. Because I think one of the things that will definitely happen is a lot of… not just will large models be built in ways that are interesting and compelling, but I think a bunch of smaller models will be built that are specifically tuned and so forth.
And there's all kinds of reasons — everything from, you can build them to do something very specific, to, you know, inference cost: does it run on a low-compute or low-power footprint? You know, et cetera, et cetera. You know, AI doctor, AI tutor, um, you know, and on a cell phone. And, um, so, you know, I think the short answer to this is all of it.
Eric: Right. Do you think we are in a compute arms race still? Or do you think this is gonna continue where, if you can raise a billion dollars to buy, sort of, GPU access basically from Microsoft or Amazon or Google, you're gonna be sort of pretty far ahead? Or how do you think about the sort of money-and-compute race shaping up?
Reid: So I kind of think about it as two lines of trends. There's one line, which is the larger and larger models — which, by the way, you say, well, okay, so the scale compute goes from one x flops to two x flops; does your performance function go up by that?
And it doesn't have to go up by a hundred percent, or two x, or plus one x. It could go up by 25%, but sometimes that really matters — coding, doctors, you know, legal, other things. It's like, actually, in fact, even though it's twice as expensive — a 25% increase for, you know, twice as expensive compute — the 25% increase in performance is worth it. And I think you then have a set of things that are kind of going along that need to be using the large-scale models.
Then I think there's a set of things that don't have that need. And, for example, that's one of the reasons I wasn't really surprised at all by the profusion of image generation, cause those are, you know, generally speaking, trainable for $1 million to $10 million. I think there's gonna be a range of those.
I think, you know, maybe someone will figure out how to do, you know, a hundred-million-dollar version, and once they've figured out how to do a hundred-million-dollar version, someone will also figure out how to do the 30-million-dollar version of that hundred-million-dollar version.
And there's a second line going on where all of these other smaller models will fit into interesting businesses. And then I think a lot of people will either deploy an open source model that they're using themselves, train their own model, get a special deal with, like a model provider or something else as a way of doing it.
And so I think the short answer is there will be both, and you have to be looking at this from what's the specific thing this business is doing. You know, the classic issues of, you know, how do you go to market? How do you create a competitive moat? What are the things that give you real, enduring value that people will pay for in some way in a business?
All of those questions still apply, but there's gonna be a panoply of answers, depending on the different models of how it plays out.
Eric: Do you think spend on this space in terms of computing will be larger in ‘24 and then larger in ‘25?
Reid: Yes. Unquestionably.
Eric: We're on the, we're still on the rise.
Reid: Oh, yes. Unquestionably.
Eric: That's great for a certain company that you're on the board of.
Reid: Well look, and it's not just great for Microsoft. There are these other ones, you know, AWS, Google, but…
Eric: Right. It does feel like Amazon's somewhat sleepy here. Do you have any view there?
Reid: Well, I think they have begun to realize — what I've heard from the market is that they've begun to realize that they should have some stuff here. I don't think they've yet gotten fully underway. I think they are trying to train some large language models themselves. I don't know if they've even realized that there is a skill to training those large language models, cause, like, you know, sometimes people say, well, you just turn on and run the training regime that you read in the papers, and then you make stuff.
We've seen a lot of failures of people trying to build these things and failing to do so. So, you know, there's an expertise that you learn in doing it as well. And so I think—
Eric: Sorry to interrupt — if Microsoft is around OpenAI and Google is around Anthropic, is Amazon gonna be around Stability? That's sort of the question that I'll put out to the world. I don't know if you have a view.
Reid: I certainly don't know anything. And in this case — you know, very, very politely said — Anthropic and OpenAI have scale with huge models. Stability is all small models, so, hmm.
Eric: Yeah. Interesting. I don't think I've asked you sort of directly about stepping off the OpenAI board. I mean, I would assume you would prefer to be on the board, or…?
Reid: Yeah. Well, so look, it was a funny thing, because, um, you know, I was getting more and more requests from various Greylock portfolio companies, cause we've been investing in AI stuff for over five years. Like, real AI — not just what we call “software AI,” but actual AI companies.
For a while, I was getting more and more requests to do it, and I was like, oh, you know, what I did before was, well, here's the channel — like, here is the person who handles the API requests, go talk to them. And they're like, why can't you help me? I was like, well, I'm on the board.
I have a responsibility to not be doing that. And then I realized that, oh s**t, it's gonna look more and more like I might have a real conflict of interest here, even as we're really carefully navigating it. And it was really important, cause, you know, various forces are gonna kind of try to question the, frankly, super deep integrity of OpenAI.
It's like, look, Sam, I think it might be best — even though I remain a fan, an ally, um, committed to helping — I think it may be best for OpenAI, and generally, to step off the board to avoid a conflict of interest. And we talked about it a bunch and said, okay, fine, we'll do it.
And, you know, I had dinner with Sam last night, and most of what we were talking about was kind of the range of what's going on and what are the important things that OpenAI needs to solve. And how should we be interfacing with governments so that governments understand? What are the key things that should be in the mix? And what great future things for humanity are really important not to fumble — amid the general, like, everyone going, oh, I'm worrying.
And then I said, oh, I got a question for you. And he's like, yeah, okay. I'm like, now that I'm no longer on the board, could I ask you to personally look at unblocking my portfolio company's thing with the API? Because I couldn't ever ask you that question before, cause it would be unethical. But now I'm not on the board, so can I ask the question?
He's like, sure, I'll look into it. I'm like, great, right? And that's the substance of it, which I never would've done before. But that wasn't why — I mean, obviously I love Sam and the OpenAI team.
Eric: The fact that you're sort of a Democratic superdonor — was that in the calculus? Or — because, I mean, we are seeing Republican… well, I didn't think that at all coming into this conversation, but just hearing what you're saying, looking at it now, it feels like Republicans are, like, trying to find something to be angry about.
Reid: Well
Eric: These AI things, I don't quite…
Reid: The unfortunate thing about the most vociferous of the Republican media ecosystem is they just invent fiction — like, they're hallucinating full out.
Eric: Right.
Reid: I mean, it's just, like — I mean, the amount of just, like, you know, 2020 election denial and all the rest. Which you can tell, from having their texts released from Fox News, that, like, here are these people who are on camera going on about how there's a question about, you know, what happened in the election.
And they're texting each other going, oh my God, this is insane, this is a coup, you know, da da da. And you're like, okay. Anyway — so, like, they don't require truth to generate heat and friction. So that wasn't it, no, no. It's just really — it's kind of the question of, when you're serving on a board, you have to understand what your mission is very deeply and navigate it.
And part of being on a 501(c)(3) board is to say, look, obviously I contribute by being a board member, helping navigate various circumstances, and all the rest. And, you know, I can continue to be a counselor and an aide to the company without being on the board. And one of the things I think is gonna be very important, for the next X years, for the entire world to know, is that OpenAI takes its ethics super seriously,
Eric: Right.
Reid: As do I.
Eric: Does that fit with having to invest? I mean, there are lots of companies that do great things. They have investors. I believe in companies probably more than, personally, I believe in charities to accomplish things. But the duality of OpenAI is extremely confusing. Like, did Greylock itself invest a lot, or did you invest early as an angel?
Reid: I was the founding investor as an angel, as a program-related investment from my foundation. Because, like, I started — I was among the first people to make a philanthropic donation to OpenAI. Just straight out, you know, here's a grant by Wednesday. Then Sam and crew came up with this idea for doing this commercial LP, and I said, look, I'll help, and I have no idea if this will be an interesting economic investment.
They didn't have a business plan, they didn't have a revenue plan, they didn't have a product plan. I brought it to Greylock. We talked about it, and they said, look, we think this will possibly be a really interesting technology, but, you know, part of our responsibility to our LPs — which, you know, include a whole bunch of universities and else — is we invest in businesses, and there is no business plan.
Eric: So is that what Khosla did? Khosla's like, we invest in wild things. Anyway, we don't care. That's sort of what Vinod wants to project anyway, so yeah.
Reid: You know, yes, that's exactly the same. So I put in 50, and then he put in a — I think he was the only venture fund investing in that round. But, like, there was no business plan, there was no revenue model, there was no go-to-market…
Eric: Well, Sam basically says, someday we're gonna have AGI, and we're gonna ask it how to make a bunch of money. Like, is he — that's a joke, right? Or, like, how much is he joking?
Reid: It's definitely — it's not a 100% joke, and it's not a 0% joke. It's a question around — the mission is really about how do we get to AGI, or as close to AGI as useful, and make it useful for humanity. And, by the way, the closer you get to AGI, the more interesting technologies fall out, including the ability to have the technology itself solve various problems.
So if you said, we have a business model problem, it's like, well, ask the thing. Now, if you currently sit down and ask, you know, ChatGPT what the business model is, you'll get something pretty vague and generic that wouldn't get you a meeting with a venture capitalist, because it's like, “we will be ad-supported”… and you're like, okay. Right.
Eric: Don't you have a company that's trying to do pitch decks now or something?
Reid: Oh yeah, Tome. And it's awesome, and that's the right kind of thing. Because what it does is you say, hey, give me a set of tiles, together with images and graphics, arguing X, and then you start working with the AI to improve it. You say, oh, I need a slide that does this, and I need a catchier headline here, and so on.
And obviously you can edit it yourself too. So that's the kind of amplification. You don't say, give me my business model, right?
Eric: You're like, I have this business model, like articulate it.
Reid: Exactly.
Eric: On politics: I feel like Silicon Valley has long operated on the premise that everybody can get along. There's competition, but you still stay close to everybody. And you especially are in the PayPal mafia with a lot of people who are very conservative now.
The Trump years broke that in some ways. So how did you maintain those relationships? I see headlines that say you're friends with Peter Thiel. What's the state of your friendship with Peter Thiel, and how did it survive?
I guess the Trump years is the question.
Reid: Well, I think the thing that Peter and I learned when we were undergraduates at Stanford together is that it's very important to maintain conversation and to argue things. I was a lefty, he was a righty, and we'd argue a lot.
It's difficult to argue about things that feel existential and ethically charged, and things around Trump are like that. Trump feels to me like a corrosive acid upon our democracy, one that is disfiguring us and staining us in the eyes of the world. So having a dispassionate argument about it is challenging. It ends up on uneven ground, with statements like, I can't believe you're f*****g saying that, as part of the dialogue.
But on the other hand, maintaining dialogue is, I think, part of how we make progress as a society. And I'm basically sympathetic to people as long as they are legitimately and earnestly committed to the dialogue and to the discussion of truth between us.
And so there are folks from the PayPal years that I don't really spend much time talking to. There are others that I do, because that conversation about discovering who we are and who we should be is really important. And you can't allow your own position to be the definer.
It almost goes back to what we were talking about on the AI side, which is: make sure you're talking to other smart people who challenge you, to make sure you're doing the right thing. And that's, I think, a good general life principle.
Eric: Well, part of my dream of the Silicon Valley world is that Twitter is the open forum, that we're having sincere, on-the-level debates. But then you see something like, you know, the…
Reid: You don't think it's the modern Seinfeld show? Well, not Seinfeld. Springer, Jerry Springer.
Eric: Yeah, right. But whether the arguments are on the level is my problem with some of the Peter Thiel arguments: that he's not actually publicly advancing his beliefs in a sincere way, and that that's almost more corrosive.
Reid: Oh, that's totally corrosive. And to the extent that's happening, it's terrible. That's one of the things I push people on a lot, including Peter, in the conversations I have.
Eric: Are you still going to donate a lot? Are you as animated about the Democratic Party and working through donor channels at the moment?
Reid: Well, what I would say is that we have a responsibility. It's kind of the Spider-Man ethic: with power comes responsibility, with wealth comes responsibility. You have to try to help contribute to the better society that we should be living and navigating in.
And so I stay committed on that basis. And I do think there are some really amazing people in the administration. I think Biden is kind of a good everyday guy.
Eric: Yeah.
Reid: In fact, he's good at trying to build bridges in the country. And I think there are people like Secretary Raimondo and Secretary Buttigieg who are thinking intensely about technology and what should be done in the future.
And then I think there are a bunch of folks on the Democratic side who are more concerned with their demagoguery than with the right thing for society. So I tend to be unsympathetic to, you know…
Eric: I know Michael Moritz, of Sequoia, wrote that op-ed criticizing San Francisco's government, and there's certainly this sort of woke critique of the Democratic Party. I'm curious if there's a piece of it, outside of the governance question, that you're…
Reid: Well, the interesting thing about woke is people say, we're anti-woke. And you're like, well, don't you think being awake is a good thing? It's kind of a funny thing.
Eric: And the ill-defined nature of woke is key to the allegation, because what's the substantive thing you're saying there? And we're seeing Elon tweet about race right now, which is sort of terrifying anyway.
Reid: Yeah. I think the question on this stuff is to say, look, people have a lot of different views, and some of those views are bad, especially ones in the minority, and need to be advocated against in various ways. Part of why we like democracy is to have discourse.
I'm very concerned about the state of public discourse. Obviously most people tend to focus that around social media, which has some legitimate things we need to talk about. But on the other hand, they don't track the opinion shows on, say, Fox News that implicitly represent themselves as news shows and say, there was election fraud in 2020. Then, when they're sued for the various forms of defamation, they say, we're just an entertainment show, we don't do anything like news. So we are already struggling with a variety of these issues within society, and I think we need to sort them all out.
Eric: Is there anything on the AI front that we missed or that you wanted to make sure to talk about? I think we covered so much great ground.
Reid: And we can do it again, right? It's all great.
Eric: I love it. This was all the things you're interested in and I'm interested in, so that's great. I really enjoyed having you on the podcast. Thanks.
Reid: Likewise. I follow the stuff you do, and it's cool. Keep doing it.