For the first episode of the Newcomer podcast, I sat down with Reid Hoffman — the PayPal mafia member, LinkedIn co-founder, Greylock partner, and Microsoft board member. Hoffman, who had just stepped off OpenAI’s board of directors, traced his interest in artificial intelligence back to a conversation with Elon Musk.
“This kicked off, actually, in fact, with a dinner with Elon Musk years ago,” Hoffman said.
In conversations about a decade ago, Musk told Hoffman that he needed to dive into artificial intelligence. “This is part of how I operate,” Hoffman remembers. “Smart people from my network tell me things, and I go and do things. And so I dug into it and I’m like, ‘Oh, yes, we have another wave coming.’”
This episode of Newcomer is brought to you by Vanta
Security is no longer a cost center — it’s a strategic growth engine that sets your business apart. That means it’s more important than ever to prove you handle customer data with the utmost integrity.
But demonstrating your security and compliance can be time-consuming, tedious, and expensive. Until you use Vanta.
Vanta’s enterprise-ready Trust Management Platform empowers you to:
* Centralize and scale your security program
* Automate compliance for the most sought-after frameworks, including SOC 2, ISO 27001, and GDPR
* Earn and maintain the trust of customers and vendors alike
With Vanta, you can save up to 400 hours and 85% of costs. Win more deals and enable growth quickly, easily, and without breaking the bank.
For a limited time, Newcomer listeners get $1,000 off Vanta. Go to vanta.com/newcomer to get started.
Why I Wanted to Talk to Reid Hoffman & What I Took Away
Hoffman is a social network personified. Even his journey to something as wonky as artificial intelligence is told through his connections with people. In a world of algorithms and code, Hoffman is upfront about the extent to which human connections decide Silicon Valley’s trajectory. (Of course, those connections are paired with profound technological developments that are far larger than any one person or network.)
When it comes to the rapidly developing future powered by large language models, a big question in my mind is who exactly decides how these language models work?
Sydney appeared in Microsoft Bing and then disappeared. Microsoft executives can dispatch our favorite hallucinations without public input. Meanwhile, masses of images can be gobbled up without asking their creators and then the resulting image generation tools can be open-sourced to the world.
It feels like AI superpowers come and go with little notice.
It’s a world full of contradictions. There’s constant talk of utopias and dystopias and yet startups are raising conventional venture capital financing.
The most prominent player in artificial intelligence — OpenAI — is a non-profit that raised money from Tiger Global. It celebrates its openness in its name and yet competes with companies whose technology is actually open-sourced. OpenAI’s governance structure and priorities largely remain a mystery.
Finally, unlike tech’s conservative billionaires who throw their money into politics, Hoffman is a tech overlord I seem to mostly agree with politically. I wanted to know what that would be like. Is it just good marketing? And where exactly are his heart and political head at right now?
I thought he delivered. I didn’t feel like he was dodging my questions, even in a world where maintaining such a wide network requires diplomacy. Hoffman seemed eager and open — even if he started to bristle at what he called my “edgy words.”
Some Favorite Quotes
We covered a lot of ground in our conversation. We talked about AI sentience and humans’ failures to identify consciousness within non-human beings. We talked about the coming rise in AI cloud compute spending and how Microsoft, Google, and Amazon are positioned in the AI race.
Hoffman said he had one major condition for getting involved in OpenAI back in the early days when Musk was still on board.
“My price for participation was to ask Elon to stop saying the word ‘robocalypse,’” Hoffman told me. “Because I thought that the problem was it’s very catchy and it evokes fear.”
I asked Hoffman why he thought Musk got involved in artificial intelligence in the first place when Musk seems so worried about how it might develop.
Why get the ball rolling down the hill at all, I wondered?
Hoffman replied that many people in the field of artificial intelligence had “messiah complexes.”
“It’s the ‘I am the one who must bring this’ — Prometheus, the fire to humanity,” Hoffman said. “And you’re like, ‘Okay, I kind of think it should be us versus an individual.’”
He went on, “Now, us can’t be 8 billion people — us is a small group. But I think, more or less, you see the folks who are steering with a moral compass try to say, how do I get at least 10 to 15 people beyond myself with their hands on the steering wheel in deep conversations in order to make sure you get there? And then let’s make sure that we’re having the conversations with the right communities.”
I raised the possibility that this merely suggested oligarchic control of artificial intelligence rather than dictatorial control.
We also discussed Hoffman’s politics, including his thoughts on Joe Biden and “woke” politics. I asked him about the state of his friendship with fellow PayPal mafia member Peter Thiel.
“I basically am sympathetic to people as long as they are legitimately and earnestly committed to the dialogue and discussion of truth between them and not committed otherwise,” Hoffman said. “There are folks from the PayPal years that I don’t really spend much time talking to. There are others that I do continue because that conversation about discovering who we are and who we should be is really important. And you can’t allow your own position to be the definer.”
I suggested that Thiel’s public views sometimes seemed insincere.
“Oh, that’s totally corrosive,” Hoffman said. “And as much as that’s happening, it’s terrible. And that’s one of the things that in conversations I have, I push people, including Peter, on a lot.”
Give it a listen.
Find the Podcast
Read the Transcript
Eric: Reid, thank you so much for coming on the show. I'm very excited for this conversation. You know, I'm getting ready for my own AI conference at the end of this month, so hopefully this is sort of a prep. By the end of this conversation, we'll all be super smart and ready for that. I feel like there've been so many rounds of sort of AI as sort of the buzzword of the day.
This clearly seems the hottest. When did you get into this moment of it? I mean, obviously you just stepped off the OpenAI board. You were on that board. Like how, when did you start to see this movement that we're experiencing right now coming?
Reid: Well, it's funny because my undergraduate major was artificial intelligence and cognitive science. So I've, I've been around the hoop for multiple waves for a long time and I think this kicked off actually, in fact, with a dinner with Elon Musk years ago. You know, 10-ish years ago, Elon and I would have dinner about once a quarter and he's like, well, are you paying attention to this AI stuff?
And I'm like, well, I majored in it and you know, I know about this stuff. He's like, no, you need to get back involved. And I was like, all right.
This is part of how I operate: smart people from my network tell me things and I go and do things. And so I dug into it and I went, oh yes, we have another wave coming.
And this was probably about seven or eight years ago, when I, when I saw the beginning of the wave or the seismic event. Maybe it was a seismic event out at sea and I was like, okay, there's gonna be a tsunami here and we should start getting ready cause the tsunami is actually gonna be amazingly great and interesting.
Eric: And that—is that the beginning of OpenAI?
Reid: OpenAI is later. What I did is I went and made connections with the kind of the heads of every AI lab and major company because I concluded that I thought that the AI revolution will be primarily driven by large companies initially because of the scale compute requirements.
And so, you know, talked to Demis Hassabis, met Mustafa Suleyman, talked to Yann LeCun, talked to Jeff Dean, you know, all these kind of folks and kind of, you know, built all that. And then it was later in conversations with Sam and Elon that I said, look, we need to do something that's a pro-humanity effort, not just a commercial effort.
And my price for participation, cause I thought it was a great idea, but my price for participation was to ask Elon to stop saying the word robocalypse. Because I thought that the problem was that it's very catchy and it evokes fear. And actually, in fact, one of the things I think about this whole area is that it's so much more interesting and has so much amazing opportunity for humanity.
A little bit like, I don't know if you saw the Atlantic article I wrote that we evolve ourselves through technology and I'm, you know, going to be doing some writings around describing AI as augmented intelligence versus artificial intelligence. And I wanted to kind of build that positive, optimistic case that I think is the higher probability that I think we can shape towards and so forth.
So it's like, okay, I'm in, but no more Robocalypse.
Eric: I appreciate that, as sort of the ultimate network person, you tell the story through people. I always appreciate when the origin stories of technology actually come through the human beings. With Elon in particular, I'm sort of confused by his position because it seems like he's very afraid of AI.
And if that's the case, why would you want to, like, do anything to sort of get the ball rolling down the hill? Like, isn't there a sort of just like, stay away from it, man, if you think it's so bad. How do you see his thinking? And I'm sure it's evolved.
Reid: Well, I think his instinct for the good and the challenging of this is he tends to think AI will only be good if I'm the one who's in control.
Eric: Sort of, yeah.
Reid: Yeah. And this is actually somewhat replete within the modern AI field. Not everybody but this. And Elon is a public enough figure that I think, you know, making this comment of him is not talking out of school.
Other people would, there's a surprising number of Messiah complexes in the field of AI, and, and it's the, I am the one who must bring this, you know, Prometheus, you know, the fire to humanity. And you're like, okay, I kind of think it should be us, right? Versus an individual. Now us can't be 8 billion people, us is a small group, but I think more or less you see the, the folks who are steering with a moral compass try to say, how do I get at least 10 to 15 people beyond myself with their hands on the steering wheel in deep conversations in order to make sure you get there and then let, let's make sure that we're having the conversations with the right communities.
Like if you say, well, is this going to, you know, institutionalize, ongoing, um, you know, power structures or racial bias, something else? Well, we're talking to the people to make sure that we're going to minimize that, especially over time and navigate it as a real issue. And so those are the, like, that's the kind of anti Messiah complex, which, which is more or less the efforts that I tend to get involved in.
Eric: Right. At least sort of an oligarchy of AI control instead of just a dictatorship of it.
Reid: Well, yeah, and it depends a little bit, even on oligarchy, look, things are built by small numbers of people. It's just a fact, right? Like, there aren't more than, you know, a couple of founders, maybe maximum five in any, any particular thing. There is, you know, there's reasons why. When you have a construction project, you have a head of construction, right?
Et cetera. The important thing is to make sure that's why you have, why you have a CEO, you have a board of directors. That's why you have, you know, you say, well, do we have the right thing where a person is accountable to a broader group? And that broader group feels their governance responsibility seriously.
So oligarchy is a—
Eric: a charged
Reid: is a charged word. And I,
Eric: There’s a logic to it. I'm not, I'm not using it to say it doesn't make sense that you want the people to really understand it around, around it. Um, I mean, specifically with OpenAI, I mean, you, you just stepped off the board. You're also on the board of Microsoft, which is obviously a very significant player.
In this future, I mean, it's hard to be open. I get a little frustrated with the “open” in “OpenAI” because I feel like there's a lot that I don't understand. I'm like, maybe they should change the name a little bit, but is it still a charity in your mind? I mean, it's obviously raised from Tiger Global, the ultimate profit maker.
Like, how should we think about the sort of core ambitions of OpenAI?
Reid: Well, um, one, the board I was on was a fine one and they've been very diligent about making sure that all of the controls, including for the subsidiary company, are from the 501(c)(3) and diligent to its mission, which is staffed by people on the 501(c)(3) board with the responsibilities of being on a 501(c)(3) board, which is being in service of the mission, not doing, you know, private inurement and other kinds of things.
And so I actually think it is fundamentally still a 501(c)(3). The challenge is if you kind of say, you look at this and say, well, in order to be a successful player in the modern sca