Transcripts

This Week in Tech 1051 Transcript

Please be advised this transcript is AI-generated and may not be word for word. Time codes refer to the approximate times in the ad-supported version of the show.

 

Alex Kantrowitz [00:00:00]:
It's time for TWiT, This Week in Tech. I'm Alex Kantrowitz, host of Big Technology Podcast, subbing in for Leo Laporte. We're going to talk about the AI bubble and whether we should worry about the Antichrist. And we're going to do it with a spectacular panel of brilliant tech experts. We have Brian McCullough, the host of the Tech Brew Ride Home podcast, Dan Shipper, the CEO of Every, and Ari Paparo, the host of the Marketecture podcast. Stay tuned for a show you do not want to miss.


Alex Kantrowitz [00:00:43]:
You're watching This Week in Tech, episode 1051, recorded on September 28, 2025: Hype or True. It's time for TWiT, This Week in Tech, the show where we cover the week's tech news. Are we in an AI bubble? Let's find out with an exceptional panel of guests. I'm Alex Kantrowitz, host of Big Technology Podcast, in for Leo, who's on vacation this week. So thank you, Leo and the TWiT team, for trusting me with the show this week. And thanks to everybody here in the audience. I promise you, as we go through the show, you are not going to be disappointed. We have so much to talk about, everything from the big AI investments to the state of big tech antitrust, the TikTok deal, whether that's ever going to happen.

Alex Kantrowitz [00:01:30]:
We're going to talk about whether AI is replacing jobs. Then we'll have some fun at the end. And we're joined by a killer group to talk with us about all of these fascinating news stories, including Brian McCullough, the host of the Tech Brew Ride Home podcast. Brian, great to see you.

Brian McCullough [00:01:46]:
Great to see you, Alex.

Alex Kantrowitz [00:01:48]:
We're also joined by Dan Shipper, the CEO of Every and the host of the AI and I podcast. Welcome, Dan.

Dan Shipper [00:01:55]:
Hello.

Dan Shipper [00:01:56]:
Thanks for having me.

Alex Kantrowitz [00:01:57]:
Thanks for being here. We're going to hear more about both of your shows as we go on. And let me now introduce our third guest, Ari Paparo. He's the host of the Marketecture podcast. Ari, great to see you.

Ari Paparo [00:02:08]:
Happy to be here, Alex. Thanks for inviting me.

Alex Kantrowitz [00:02:11]:
All right, let's get right to our first story, because we've been used to some crazy headlines in the tech world with big numbers, but I don't think we've quite seen anything like the headline that kicked off the week, which is that Nvidia is expected to invest $100 billion in OpenAI. This is from the Wall Street Journal: Nvidia and OpenAI, two US giants powering America's race for superintelligence, outlined an expansive partnership that includes plans for an enormous data center build-out and a $100 billion investment by the chipmaker into the startup. The deal, announced Monday, will allow OpenAI to build and deploy at least 10 gigawatts of Nvidia systems. That amount of electricity is roughly comparable to what is produced by more than four Hoover Dams, or the power consumed by 8 million homes. There was a great Information headline about this: Nvidia pays OpenAI to buy its chips. And I've been sitting on it for a bunch of days now, and to me the headline, the whole thing, feels a little bit odd. If everything was going so well, why would Nvidia need to pay OpenAI to buy its chips? Is the circular nature of these investments healthy? Brian, what do you think?

Brian McCullough [00:03:29]:
Is it healthy? No, but it does solve problems for everybody involved, in the sense that Nvidia is feeling like, well, we need to lock down our dominance in terms of this entire ecosystem, so we lock down our biggest customer, theoretically, or one of the primary customers, especially because Google has their own chips, Amazon can create their own chips, everyone can do their own stuff. The only people that theoretically can't, although they're trying to do their own chips as well, is OpenAI. And what does it help OpenAI do? Well, OpenAI has to raise tons of money in order to continue to live. And so it's a great idea to lock down the money that will allow them to buy the chips, that will allow them to continue to exist, and blah, blah, blah, blah, blah. It's great for everyone involved. Now, does that mean that this is sustainable? That's what we're gonna talk about for a long, long time. But I think that this is not only a genius move, it's primarily a genius move for Nvidia, because it essentially keeps the chessboard in place for Nvidia for, let's call it, five years. It's in Nvidia's interest, given they have become the most valuable company in the history of the planet in the last five years.

Brian McCullough [00:05:19]:
It allows them to sort of keep the chessboard at least not locked in place, but in a reasonable facsimile of what it has been, so that they can continue to at least move things in the direction that is beneficial to them.

Alex Kantrowitz [00:05:36]:
Ari, what do you think about this? I mean, you've seen some wacky things happen back in the dot-com days. Is this just another march towards an inevitable collapse?

Ari Paparo [00:05:45]:
First of all, let me just say these people, these cowards, they should just build the Dyson sphere. What are they waiting for? Just go to the sun and capture the energy. This hundred billion is piddly squat. But short of that, I will just say a lot of people have been talking about the dot-com bubble, because there was a lot of what was called round-tripping back then, where, say, AOL would invest in your startup and then you would take that exact same money and spend it on AOL. And what happened was that when the tide went out, people found out who wasn't wearing shorts. I guess that's the Warren Buffett expression. And in the dot-com case, there were a lot of times where there was a big company who was making money from a lot of small companies, and the small companies all went out of business and then the big company had a giant loss. It's a little different in this case because they're both kind of behemoths. So while, if there is an AI bubble and it bursts, you might have problems where you might see Nvidia miss earnings because of expected spend that doesn't resolve.

Ari Paparo [00:06:51]:
I don't think anyone expects either of these companies to go under even in the worst case scenario. Although you never know. It's an interesting parallel, but it's a bit of a different dynamic than what happened during the dot com bubble when it was really abused by a lot of these companies.

Alex Kantrowitz [00:07:08]:
Okay. And we're going to get into the bubble dynamics of this moment in our second segment, but I'm curious, first of all, where is this money going to come from and what is it going to be spent on? Because Nvidia didn't have $100 billion in cash on hand. Now it's going to be, I think, a $10 billion investment to start and then $10 billion increments as we go.

Brian McCullough [00:07:32]:
But Alex, isn't it a coincidence that over the last three years Nvidia's net profits were about $110 billion? So essentially, just like Ari was saying, they made $100 billion over the last three years and they're turning around and doing that.

Alex Kantrowitz [00:07:55]:
Could be. Could be. All right, Brian, you're making me a believer here. But at the end... can I jump in real quick? Yeah. Dan, I'm going to turn it to you. But I think at the end of the day, what we need is use, right? We need companies to use it... what Nvidia and OpenAI need is use. OpenAI is just going to make $13 billion this year. It's going to lose $120 billion between now and 2029.

Alex Kantrowitz [00:08:16]:
So Dan, talk a little bit about what this money is going to be used for. Is it training, is it inference? And how are we ever going to see these companies drive the demand and the customer usage to make up the money?

Dan Shipper [00:08:30]:
I mean, I assume both. Sam Altman hasn't sent me his, you know, balance sheet to tell me how he's going to spend it, but I assume it'll be on both, and I think this is a totally normal thing to do. So for example, one of our big consulting clients at Every also invested in us. Sometimes it's weird. Sometimes it's a thing that you have to be concerned about, for example in the dot-com era. And again, I think a good question is: what is motivating that investment or that spend? I think in the case of OpenAI and Nvidia, the demand for AI is truly shocking. ChatGPT is the fastest-growing consumer app ever. If you use these tools, especially on an enterprise level, either in the API or, for example, Codex, their coding tool, or Claude Code, Anthropic's coding tool, you know that there are regular outages because they cannot fill the demand for API use, for developers, for users.

Dan Shipper [00:09:38]:
And I think it feels like a really good bet that that demand is only going to go up as things get cheaper and as things get smarter, and also as humans get better at knowing how to get the most out of these models. And it makes a lot of sense to take money from one of your closest partners that's going to supply you with the chips you need to build these data centers, to get better aligned with them to do the gigantic build-outs you're going to need over the next 10 years. And I think there's a separate question about whether we're in a bubble. It seems really clear the valuations in AI are incredibly high and frothy, but there's fundamentally a lot of demand, and they need the chips in order to fulfill the demand.

Alex Kantrowitz [00:10:23]:
Then why so many caveats? So this is from Spyglass: Nvidia intends to invest up to $100 billion in OpenAI over time. From MG Siegler, he writes: first and foremost, it's a letter of intent. But the real key is that Nvidia intends to invest up to $100 billion in OpenAI. That's not one qualifier, it's two. We've got "intends" and "up to." In other words, they may not invest $100 billion, they may not invest the full $100 billion, or both, or neither. And the sentence doesn't end there: as the new Nvidia systems are deployed.

Alex Kantrowitz [00:10:55]:
In other words, this investment has stipulations. It seems like this happens so often with OpenAI: there are these big announcements, $100 billion with Nvidia, $500 billion for Stargate. And then once the headline fades away, everyone looks at the fine print and they say, wait a second here, that's not quite what you told us.

Dan Shipper [00:11:14]:
I think that's a totally legit criticism. And it seems like Nvidia is making a $10 billion investment, and then they have the option to do more. And so there is something real here, of substance, that is meaningful, that OpenAI is going to need. They're going to need to hit a lot of things, get a lot of things right, in order to unlock more of the investment, if that ever comes. I think it's totally true. It's clear that they really want these gigantic big headline numbers. And it's the same way that if you're a startup getting acquired, you're going to want to say the biggest number you possibly can, which includes all the incentives and all of the, like, if-the-stock-price-goes-up kind of stuff. And I think that, yeah, OpenAI is playing that game a bit, but, you know, $10 billion is not nothing.

Dan Shipper [00:11:58]:
So there is definitely substance there.

Brian McCullough [00:12:00]:
Yeah.

Ari Paparo [00:12:01]:
All right, Brian, I'll jump in here.

Alex Kantrowitz [00:12:02]:
Go ahead.

Ari Paparo [00:12:04]:
There's a lot of variables moving around, and the two sides both want to lock in certain variables. Right? OpenAI wants to lock in its valuation, potentially at the peak, and say, you're going to invest at this level. And then secondly, OpenAI wants to lock in the ability to buy certain amounts of capacity from Nvidia, which, presumably because they're an investor, they would be able to. In contrast, if they didn't do this deal and they anticipated buying, say, $10 billion worth of chips a year for the next couple of years, they would have to take the risk of raising that much money, potentially at a lower valuation or in some other environment. So they lock in the valuation, they lock in the supply, and Nvidia gets this huge benefit, which is they get to put in sales, and it's great for their stock price as well.

Alex Kantrowitz [00:12:52]:
All right, Brian, you mentioned that Nvidia didn't want the competitor chips to get a heads up because of OpenAI potentially going there. This is also from Spyglass: OpenAI is said to be working on their own chips. Undoubtedly not a full-on Nvidia replacement, at least not anytime soon. But still, if they can move some of their workloads off Nvidia's stack, that's a huge story. You also, of course, have Google working on their own chips, Amazon working on their own chips. How much of this is just Nvidia being a little bit nervous that, you know, as we move into more specialized AI workloads, there might be others that can provide what it provides?

Brian McCullough [00:13:32]:
If we're only looking at it from the Nvidia side, that's the only story here: Nvidia is trying to make sure the moat still has water in it, and they like the idea that, at least at this point, they are the only game in town. As long as everyone is supply constrained, they're supply constrained for Nvidia stuff, which is what Nvidia wants. If there are other alternatives out there, then when it's supply constrained and, boy, this new chip comes online or whatever, they still want to be the only game in town. From OpenAI's perspective, they need to maintain, for their valuation purposes, for their branding purposes, the look that they are the leader of this new era. We keep hearing that Microsoft is having trouble selling their AI to the enterprise because people inside of enterprises are saying, well, we just want OpenAI, we want ChatGPT. So OpenAI needs to maintain the sense that they are the leaders, that they are absolutely the tip of this wedge in terms of what the new era is.

Brian McCullough [00:15:05]:
It behooves both of them to be essentially Wintel; to use Ari's historical analogy, this is the Microsoft and Intel of the 1990s. And Alex, I don't know if you mentioned this, but maybe this deal is not actually going to be buying chips, it might be leasing chips. But in the end it doesn't matter for either of them, because we won't know the real answer for what this means for their bottom lines for maybe a year or two. This is really just maintaining the sense of leadership and the idea that both of them are the tip of the spear.

Alex Kantrowitz [00:15:56]:
Well, let me put it this way, and let me throw this out to Ari and Dan. There is a notion that you really are starting to see only two players left in AI, and this might be a little controversial, but you have Google, right, which has the model development team. It has the products that it can deploy the models out onto, and it has TPUs, right, the chips. And then you have OpenAI, which has the product development team, it has the surface with ChatGPT, and that missing part was the chips. So are we seeing basically a consolidation of the AI race into these two poles, Google and OpenAI/Nvidia? Is that what's happening here, Ari?

Ari Paparo [00:16:34]:
Well, it's definitely a scale game, both in terms of the investment required to be a leader and the ability to monetize that investment. Grok would probably be a good example, where they're trying to compete, but they don't necessarily have the monetization available to them the way a leader would. So OpenAI announcing, or sort of announcing, that they're going to get into ads, and having just such mind space, could potentially be funding this scale power law where they could be one of the only two or three models out there. I don't know if it's two or three or four, but it's not 10. And we don't really know what's going to come out of China and how that's going to relate to the Western world's ability to use those models. So I think the mindset among all these companies is, you know, you have to be one of those two or three and do anything you can to do that. I think I heard that Larry Page said something along the lines of he would rather go bankrupt than lose, which is a crazy quote from someone in charge of one of the largest companies in the world. There's definitely a scale advantage.

Ari Paparo [00:17:49]:
And I don't know about the chips in particular, maybe Dan can take that one. But it seems like they, they definitely believe that there's only going to be a small number of winners.

Alex Kantrowitz [00:18:00]:
Yeah. Dan, what do you think?

Dan Shipper [00:18:01]:
Yeah, I think if you're talking about winners in AI, and specifically about foundation model companies, absolutely, Google and OpenAI are becoming two big poles. But the market is kind of complex. So for example, Anthropic, which we haven't mentioned yet, they were at a billion in revenue at the end of 2024 and now they're at $5 billion. They're growing insanely fast. And the reason for that is they're winning really well in the programming market. If you're a programmer and you're on the edge of AI, you are using Claude Code and you're using Opus or Sonnet to do your work.

Dan Shipper [00:18:45]:
They're competing with ChatGPT and GPT-5, but they're really giving them a run for their money, and I think their models are extremely good at long-horizon agentic tasks, whether that's in programming or other work. They're really doing a good job. For us internally at Every, in production we are using Gemini and GPT-5, and that's because they have the best combination of power and price, so it's easy for us to serve AI to large numbers of users. But when we're building, we are using Anthropic and GPT-5. So it depends on where in the stack you're looking to know who's going to be a dominant player.

Alex Kantrowitz [00:19:34]:
Let's talk about the Microsoft angle quickly. Brian, we'll toss to you in a moment on the Microsoft angle. This is also from MG Siegler: Do you get the sense that Microsoft is going to regret letting this relationship go so far south? Yes. They still have their ownership stake, or will presumably, in OpenAI, and they're going to get some access to OpenAI's technology. But they might be in this weird position where they'd perhaps not be happy but relieved if it all goes south for OpenAI, and that they made the right bet to take a step back and outsource all of OpenAI's needs to others. What do you think, Brian? Is it a mistake for Microsoft to have let OpenAI go in this way?

Brian McCullough [00:20:16]:
I don't think so, really. That's a separate question, because, and Dan can jump in on this, whether or not what he just said about the large language models, if that becomes a commodity technology, if that's the case, then I'm not sure that it matters. But it would matter in terms of the strategic relationships, as we've just been talking about, between the chip makers and large language models and things like that, if you want to get into the idea of whether this is a bubble or not, or if you want to save that for the next segment. What does Microsoft care about? Microsoft actually makes its money by selling into the enterprise the software tools that allow the enterprise to work. Do we believe, as Dan says for Dan's business, that the adoption of this technology is going like this hockey stick sort of thing? We've been hearing conflicting things, not from Microsoft itself, but from ancillary people, about the adoption.

Brian McCullough [00:21:30]:
Now, everybody's testing it, but we've heard those stories about, well, people adopt AI, but it's not actually driving ROI, et cetera, et cetera. If we believe that the people like Microsoft are seeing that, yes, this adoption is happening, and we're seeing from places like Databricks, and from a bunch of other enterprise players like Salesforce, et cetera, that they swear, at least they swear to Wall Street, that adoption of this by their customers is driving revenue for them. If that's the truth, then this is what is driving the decision-making by these huge companies: we're willing to invest trillions of dollars over the next several years to build this up because we believe that this is the next generation of compute. If that's the case, then it's not a bubble. But I'm not sure that we know for sure that Microsoft believes that it needs OpenAI to do it, because if it becomes a commodity compute product, then they can do it in lots of different directions.

Alex Kantrowitz [00:22:52]:
Dan, what do you think about the Microsoft OpenAI situation?

Dan Shipper [00:22:56]:
I think I basically agree. I think the relationship has always been a little bit at odds because they have somewhat different incentives. Microsoft wants to be able to offer the best AI to its enterprise customers. They ideally want that AI to be branded as Microsoft. And they used OpenAI to get there and gave them a bunch of compute in order to train the models and all that kind of stuff. But OpenAI also wants those relationships. And so there's this weird tenseness that you can see whenever there's a new model launch, because Microsoft wants to promote that you can get it as a Microsoft Copilot type thing, but it's also available as ChatGPT or, you know, as a model in Azure.

Dan Shipper [00:23:38]:
So, yeah, it's a tense relationship. Are they going to regret it? I don't think so. I think that they're making the decisions with the game board that they currently have. And I think they want a broad range of models, and I think that makes a lot of sense.

Alex Kantrowitz [00:23:55]:
Right. And there's also, I mean, it makes sense to suggest that Microsoft just could not provide what OpenAI needed unless they would facilitate this deal with Nvidia. I mean, that's always where this was heading as scaling increased, as demand increased. And so now you have what you have, which is that OpenAI has moved directly to those that can build the data centers, to Oracle, to Nvidia, and said, you know, Microsoft will remain partners, you'll have a good chunk of what we do, but unfortunately we need to go to the places that can actually develop what we need them to develop. All right, we're going to take a quick break. Let's quickly go around the horn. We have Ari Paparo here.

Alex Kantrowitz [00:24:39]:
He is the host of the Marketecture podcast. He's also the author of this great book, Yield: How Google Bought, Built, and Bullied Its Way to Advertising Dominance. So, Ari, great to have you.

Ari Paparo [00:24:52]:
Thanks for the book plug. I appreciate it.

Alex Kantrowitz [00:24:54]:
That's right.

Ari Paparo [00:24:55]:
I'm really excited to be here.

Alex Kantrowitz [00:24:56]:
Yes. Good to have you. I'm also on the back of this book, so I appreciate you listing the blurb there. We also have Brian McCullough. He's here from the Tech Brew Ride Home podcast, formerly the Techmeme Ride Home podcast. A TWiT favorite. Brian, good to see you.

Brian McCullough [00:25:12]:
Good to see you as always, Alex.

Alex Kantrowitz [00:25:13]:
How long has it been Tech Brew Ride Home?

Brian McCullough [00:25:17]:
I think two months now. You and I have talked about this offline: I've never had an actual media company behind me for anything that I've done in podcasting, for almost 15 years. So, like, the most recent episode that we did this weekend, if you look it up on YouTube, I was in a studio, an actual studio, with cameras and people recording me and all that stuff. So I'm happy to be with Morning Brew and Tech Brew Ride Home.

Alex Kantrowitz [00:25:50]:
Love it. And speaking of real-deal companies, Dan Shipper is building one. He's the CEO of Every and the host of the AI and I podcast. Dan, great to see you.

Dan Shipper [00:26:02]:
Great. Good to see you too. Thanks for having me.

Alex Kantrowitz [00:26:04]:
Dan, you have one of my favorite reviews of AI that I've read before. It was your review of, I think it was OpenAI's ChatGPT or O3 model. Why don't we put a pin in that and pick it up right after this break?

Leo Laporte [00:26:20]:
Thank you, Alex. Hello, everybody. I have returned from vacation just to tell you about my mattress. This portion of this Week in Tech is brought to you by Helix Sleep. And right about now, on my little trip back east, I'm missing my mattress quite a bit. Your mattress is so much more than just a place you go to sleep, right? Movie nights with your partner, little Netflix. And morning cuddles with my kitty cat, Rosie. Or even if you got a big dog, you know, cuddling up.

Leo Laporte [00:26:54]:
It's great. Your wind down ritual after long days. I love curling up with a good book. Reading that book for Stacy's book Club right now. Love it. Spend hours curled up with my Kobo on my Helix Sleep mattress. Your mattress is so much more, so much more than just a place to sleep. It's the center of the best parts of life, if you ask me.

Leo Laporte [00:27:16]:
But if you are waking up in the middle of the night in a puddle of sweat or you get up in the morning and your back's killing you because your mattress looks like a U, or if you're feeling every toss and turn your partner makes. The other night there actually was an earthquake. And I did wake up when I felt the earthquake, but the night before I woke up when the cat jumped on the bed. You don't want that, right? You don't want that. These are classic mattress nightmares. Helix Sleep changes everything. No more night sweats, no back pain, no motion transfer. I have never slept better in my life.

Leo Laporte [00:27:52]:
Get the deep sleep you deserve. It's so comfortable. We realized after a little research. I read an article that said you should replace your mattress every six to 10 years and we realized our mattress was eight years old. This is some months ago. And we started the search for a replacement. That's when I came across a review that said this is a real buyer. Five stars.

Leo Laporte [00:28:17]:
I quote: I love my Helix mattress. I will never sleep on anything else. I thought, well, that sounds extreme. But now I could write that. I could. I love it. Time and time again, Helix Sleep remains the most awarded mattress brand. When we were doing the research, we saw they were Wired's best mattress of this year, 2025. Good Housekeeping's Bedding Awards 2025 awarded Helix Sleep Premium for plus-size support.

Leo Laporte [00:28:45]:
Okay, I'm a little on the plus size. GQ Sleep Awards 2025: best hybrid mattress of this year. New York Times Wirecutter 2025: featured for plus-size support. And Oprah Daily's sleep awards for 2025: best hotel-like feel. You know that feeling when you're in a nice fancy hotel and you get in bed and you go, oh, this is nice. I could live with this every night. Now I do. And you can too. Go to helixsleep.com/twit for 25% off sitewide during the Labor Day sale. Extended. That's helixsleep.com/twit, 25% off sitewide now.

Leo Laporte [00:29:31]:
This offer ends September 30th. And make sure you enter our show name after checkout so they know we sent you. If you're listening after the sale ends, well, still be sure to check them out: helixsleep.com/twit. There are always some great deals there. Helixsleep.com/twit. Now, if you'll excuse me, I'm ready to take a nap. Back to you guys.

Alex Kantrowitz [00:29:55]:
Thank you, Leo. And we are back here on TWiT. My name is Alex Kantrowitz. I'm the host of the Big Technology Podcast, guest hosting for Leo this week. I'm joined by a great panel: Brian McCullough of the Tech Brew Ride Home podcast, Dan Shipper of Every is here with us, and Ari Paparo, the host of the Marketecture podcast. First segment, we talked all about these big numbers and the investments and sort of the power plays that we're seeing in the AI industry.

Alex Kantrowitz [00:30:23]:
And now we get to that question: are we in a bubble? Because, well, Sam Altman says so. So if he's saying it, who are we to disagree, right? But let's get serious about this conversation, because as I was teasing before the break, we've definitely seen this invention, right? This amazing invention, which is generative AI. Dan, I was talking about your review of O3. O3 was really a spectacular model. I'm sad not to see it in ChatGPT anymore. It was a great, really great thought partner. OpenAI then comes out with GPT-5. It's been a disappointment for a lot of people.

Alex Kantrowitz [00:30:59]:
I know it's pretty good on the coding front, but I'm seeing the sort of disappointment in the latest releases. We know Meta's also had a disappointing release; that's why they've had to spend a billion or two billion on talent. And the list goes on of companies that have tried to make models bigger and the improvements haven't been there. Is it possible that we're in this moment right now where investors saw the promise of this great technology and all of a sudden started to bet that it was going to go to infinity, perhaps, and are literally putting, as we just talked about in Nvidia's case, all of its profits into this? Is this a mistake? Are we at the beginning of a bubble? What do you think, Ari?

Ari Paparo [00:31:47]:
Well, you have to ask yourself, what's the bubble? Is it the unrealistic expectations versus the time period in which it pays off, or is it not going to pay off at all? Because when you start speculating on the payoff, it really is just almost unimaginably large, even without AGI. If you just get to the point where you're replacing individual knowledge workers who earn $150,000, $250,000 a year with AI. I'm not saying, we'll talk about unemployment, but if you just think about the ROI of potentially enhancing the work of a knowledge worker, or replacing the work of a knowledge worker, that ROI is just astounding. It's equivalent to going from craftsmanship to the assembly line 150 years ago. When you talk about a bubble, it's like, okay, these companies may have valuations that are skyrocketing, and Nvidia might be valued properly but for a ridiculous amount of pent-up demand. But if the ROI is within five years, maybe that makes a ton of sense. If the ROI happens within 10 years or 15 years, if we get a delayed ability to utilize these models for real economic value, then we might see some problematic, trough-of-disillusionment-type years where some things come crashing down.

Ari Paparo [00:33:13]:
So that's how I look at the problem.

Alex Kantrowitz [00:33:16]:
Dan, you seem like you're going to be on the other side of this.

Dan Shipper [00:33:20]:
I mean, no, I think that was a really reasonable take. I think the bubble question is a tough one for me because it assumes a lot of different things. One, that bubbles are bad, and I think we're seeing a lot of infrastructure build-out and a lot of people using an entirely new technology that is, I think, pretty amazing. It's changed my life, it's changed the company that I can build. And it's so clearly the case that it's not going to go on forever. The way that markets work, the way the technology works, you kind of tap out the thing that you know how to do and then you need a little bit of time to figure out the next thing. Markets always get overexcited.

Dan Shipper [00:34:08]:
All of the big executives from all these companies are like, AGI is coming, it's going to replace all jobs or whatever, which I personally don't like the way that they talk about it. And it can simultaneously be true that the technology that they are building is extremely transformative and is improving really, really rapidly. It's just that AGI is an extremely hard thing to build, and there's actually way more complexity than it seems once you start to really think about it. And so do I think that we're in a bubble? Yeah, I think valuations are frothy. And the rhetoric is pretty sky-high. And I think the underlying substance is also there. It's actually an incredible technology that's improving rapidly. And this is just how markets work.

Brian McCullough [00:34:56]:
Here's the thing, Alex. On a long enough timeline, everything is a bubble. It doesn't matter what market you're in, if you say 30 years is your timeline, 50 years is your timeline, even 10 years is your timeline. Like, people remember the dot-com bubble. There were two bubbles in that era. It wasn't just Pets.com, it was also the build-out of fiber and things like that, where capitalism built out a bunch of infrastructure, and they weren't wrong to do it.

Brian McCullough [00:35:33]:
It's just that at the time they did it, they did it too fast and too quickly. Then the reason that Facebook and a bunch of folks could come along behind and create the things that became profitable was because they built out too fast and too soon and overdid it. The thing that rhymes to me right now, and maybe Ari can dispute me on this, is the prisoner's dilemma sort of aspect of it. It reminds me of the late 90s, in the sense that if you are a CEO of any company. I did a story this week about, what was it, one of the Chinese companies that's hitting a five-year high because they announced, hey, not only are we going to keep spending on AI infrastructure, we're going to spend more or whatever. The prisoner's dilemma, if you are anybody that is a major tech company, or not even a tech company, anybody, it's like the 90s, where it's like you have to have an Internet strategy, you have to have an AI strategy or whatever. The prisoner's dilemma is, if you don't say that you have AI religion and you're building out and you're going to spend everything on AI or whatever, you get killed. Because the market believes that if you're not moving faster than everybody else in your cohort, you will be left behind and your stock will get killed and you will get replaced as the CEO or whatever. So the incentive structure right now, until this changes, and this is how bubbles burst, is that you've got to run, run, run, spend, spend, spend, et cetera, et cetera.

Brian McCullough [00:37:17]:
And so there's nothing that is going to stop that at the moment. That is a classic bubble situation, because whether, right now, inside of each company, the underlying bottom-line metrics and revenue are there doesn't matter, because you have to project to the outside world that they are there. Now, again, I'm going to say back to Dan, I know I was saying it to Ari, but I also keep hearing that internally people at companies are saying, listen, when we add AI to our products, it is growing sales, it is growing usage, it is growing revenue. So I don't know that it's exactly only a prisoner's dilemma where it's like we have to project that we have AI religion. I also believe that a lot of companies are seeing that this is a new paradigm, that they are seeing tangible results from end users, from customers or whatever. And if that's the case, then it's not a bubble.

Brian McCullough [00:38:25]:
But it could be a bubble if you just get ahead of the reality of how fast it can happen.

Alex Kantrowitz [00:38:31]:
It could definitely be a bubble if...

Brian McCullough [00:38:33]:
You're... both things, both things can be...

Alex Kantrowitz [00:38:35]:
True. Progress and overinflated valuations and an absurd amount of money that will never be paid back. Let's just talk a little bit about what needs to be paid back, and then we're going to go to you, Ari. This is from the Wall Street Journal story about whether AI spending will ever pay off: David Cahn, a partner at venture capital firm Sequoia, estimates that the money invested in AI infrastructure in 2023 and 2024 alone, so this is before this big partnership with OpenAI and Nvidia and Oracle, that alone requires consumers and companies to buy roughly $800 billion in AI products over the life of these chips. All right, and by the way, those chips depreciate, so they'll last like three or four years. So these returns need to happen fast. Also, Bain & Co. this week estimated that the wave of AI infrastructure spending will require $2 trillion in annual AI revenue by 2030.

Alex Kantrowitz [00:39:33]:
By comparison, that is more than the combined 2024 revenue of Amazon, Apple, Alphabet, Microsoft, Meta and Nvidia, and more than five times the size of the entire global subscription software market. I'm optimistic about this technology. I like this technology. I use it every day. But you can be in my position, and I think in all of our positions, and say this is legit, and then also read those numbers and say there's going to be a problem. And the reason why that matters is because it will eventually trickle into these companies' ability to build this stuff. Ari?

Ari Paparo [00:40:11]:
Yeah, I think on the Hard Fork podcast this week they pointed out the interstate highway system was a $300 billion investment, so we're investing a lot more than that. I wanted to build on what Brian said about the investment here as sort of a FOMO thing: you have to do it or else you're in trouble. There's one big difference in this versus other so-called speculative bubbles, which is that we don't have as much of a retail investor problem this time, yet. In the dot-com bubble in particular, one of the biggest problems was that people felt they could invest as retail investors, or even institutional investors, in these stocks and they would instantly make money because they were constantly going up. And that was a real problem for a lot of companies. Whereas right now, obviously we have high-flying stocks, but Google's worth a couple trillion dollars anyway. Whether they're over-investing in AI or not, at this point today it's probably not going to take a trillion dollars off their market cap.

Ari Paparo [00:41:10]:
If we get to the point where these stocks are becoming investment vehicles because people perceive it as no lose as it will pay off no matter what. I'm going to put a mortgage on the house to buy Nvidia shares. I think that's when we get into the really dangerous territory.

Alex Kantrowitz [00:41:30]:
Let's play a little game at this point where we rate the statements of AI CEOs. We play this on Big Technology Podcast every now and again. It's called Hype or True: is this hype, or is it true? Let's go with Sam Altman this week, talking about the investment with Nvidia. He says the stuff that will come out of this super brain will be remarkable in a way I think we really, we don't really know how to think about yet. The stuff coming out of the super brain will be remarkable in a way I think we don't really know how to think about yet. Is this hype, or is this just the cold hard truth?

Dan Shipper [00:42:14]:
What's that super brain he's talking about?

Alex Kantrowitz [00:42:17]:
I think it's the thing that he's trying to build, this data center with Nvidia.

Dan Shipper [00:42:20]:
I see.

Alex Kantrowitz [00:42:22]:
To say, yeah, go ahead. You go ahead.

Dan Shipper [00:42:23]:
Yeah, say the quote one more time.

Alex Kantrowitz [00:42:27]:
I'm glad we're repeating it because to me it's ridiculous. I'm giving away what I think my answer is. I was gonna say that's great.

Alex Kantrowitz [00:42:33]:
Stuff that will come out of this super brain will be remarkable in a way I think we don't really know how to think about yet.

Dan Shipper [00:42:42]:
That does seem very true to me, for a specific reason. The way that I interpret that is: when we build these models, we sort of throw a bunch of data and a bunch of compute into a soup, and then we do a bunch of training on it, and then something comes out. And when something comes out, it actually takes a really long time to understand what it is and how it works and what it does. And even when they release it, it still takes some time, because there are so many different things inside of models like this, and there are so many little corners and crevices that it's impossible to know all the things about it until you just use it a lot, until the world uses it. And so I think he's saying something that is just basically, factually true.

Alex Kantrowitz [00:43:34]:
But if somebody came to you and said, give me all of your profit over the last couple of years, Dan, and I will guarantee you that you will have something that's so wonderful that we can't even wrap our minds around it yet, you would say, is it.

Dan Shipper [00:43:51]:
Sam Altman at the head of OpenAI? I'd probably say yes. I mean, like, proof's sort of in the pudding.

Alex Kantrowitz [00:43:58]:
He's such a good fundraiser, right?

Brian McCullough [00:43:59]:
Yeah, no, but actually, Alex, what you just said is the point: we will look back in history at Sam Altman and, you know, you call certain people business geniuses, like Jeff Bezos or all the Microsoft folks or Mark Zuckerberg or whatever, but Sam Altman is the greatest fundraiser of all time, because he's like, hey, I need a trillion dollars, and I need a trillion dollars before you are going to see dime one. And by the way, once you see dime one, you're going to see a trillion dimes, right? So it is kind of genius, his ability to do that. And if he's right, then he is the greatest business person of all time. But look, how did you frame it? It was hype versus what?

Alex Kantrowitz [00:44:54]:
Truth. Cold hard truth.

Brian McCullough [00:44:56]:
Sam is hype. It's all hype, because until a truth comes around where Sam becomes our great overlord, the marshal of the overmind that controls everything, it's only hype, because that's the only way he can get there.

Dan Shipper [00:45:19]:
Okay, I feel like that's pretty unfair. Only hype is pretty unfair.

Alex Kantrowitz [00:45:25]:
Well, Brian, 30 seconds to respond.

Brian McCullough [00:45:27]:
It's only hype. It's only hype because that's the only way that it can happen. When I say only hype, I don't mean that it's all BS. I'm saying that that's the only method to get there: to keep promising that everything is going to be infinitely better, because I need a trillion dollars.

Alex Kantrowitz [00:45:44]:
Okay. We have a great setup here. We have the prosecutor and the defense here, Dan and Brian. Now we take it to the judge. Ari, your decision.

Ari Paparo [00:45:55]:
Yeah. I'm going to put this statement in what people like to call non-falsifiable. This thing we're building will not be able to be comprehended. Is that what it boils down to? I mean, that's three vague statements on top of each other that build on one another, to put a point on it.

Alex Kantrowitz [00:46:14]:
It's going to be so marvelous you can't wrap your head around it yet.

Brian McCullough [00:46:18]:
Okay.

Ari Paparo [00:46:18]:
Yeah, right. Well, I'm going to go no, just on the technicality that the statement cannot be analyzed in any normal way.

Alex Kantrowitz [00:46:27]:
Okay. So you're going hype. No is hype in your...

Ari Paparo [00:46:30]:
Yeah, that's like the definition of hype. That's like Charlie and the Chocolate Factory-level hype.

Brian McCullough [00:46:36]:
Even if Sam wins, you're not going to be able to comprehend that he won is what you're saying.

Ari Paparo [00:46:40]:
If he'd sung it and danced to it, that would have sealed the deal for me.

Alex Kantrowitz [00:46:46]:
Golden tickets. He's getting the golden ticket and Jensen is handing it over. I want to commend Dan for making a very convincing argument for true. I didn't know which way Ari was going to go, but we'll decide this one as hype. Dan, final word. Final word to you on this one.

Dan Shipper [00:47:04]:
Final word: I stand by what I said.

Alex Kantrowitz [00:47:07]:
Oh, wrong.

Dan Shipper [00:47:08]:
I stand by what I said. And I guess I'll add to that, if we had to pick one or the other, I think I stand by what I said. I agree with Brian's point ultimately: what he said is true for the reasons that I laid out, and his job is to paint a picture so that he can get the money to do the thing that he wants to do. And he's not sure if it's going to work, because no one's ever sure if it's going to work, and he's willing to take more risks than most people ever...

Brian McCullough [00:47:37]:
Let me interject real quick. What about this, Dan: what if Sam isn't the one that makes it happen? What if Sam isn't the one that wins? Like, ultimately AI can win, but what if it's not OpenAI that wins? So then what?

Dan Shipper [00:47:56]:
Mean, it's, it's hype.

Brian McCullough [00:47:57]:
So then it's hype. It's the sort of, you know, I gotta keep hustling until people realize there are no clothes underneath or something.

Dan Shipper [00:48:08]:
Yes. But I feel like the no-clothes-underneath thing is the thing that I take issue with, because obviously there are clothes there. They may not be the ultimate Iron Man suit or whatever that he's promising, but it's at least a suit of armor. There's something underneath. He's building something. Right? It's not a fraud.

Alex Kantrowitz [00:48:30]:
Let's continue our game. Oompa Loompas, we're going all chocolate waterfall today. Let's continue our game. Speaking of the clothes, the question is: what are the clothes? Jensen Huang, the Nvidia CEO, was giving an interview this week, and he was asked by Brad Gerstner whether we're going to see a trillion dollars in AI revenue by 2030. Jensen says, you know what? We already are, and we're seeing it today in 2025.

Alex Kantrowitz [00:49:09]:
This is a direct quote from Jensen: The hyperscalers, they went from CPUs to AI. Their entire revenue base is now all AI-driven. You can't do TikTok without AI. You can't do YouTube Shorts without AI. You can't do any of this stuff without AI. I think what Jensen is doing here, and forgive me if I'm getting this wrong because I'm going to try to interpret his words, is saying that all the revenue coming in from cloud today, all the revenue, let's say, coming in from Facebook advertising and Google or YouTube advertising today, is AI revenue because they have AI baked in. Now, I don't want to lead the witnesses here, because we're going to give you all an opportunity to weigh in in this game of Hype or True.

Alex Kantrowitz [00:49:52]:
But to me it's like, yeah, a lot of this stuff was happening either in parallel with or before ChatGPT. And so for you to claim all cloud revenue and all big tech advertising revenue as AI revenue, versus the revenue we're seeing from generative AI, which would be implementation of Anthropic and OpenAI models in the enterprise, which would be Pro subscriptions to ChatGPT and Claude; to claim everything as AI, even though you know full well that the thing that has sparked the spending increase has largely been the generative AI piece... I'm just going to vote. I'm saying it's hype. Let's go. I'm going to give Dan the last word on this one. So let's go to Brian. Brian, hype or true?

Brian McCullough [00:50:37]:
All revenue is AI revenue, 100% hype and clever branding. Because in that case then since Google has been around, it's been all AI, which it has been. And since Facebook moved from a chronological feed to whatever algorithm they use, it's AI. So yes, clever branding, but hype.

Alex Kantrowitz [00:50:58]:
Ari, up to you.

Ari Paparo [00:51:00]:
Yeah, I do business in the advertising sector, and it's been using machine learning for 10 years, and every company is now an AI company even though they didn't change a thing. So they're using, you know, advanced math, but it's pre-transformer, non-AI math, and now they're calling it AI. I think that's what's going on here: anything using math and algorithms is being sort of grandfathered in as AI, and it's hype.

Dan Shipper [00:51:26]:
Dan, I will say I think I need more of the quote and the context in which he said it, and how he said it, to really say, because I think it's a legit point.

Ari Paparo [00:51:35]:
Point.

Dan Shipper [00:51:35]:
It's absolutely a legit point that people forget when they're like, is it possible that AI could even do any of this stuff? And to be like, you're already using AI, you just don't call it that. If he said it that way, it is a real point that I totally agree with. If he said it in a more, you know, car-salesman way, then it's hype.

Alex Kantrowitz [00:51:53]:
No, I think he said it in the first way. And Dan, let me put this to you: do you think, if we were just seeing AI's impact on recommendation systems, for instance, and in advertising, we would be seeing these $100 billion investments, or the many, many billions of dollars? We'll sort the fact from the fiction here in AI, but I would argue that generative AI is the thing that's sparking all this. That's the ChatGPT category, as opposed to the recommender agents. And when it comes to ROI, that's going to be the area where we're going to need to see the revenue in order for this to make sense.

Dan Shipper [00:52:34]:
Definitely, I think that's definitely true. The other AI that we're talking about is AI from a previous generation that we call machine learning that sparked the tech boom in the 2010s.

Brian McCullough [00:52:47]:
The question is, though, the capex. Okay, everybody has already been spending on data centers, on cloud computing, et cetera, et cetera. But we haven't seen anything like this spending: you know, our capex was $5 billion three years ago and next year it's going to be $50 billion. That's the question: what is the timeline? That's the problem. What is the timeline to justify a 10x change in what you're spending, and what does it need to be? It's so funny to me, because remember there was a long period of time where everyone was so obsessed with the fact that all of these big tech companies were sitting on all this cash, sitting on their hands. They're not reinvesting. What are we going to do? It's not helping the economy that there's hundreds of billions of dollars that they're just sitting on and doing stock buybacks or whatever. Okay, now they're spending it. And there have been stories about how there's a certain percentage of the US economy that is maybe being propped up just by this capex spend.

Brian McCullough [00:54:01]:
But the real problem, in terms of whether it's a bubble, is: what is the market, and, for these companies, what is the timeline on which everyone should expect a return on investment, such that this is not just burning cash in the backyard?

Alex Kantrowitz [00:54:24]:
That's a great question. I want to shift gears really quickly, because there was another interesting thing that happened this week, I don't know if any of you saw it. There was a conversation between Dwarkesh Patel, the podcaster, and Richard Sutton, this AI luminary. And Sutton is somebody who came up with this concept called the bitter lesson. And the bitter lesson is basically: if you add more compute and data into an AI system, it will just get better, infinitely; there's really no limit on it. Now, Dwarkesh basically asked Sutton point blank, is that true for large language models? And he basically said not really, because, and I'm paraphrasing here, he believes more in reinforcement learning, where you just give an AI system a goal and then you give it a ton of compute to go figure it out, as opposed to the way that AI has been trained.

Alex Kantrowitz [00:55:24]:
These large language models, which of course incorporate reinforcement learning, are trained in a different way, with self-supervised learning at the core. It really is interesting because it plays into this conversation that the entire industry is having, which is: are you going to continue to have gains, are you going to continue to be able to build better models, by making these systems larger? And here's Sutton, the guy who came up with this bitter lesson idea, basically saying maybe not. That obviously would have real implications if scaling is dead. We've asked this question a lot, as has anyone who's been watching the AI conversation. We have just a few minutes left of this block, but I want to go around the horn and get your perspectives: are we at the end of scaling, and do Sutton's comments matter? Start with Dan?

Dan Shipper [00:56:15]:
Well, I want to start back at the bitter lesson, because there's a slight tweak on what the bitter lesson is. It's not necessarily that more data and more compute scale infinitely; that's something else, the scaling laws that OpenAI discovered. The point of the bitter lesson is that all of the clever architectures that we've come up with to try to make machines smarter have failed when put against a neural network that we just add more data and more compute to. And so we should stop being so clever, stop trying to make complex little algorithms or whatever, and just scale the data and compute and find ways to make the neural networks more efficient to do that. As for whether or not scaling is dead, it seems pretty clear to me that people inside of the big labs think that there's more juice to get out of the, you know, lemon in terms of scaling. And most of the hype, or most of the thought, is going into post-training right now and reinforcement learning. So there is definitely something there. We've figured out, simultaneously, that we can scale different parts of the stack.

Dan Shipper [00:57:29]:
So now we are scaling inference-time compute, so how much time the model spends thinking, instead of how much compute we spend training it, which is another curve that we can ride. And I think it's also really clear to me that these models are deeply, deeply inefficient. And I think you can see that with a lot of the open source models that are coming out that take something that took billions of dollars to train or whatever, and they can do it with $10 million. And I think over time we will find that and unleash a lot of extra capacity, because it'll be much cheaper to do this kind of thing. We'll have many more startups competing to find new interesting things to do, and that will also create a lot of new demand, because we'll be able to use them for more things.

Alex Kantrowitz [00:58:13]:
Ari, Brian, any thoughts on whether this is the end of scaling, or...

Brian McCullough [00:58:18]:
Ari, go first.

Ari Paparo [00:58:20]:
I'm just hesitant to ever say it's the end of X. You know, technology tends to compound, one invention after another, and without, you know, trying to prognosticate exactly where that value will move in the whole training, inference, et cetera, I'm just a tech optimist. I do think one thing that is meaningful here is that most of the LLMs were trained on sort of a static corpus, the Internet, and that seemed enormous. And then they ran out of stuff, and they said, oh no, now we're going to have slop training and it'll be a dead Internet that keeps training. But as the models are actually used in real-life environments, they also collect data. And so they become sort of a self-perpetuating machine. The more a Tesla self-driving car drives around, the more data it collects, which then trains the model.

Ari Paparo [00:59:10]:
And I think that is likely to be the case in all the applications that scale where they collect their own data. And that's another kind of avenue for growth.

Alex Kantrowitz [00:59:20]:
Brian, any final thoughts?

Brian McCullough [00:59:21]:
Yeah, that's beautiful, because I was going to say, number one, there is no such thing as a perpetual motion machine in physics. But at the same time, Moore's law still exists, because even though in theory it should have died via physics a long time ago, we keep squeezing efficiencies out of it. And with this technology, we haven't even begun to squeeze the efficiencies out of it. So there's a lot more that we can do. So, hype in the sense that, yes, I don't believe in anything that can go to infinity, except for time, I suppose, but it won't get there on a smooth path. There will be bumps along the road. And one thing that I want to say about the bubble, and this is not quite what we were talking about, because we're talking about technology, but one of the things that kills bubbles, or at least pops them in an economic sense, is when debt gets involved. And one of the problems that we need to start thinking about in terms of the data center build-out and the capex build-out is the fact that people are now starting to turn to debt to make this happen.

Brian McCullough [01:00:30]:
Because up to this point we had all of these big tech companies that had all of this cash sitting on their balance sheets, and they're like, yeah, let's just spend it. And now a lot of the build-out is going to turn to debt. And when people can't pay back debt, that's when bubbles burst, and that's when it can affect the larger economy.

Alex Kantrowitz [01:00:50]:
Okay, well, on the other side of this break, we're going to talk about actual AI products coming out and actual evaluations that OpenAI has introduced to see whether the models are doing economically valuable work. We're here with Brian McCullough from the Tech Brew Ride Home podcast. Dan Shipper is here; he's the CEO of Every and host of AI and I. And Ari Paparo, the host of the Marketecture podcast and also the author of the great book Yield: How Google Bought, Built and Bullied Its Way to Advertising Dominance. We'll hear more about all three and then we'll talk about the actual economic activity in the world of AI right after this.

Leo Laporte [01:01:31]:
We'll be back to Alex Kantrowitz and his great panel in just a moment. But I want to tell you about our sponsor for this segment of This Week in Tech: ExpressVPN. I got a question for you, and I think I know the answer. Have you ever browsed in incognito mode, you know, or private browsing? It's probably not as private as you think. In fact, Google recently settled a $5 billion lawsuit in which they were accused of secretly tracking users in incognito mode. Google's defense? Incognito does not mean invisible, you know.

Leo Laporte [01:02:08]:
In fact, the truth is, all your online activity is still 100% visible to third parties unless you use ExpressVPN. It's the only VPN I use and trust. And you better believe, when I go online, especially when I'm traveling in airports and coffee shops and other countries, ExpressVPN is my go-to. How does ExpressVPN unblock content? You see, Netflix hides content based on your location. When you go to netflix.com on your device, it says, well, where are you? And you say, oh, I'm visiting England right now. And it says, well, you can't see the British Netflix stuff, you can only see your stuff, unless you use ExpressVPN. It lets you change your online location, so you control where you appear to be.

Leo Laporte [01:02:57]:
Even if you're in the US, you could pretend you're in Great Britain, and suddenly you're seeing the Netflix stuff from the UK. You tell Netflix, I'm in Paris right now, even if you're in Petaluma. They have servers in over 100 countries, so you can gain access to thousands of new shows and never run out of stuff to watch. It doesn't just work with Netflix; it works with many other streaming services too: Disney Plus, BBC iPlayer, and more. One of the reasons ExpressVPN works so well for this is they invest the money to rotate IP addresses so that these streamers can't really tell that you're using ExpressVPN. That's very important. ExpressVPN is the best VPN also because it hides your IP address.

Leo Laporte [01:03:40]:
It reroutes all your traffic through an encrypted tunnel. People see not your IP address, but an ExpressVPN IP address, which others are using and which rotates. It's very hard, practically impossible, for them to track you based on that. And I love how easy it is to use. I get to the airport, I fire up the app, I click one button, and I'm protected. It works on everything: phones, tablets, laptops, even routers.

Leo Laporte [01:04:05]:
So you could protect your whole house. You could stay private at home and on the go. And of course, ExpressVPN is rated number one by the top tech reviewers like CNET and the Verge, and me. I love it. Protect your online privacy today. Visit expressvpn.com/twit. That's E-X-P-R-E-S-S-V-P-N dot com slash twit to find out how you can get up to four extra months free. Expressvpn.com/twit. We thank them so much for supporting This Week in Tech. Now back to the show.

Alex Kantrowitz [01:04:39]:
Thank you very much, Leo. Let's take a moment and meet our panel. So, Dan Shipper, you're a first-time guest here on the TWiT network. You're the CEO of Every. Every's been around for five years or so.

Dan Shipper [01:04:51]:
Five and a half years.

Alex Kantrowitz [01:04:52]:
Amazing. I remember speaking with you when you were just launching, at the beginning of the pandemic. So was Big Technology. Here we are in 2025.

Dan Shipper [01:04:59]:
I know it's fun to come up together.

Alex Kantrowitz [01:05:02]:
That's right. Tell me a little bit about, or tell us all about, the AI and I podcast.

Dan Shipper [01:05:05]:
On AI and I, I talk to the smartest people in the world about how they use AI in their work and in their lives. And so we always start with the practical, like how are you actually using it? And then we get into a little bit more of the philosophical and psychological: how has it changed you, and how has it changed how you see the world?

Alex Kantrowitz [01:05:23]:
Any answers that have really stood out to you or tips that you can share with us?

Dan Shipper [01:05:29]:
Answers that have stood out to me. I mean, I think generally this is true for me. The thing that I like to explore is, I think there's a big fear that AI is going to take away our creativity, take away our ability to think, take away our human agency. And the thing that I experience, and the thing that I think a lot of the guests that I bring on experience, is really the exact opposite: AI as an enhancer for our creativity, our thinking, and our sense of self. And it's just sort of about how you use it, and it's really fun to explore.

Alex Kantrowitz [01:06:03]:
Very cool. We're also here with Ari Paparo. Ari, you and I go way back. I started my career really in ad tech, as did you. You had a much more illustrious career. You were early on, you were part of the DoubleClick team in the early days of Google, helped Google figure out how to sell ads across the Internet.

Ari Paparo [01:06:23]:
Yeah, I've been doing ad tech for like 20 years, at Google, DoubleClick, AppNexus, which is now part of Microsoft. And then I had my own company, Beeswax, which we sold to Comcast in 2021. So ad tech is kind of this interesting sector where it's really complicated and extremely high tech, but doesn't get a lot of headlines. It's sort of behind the scenes. Sometimes a lot of people don't like it. A lot of people sort of say, oh, those are cookies, that's the spyware and, you know, surveillance capitalism and all that stuff.

Ari Paparo [01:06:57]:
But it's my life and I enjoy it. So now I'm podcasting about it and writing about it, and just being a member of the community.

Alex Kantrowitz [01:07:07]:
Very cool. And you were down in D.C. watching Google lose its antitrust case that found that Google had a monopoly around its advertising and ad network solutions.

Ari Paparo [01:07:18]:
Yes. So Google has three different antitrust problems: they have the Play Store, they have Search, and then they have ad tech. And I've been covering the ad tech trial probably more comprehensively than anybody. It's in Virginia, in the Eastern District of Virginia. The search trial was in D.C. And I was there last fall, when the trial took place, for two weeks.

Ari Paparo [01:07:40]:
Really interesting stuff. That generated the book that I wrote, which you plugged earlier, and they were found to be a monopolist. And I was back down in Virginia last week, and will be this coming week, for the remedies trial, where the judge tries to figure out what to do about it, which is a pretty interesting topic I think we'll hit on later.

Alex Kantrowitz [01:07:59]:
Yeah. So for those who are interested in non-AI news, after this block we're going to go to antitrust and ad tech, and we'll ask the question: has Big Tech antitrust completely failed? And Ari, who's been in the courtroom, will be able to give us some color on that. All right, last but not least, Brian McCullough is here. He is known to you. He's the host of the Tech Brew Ride Home podcast, previously the Techmeme Ride Home podcast. It's a great show. You can catch it every day.

Alex Kantrowitz [01:08:27]:
Brian, talk a little bit about the shift to Tech Brew. How long have you been doing that for?

Brian McCullough [01:08:33]:
Just a couple months. It's been great. Ari, by the way, I interviewed Kevin Ryan of DoubleClick fame this week. I don't know when that episode will actually air, but before the Ride Home shows, I did the Internet History podcast. And spoiler alert, I'm going to relaunch that next month, and I've been booking a ton of interviews for that, including Kevin Ryan. I just did Jimmy Wales, founder of Wikipedia, et cetera, et cetera. But yeah, I do a daily show that's just a 15-minute news roundup. The reason that Alex and I know each other, aside from the fact that we're almost neighbors, is the fact that it's a good complement to any show like this. If you're interested in tech, I can give you, in 15 minutes, what happened today in tech, in and out, and you'll catch up on what you missed.

Alex Kantrowitz [01:09:40]:
And it's a great show. And for those who don't know me, I'm Alex Kantrowitz. I'm the host of Big Technology Podcast. It's a twice-weekly show. Every Wednesday is a flagship interview with somebody who's making big decisions in tech or fighting whatever battles are happening inside it. So Tony Stubblebine, the CEO of Medium, is going to be on this week to talk about how AI has changed writing, or is changing writing. And then on Fridays we just break down the week's tech news. Always a good time.

Alex Kantrowitz [01:10:07]:
So one more AI segment and then we get to antitrust, TikTok, and Peter Thiel's belief that we should be paying more attention to the Antichrist. So don't leave yet if you're still with us now. All right, so we have some news about how the big AI labs are now trying, and this is going to dovetail so well, because we're going to talk about jobs as well, to get AI to take the role of our coworkers. This is from The Information: how Anthropic and OpenAI are developing AI coworkers. Anthropic, OpenAI and other artificial intelligence developers are sending large language models to the office. The AI models are being taught how to use everything from Salesforce's customer relationship management software to Zendesk customer support and Cerner's health records apps. The idea is to teach AI how to handle some of the complicated tasks white collar workers do.

Alex Kantrowitz [01:11:03]:
The training isn't like anything AI models have done before. Researchers gave the AI fake versions of the apps to play around with and also hired specialists in various subjects to show the models how to use the apps. I'm going to go back to you, Dan, for this one. Contextualize this for us. Is this a big deal? And, going back to our scale conversation from earlier, is this the AI labs saying, well, we've kind of run out of just making the models better, so now we need to train them on, like, fake Zendesk?

Dan Shipper [01:11:33]:
No, I think this is just business as usual. One of the things I've been talking about for a long time, this is back in the sort of GPT-3, GPT-4 days, is that we're moving through this shift from a knowledge economy to what I've been calling an allocation economy. In a knowledge economy, you're paid based on what you know. In an allocation economy, you're paid based on how you allocate the resources of intelligence, so how you use AI. And the skills that are valuable in an allocation economy are the skills of managers: how well can you articulate the thing that you want done, how well can you break that task up and then give it to different people on your team, all that kind of stuff. And I think OpenAI and Anthropic and all these companies are building towards this goal of having an agent that you can ask to do work end to end. And they kind of have that.

Dan Shipper [01:12:26]:
So if you're a programmer, you know that with Claude Code, for example, you can just give it a task, it'll run for 20 minutes and it'll build a feature. Internally, one thing that we're doing: we built a little paralegal for ourselves, where it has all of our legal docs in a Google Drive folder, and we just ask a question and it goes and says, okay, this is our template for this, or you signed this deal three years ago and here are the terms, all that kind of stuff. And I think the way that the headlines are written is that it's going to replace workers or whatever. And I do think it will change jobs, and there are certain specific segments of jobs, or job sectors, that might go away entirely. But I think by and large what it will enable us to do is just do more, and do more interesting stuff.

Dan Shipper [01:13:17]:
Everybody in the org moves up a level, from being an IC to being a manager. And we've certainly seen that inside of Every: we run five businesses inside of Every, we have 15 employees, and we definitely, definitely, definitely could not have done that previously without AI. And I think the same thing is coming for a lot of the rest of the economy.
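
To make the "little paralegal" Dan mentions above concrete: he doesn't describe Every's implementation, but a minimal sketch of the general pattern, retrieve the most relevant documents from a folder and hand them to a model alongside the question, might look like the following. The folder name, the crude keyword scoring, and the ask_model stub are illustrative assumptions, not Every's actual setup.

```python
# Minimal retrieve-then-prompt sketch of a "paralegal" over a folder of documents.
# Hypothetical example only; the folder, scoring, and model call are placeholders.
from pathlib import Path

DOCS_DIR = Path("legal_docs")  # assumed local copy of the shared legal folder


def load_docs(folder: Path) -> dict[str, str]:
    """Read every .txt file in the folder into memory."""
    if not folder.exists():
        return {}
    return {p.name: p.read_text(errors="ignore") for p in folder.glob("*.txt")}


def score(question: str, text: str) -> int:
    """Crude relevance score: how many substantive question words appear in the doc."""
    words = {w.lower() for w in question.split() if len(w) > 3}
    lowered = text.lower()
    return sum(1 for w in words if w in lowered)


def build_prompt(question: str, docs: dict[str, str], top_k: int = 3) -> str:
    """Pick the top_k most relevant docs and assemble a prompt for an LLM."""
    ranked = sorted(docs, key=lambda name: score(question, docs[name]), reverse=True)
    context = "\n\n".join(f"## {name}\n{docs[name][:4000]}" for name in ranked[:top_k])
    return f"Answer using only these documents:\n\n{context}\n\nQuestion: {question}"


def ask_model(prompt: str) -> str:
    """Placeholder for a real LLM API call (OpenAI, Anthropic, etc.)."""
    return f"[model response would go here; prompt was {len(prompt)} characters]"


if __name__ == "__main__":
    docs = load_docs(DOCS_DIR)
    print(ask_model(build_prompt("What are the terms of the 2022 licensing deal?", docs)))
```

In a production version the keyword scoring would typically be replaced by embeddings and the stub by a real model call, but the shape, documents in a folder plus a recurring question-and-answer loop, is the whole trick.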

Alex Kantrowitz [01:13:42]:
All right, what are your thoughts here?

Ari Paparo [01:13:45]:
Well, I think if the AI can figure out how to use Salesforce, we're all doomed.

Alex Kantrowitz [01:13:51]:
We might be doomed.

Ari Paparo [01:13:54]:
I have to totally agree with what Dan said. I think, as a heavy AI user myself, it is amazing and it extends my abilities in enormous ways, but it really can't replace full tasks yet outside of certain domains, like coding. We're all podcasters. Would you trust an AI to do all the work to publish a single podcast, including the summaries, but also editing, but also promoting and all of that stuff? I use AI for pieces of that, but not a chance. It's not there yet. Or maybe it's just that the scaffolding's not there yet and you could program Claude to do it with enough work, but that's the next step, where the AI is coordinating tasks across complex activities effectively. Maybe not replacing a full worker, but replacing a chunk of work without intense supervision by the human being.

Alex Kantrowitz [01:14:51]:
Brian, to you.

Brian McCullough [01:14:52]:
Yeah, what do you want me to say? So this gets back to the bubble question and the prisoner's dilemma, which is: is everybody investing in this because they believe that, even for the big tech platforms that have 68, 70% margins, their biggest cost center is labor? Does everyone really believe that this is going to obviate a certain percentage of their labor, so if you have a 70% margin, you can take that to an 85% margin or whatever? If that's the case, is that why everyone's spending all this capex and things like that? Or is it the historical thing, which Ari can also speak to, which is the last 25 years of tech taking over the economy: essentially five players won, right? That's not true, there's hundreds of players.

Brian McCullough [01:15:58]:
Everybody's a tech company these days. But what we have seen is, you know, the Magnificent Seven or whatever it is: all of the money of the tech revolution of the last 25 years accrued to a small handful of players. So again, what is the incentive structure? If I am Microsoft, if I am Meta, if I am Amazon, whomever, or if I'm Oracle or Salesforce or Ford or whatever, is it that I can eliminate or lower a cost structure, which is labor? Or is it the prisoner's dilemma that if what happened in the last 25 years plays out over the next 25 years, and there's only going to be a handful of winners, I have to be one of those winners, otherwise I'm roadkill? So again, I'm going to sort of reiterate what I said about an hour ago, which is that the incentive structure is a FOMO sort of situation. There is no incentive right now to not go whole hog on this and believe in it 100%, because you could be proven wrong, but you'll be proven wrong 10 years from now.

Alex Kantrowitz [01:17:20]:
Well, we actually now have OpenAI going out and saying, we're going to measure this. This is actually perfect for our conversation: we are going to go out and measure how well the technology does on economically valuable tasks. This is from TechCrunch: OpenAI says GPT-5 stacks up to humans in a wide range of jobs. OpenAI released a new benchmark on Thursday that tests how its AI models perform compared to human professionals across a wide range of industries and jobs. The test, called GDPval, is an early attempt at understanding how close OpenAI systems are to outperforming humans at economically valuable work. GDPval is based on nine industries that contribute the most to American gross domestic product, including domains such as healthcare, finance, manufacturing and government. For GPT-5 High, a souped-up version of GPT-5 with extra computational power, the company says the AI model was ranked better than or on par with industry experts 40.6% of the time.

Alex Kantrowitz [01:18:26]:
By the way, OpenAI also tested Anthropic's Claude 4.1 model, which ranked better than or on par with industry experts in 49% of tasks. But OpenAI said that it believes Claude scored so high because of its tendency to make pleasing graphics rather than on sheer performance. Any reactions to this?

Dan Shipper [01:18:49]:
I have a reaction which you may not expect, but this is hype. Okay.

Alex Kantrowitz [01:18:56]:
Wow. Dan has taken his time, but he has come and dropped the hammer.

Dan Shipper [01:19:01]:
This is hype. And I think they're doing a respectable job here, trying to measure this, and in the way that they're measuring it, they're sacrificing precision for actually measuring the thing that they want to measure, which is: how good is it at replacing humans at economically valuable tasks? And the way that they've chosen to do this is to reduce jobs to something like an SAT, where you have a question, you have a prompt, and then you fill in your SAT essay response, right? And the problem with that, there's two big problems. One is constructing the prompt: you are smuggling intelligence into that. Human beings work in a really complex, really dynamic environment where they have to choose among, like, a billion different possibilities in order to choose what the next thing is to do. And by reducing a really complex job into a single prompt, you've essentially smuggled in a lot of expert performance, saying, this is the time at which you would call the AI model.

Dan Shipper [01:20:10]:
And that just does not translate to real-world use cases. And you can see this a lot: for the last 10 years they've been saying radiologists are going away, because you can identify breast cancer on a scan really easily if you're an AI model. But there's actually a lot of complexity in the radiologist job, and a lot of corner cases, that get you to the point of trying to recognize whether there's breast cancer on a scan. I'm not a radiologist, so I'm probably butchering this, but the point still stands: there's a lot of complexity.

Dan Shipper [01:20:43]:
And when you reduce it in that way, you get rid of the complexity and make it look smarter than it is. So, for example, a better benchmark in my opinion, and I've tried to build this, and if you try to build it you realize how hard it actually is, would be: predict what I'm about to say in a meeting. CEO bench. We have meeting transcripts, we have Slack messages; given the last 10 minutes of a meeting, and a tool that can research everything I've ever said or whatever, predict what I'm about to say. In other words, replace me. And it's so hard. It is so hard, and they are so far away.

Dan Shipper [01:21:22]:
And the last thing that I'll say about this is, when they talk about doing economically valuable tasks, they're talking about it as if that is a static thing. And economically valuable tasks, knowledge worker tasks, evolve with tools. So the minute we have a thing that, given a prompt, gives this kind of response, the job of the human changes, and the AI needs to be guided in those situations. So I think they're trying to do something worthwhile, but I think the approach they're taking just doesn't work, and then it gets misinterpreted as OpenAI is better than humans at most economically valuable tasks. And that's just not what it is. This is just bad. This is wrong.
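
As an aside, the "CEO bench" Dan sketches is, at heart, a next-utterance prediction eval. A toy harness in the spirit of his description, hold out the speaker's actual next remark, have a model predict it from the preceding turns, and score the overlap, could look like the sketch below. The transcript data, the predictor stub, and the word-overlap metric are all illustrative assumptions, not an existing benchmark.

```python
# Toy "predict what I'm about to say" eval in the spirit of the CEO-bench idea above.
# Hypothetical sketch: the transcript, predictor stub, and metric are placeholders.


def word_overlap(prediction: str, actual: str) -> float:
    """Fraction of words in the actual utterance that also appear in the prediction."""
    pred_words = set(prediction.lower().split())
    actual_words = actual.lower().split()
    if not actual_words:
        return 0.0
    return sum(1 for w in actual_words if w in pred_words) / len(actual_words)


def predict_next_utterance(context: list[str]) -> str:
    """Placeholder for a real model call that would see the prior turns."""
    return "fine, drop it, but let's revisit referrals next week"  # stand-in prediction


def run_eval(transcript: list[tuple[str, str]], target_speaker: str) -> float:
    """Predict each of target_speaker's turns from the turns before it,
    then average the overlap score across all of those turns."""
    scores = []
    for i, (speaker, utterance) in enumerate(transcript):
        if speaker != target_speaker or i == 0:
            continue
        context = [f"{s}: {u}" for s, u in transcript[:i]]
        scores.append(word_overlap(predict_next_utterance(context), utterance))
    return sum(scores) / len(scores) if scores else 0.0


if __name__ == "__main__":
    meeting = [
        ("PM", "The launch slipped a week because of the billing bug."),
        ("CEO", "Okay, what do we cut to hold the date?"),
        ("PM", "We could drop the referral flow from v1."),
        ("CEO", "Fine, drop it, but let's revisit referrals next week."),
    ]
    print(f"Average overlap on CEO turns: {run_eval(meeting, 'CEO'):.2f}")
```

Even this toy version makes Dan's point visible: the hard part is not scoring the prediction, it's that the "prompt" is the entire open-ended context of the job.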

Alex Kantrowitz [01:22:06]:
All right, a challenge to Ari or Brian. Do either of you have any interest in taking the true side now that Dan is on Team Hype?

Ari Paparo [01:22:13]:
No, I'll take the "it's hype, but it's also really good news" side, because I totally agree with everything Dan said and I just want to build on it. Even if it is an isolated example, like, given a set of symptoms, an AI is better than a doctor at diagnosing, which might be one of those prompt-based things, that's really exciting for future developments that allow some interaction with a patient that would utilize that type of knowledge. It's just unrealistic as of today that a really sick human being would walk into an AI's office and complain and be diagnosed. There are so many modalities in that process that would have to be worked out. But that doesn't mean it won't be worked out. Maybe it'll take 20 years to do that. And it's really exciting.

Alex Kantrowitz [01:23:02]:
I think people are already diagnosing themselves using ChatGPT and Copilot and all this stuff. And I'm not saying it's a good thing, right? I think it says more about the gaps in our current medical system than how great these AIs are. But think about questions of availability, cost, expertise. I've even been in cases where I've been sick and I've been like, you know what, ChatGPT is going to get me through this, and I'm just going to skip the doctor's visit. So I'm going true, I guess. Brian?

Brian McCullough [01:23:34]:
But OpenAI recently, in one of their last product rollouts or something, sort of leaned into that, and people were like, is that legally safe for you to do? They know how people are using ChatGPT, and the fact that they sort of leaned into medical diagnoses, they can see, like Mark Zuckerberg can see, how we're actually sharing things and doing everything in our lives, and Google does the same. It's interesting to me, and I don't know if I'm helping you segue into the OpenAI daily thing that they want you to do, but they can see how we're using it, and there's a difference between the consumer side of it and the enterprise side of it. There's the "we're using LLMs for coding and we're using LLMs for reading X-rays" and things like that, versus how the consumer side of it is. They always use the example of, hey, help me plan my next vacation or whatever. No, it's: should I break up with my girlfriend? What is this bump on the side of my head? That's basically how consumers are using it. And it is interesting to me the degree to which OpenAI is leaning into that. And if I did give you the segue into what they announced this week, Alex, there you go.

Alex Kantrowitz [01:25:06]:
You did. That was a perfect segue. But first I will say, doesn't the AI typically suggest breaking up with the girlfriend, because you could just date ChatGPT, right?

Brian McCullough [01:25:14]:
Exactly. Yeah, that's. I don't know the answer to that.

Alex Kantrowitz [01:25:17]:
I wouldn't be surprised. We've already seen it with Bing, although they had to put Bing in a cage for a while. But I think a lot of these companies are coming back around to the fact that people do want to date these bots. In fact, I think OpenAI last week had data about the ways that people use ChatGPT, and they were like, well, only 2% is companionship. And it's like, well, do the math: that's like 13 million people if you look at the total universe of ChatGPT users. But that's neither here nor there. Let's go to what Brian wants to talk about, which is that this week OpenAI launched this new thing.

Alex Kantrowitz [01:25:50]:
Yeah, this new thing called ChatGPT Pulse, to proactively write you morning briefs. This is from TechCrunch: OpenAI is launching a new feature inside ChatGPT called Pulse, which generates personalized reports for users while they sleep. Pulse offers users 5 to 10 briefs that can get them up to speed on their day and is aimed at encouraging users to check ChatGPT first thing in the morning. An OpenAI product lead, Adam Fry, went to TechCrunch and gave a demo.

Alex Kantrowitz [01:26:20]:
They showed the reporter some use cases for Pulse, including a roundup of news about the British soccer team Arsenal, a group of Halloween costume suggestions for his wife and kids, and a toddler friendly travel itinerary for his family's upcoming trip to Sedona, Arizona. Let's go to Ari, who is the advertising veteran here in the group. This is an advertising product, is it not? Obviously an advertising product.

Ari Paparo [01:26:45]:
Well, it's obviously very suited for advertising. Let's say that it's a media product; that's really what it is. It's about giving people information in more of a passive way than just search queries. Search queries are obviously incredibly valuable, but it's a different kind of value: being able to push information to consumers, get them hooked, get them coming back on a regular basis, and not just bottom-of-the-funnel, "I'm interested in buying supplements" type searches. It could be a hugely valuable thing to the company if they do it right and consumers get excited about it. And I would just build on this, which is to say, you really have to imagine that in a very short amount of time, I would say within the next five years, we'll all be able to have either audio or video AI-generated news broadcasts personalized entirely to our interests. I can't imagine that not happening.

Alex Kantrowitz [01:27:45]:
So basically the four of us, who all podcast, are screwed.

Brian McCullough [01:27:49]:
Oh, I almost said that just a second ago that I'm the most bearish.

Alex Kantrowitz [01:27:54]:
On podcasting. Just when something started working in media, now this has to happen.

Ari Paparo [01:28:00]:
The real risk is that AI destroys the creator economy. I think the creators are all high on the hog right now, but I would think they're one of the first things that could be totally annihilated.

Dan Shipper [01:28:11]:
I will take the opposite side on that. Yeah. So I think the most important question, before the question of how is this monetized, is: is it any good? Is Pulse any good? And so far, I think there's a lot that needs to happen. It's obviously not great yet, but I check it every morning. I think it's freaking awesome. And it also competes with something that I make, so I'm kind of incentivized to say it sucks. But it's good. Good question, though.

Dan Shipper [01:28:45]:
I wish I could read it off, but I'm using my phone as my camera, so I can't actually read you the whole thing. But I tweeted my post the other day, so you can look it up on my Twitter, at danshipper. But basically, every morning, for example, I always use it to think about reading stuff. So it's giving me quotes from Wittgenstein, from Homer, and it's giving me craft notes on how to write better.

Dan Shipper [01:29:13]:
And then it'll put at the bottom, like, here's a new AI thing that you've been thinking about that you should think about in this way, and then I can go back and forth with it. So, for example, I asked it, hey, every day I want to go through this book. It's called Philosophical Investigations. It's a very dense Wittgenstein philosophy book that's written as these numbered aphorisms. There's like 300 of them or something like that.

Dan Shipper [01:29:39]:
And I was like, every day, in one of my pulses, just go through each of the aphorisms, in order, and explain it to me line by line. It's something that I've read a few times but I've never gotten all the way through, and now I have a little partner that's just going through this book with me. And you can do all that kind of stuff. And I think it's very clearly the future of media in a really interesting way. We have a product like this that we built called Cora, which does this for your email.

Dan Shipper [01:30:07]:
So it just takes all your emails and then it gives you a brief every day, twice a day, with all the emails that you need to see, but you don't need to respond to. So you can just scroll through and then you're done with your email. So it's a very similar type of product, and I think they got a lot of things right that are actually really, really hard to do. To your point about the creator economy. Yeah.
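
The aphorism-a-day pulse and the email brief Dan describes share the same basic skeleton: a small piece of persistent state plus a recurring prompt sent to a model on a schedule. Here's a minimal sketch of that skeleton; the file names, prompt wording, and the model stub are illustrative assumptions, not how Pulse or Cora actually work.

```python
# Sketch of a recurring, stateful daily brief, in the spirit of the aphorism-a-day pulse.
# Hypothetical example: file names, prompt text, and the model call are placeholders.
import json
from pathlib import Path

STATE_FILE = Path("pulse_state.json")    # remembers where we left off
REMARKS_FILE = Path("aphorisms.txt")     # one numbered remark per line


def load_remarks() -> list[str]:
    """Load the numbered remarks, falling back to a placeholder sample."""
    if REMARKS_FILE.exists():
        return [line for line in REMARKS_FILE.read_text().splitlines() if line.strip()]
    return ["(sample) Remark 1 goes here."]


def next_remark() -> str:
    """Return the next unread remark and advance the saved position."""
    state = json.loads(STATE_FILE.read_text()) if STATE_FILE.exists() else {"index": 0}
    remarks = load_remarks()
    idx = min(state["index"], len(remarks) - 1)
    STATE_FILE.write_text(json.dumps({"index": idx + 1}))
    return remarks[idx]


def ask_model(prompt: str) -> str:
    """Placeholder for a real LLM call that would write the brief."""
    return f"[daily brief generated from a prompt of {len(prompt)} characters]"


def morning_brief() -> str:
    remark = next_remark()
    prompt = (
        "Explain the following remark line by line for a curious non-specialist, "
        f"then suggest one question to sit with today:\n\n{remark}"
    )
    return ask_model(prompt)


if __name__ == "__main__":
    # Run once per morning via cron or any task scheduler.
    print(morning_brief())
```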

Alex Kantrowitz [01:30:26]:
Because, Dan, the way that you're setting this up, I'm seeing Ari's reaction, and it's like, well, yeah, if this is going to be such a good personalized feed, then maybe the creators are done. So it'll be great to hear your perspective.

Dan Shipper [01:30:39]:
Well, what I was about to say, though, is: I'm saying I'm reading Wittgenstein, I'm reading Homer. These are all, like, copyrighted works by authors, or, you know, Homer, who knows if he was real or not. But it's a new format, it's a new content format, and people are going to want that content format to come from people that they know and care about and like and have a relationship with. And so for us, for example, I'm thinking about how do I get all my subscribers that are reading me by email right now, how do I get them to go tell ChatGPT that they want Every in their Pulse? It becomes a new surface area to reach people. And if you're a creator that understands that and gets in early, you have the chance to build a really, really big reach and a really, really big business, in the way that if you were a creator who understood X or Twitter 20 years ago, or email lists, like Ben Thompson, for example, you could do the same thing. So I think it's an incredible opportunity for creators to think about what AI-first content formats look like, how to make their content friendly so that it comes up in a way that is actually interesting, because that's actually pretty hard to do. And just to your broader point about the creator economy, what I found, like I said, is, you know, we started as a daily newsletter, and we still write a daily newsletter.

Dan Shipper [01:32:09]:
I still write almost every week. I'm working on a book. And we have four software products and we have a consulting business. And so we've been able to grow the revenue that we make from the audience that we have in a way that looks a lot more like a tech company than just a traditional media company. And I think that that is very, very open to every creator now, because it's much easier to make software. And so I think there's a lot more opportunity to monetize the attention that you get. And that's really exciting.

Alex Kantrowitz [01:32:43]:
Ari, do you regret...

Ari Paparo [01:32:45]:
I'll just say that if you're a friend or a loved one of Dan, get him the Wittgenstein of the day calendar for Christmas.

Dan Shipper [01:32:53]:
So, you know, look, I'll give you my address, you can send it to me.

Ari Paparo [01:32:59]:
I don't recant. I love that tip about trying to get your content in there. I just think that we're in danger of the sort of first-level thinking, when it comes to media, that happened during the dot-com boom. I know I'm so old, I keep bringing it up, but all the established media companies said, oh, the Internet, we're going to put our content there. And that didn't work. A huge amount of time was wasted, and they missed all of the new business models that came out on the West Coast. And I think that there's a real worry that everyone who creates media is looking at AI and saying, oh, that AI-generated slop, that's terrible, I'm not doing that.

Ari Paparo [01:33:41]:
I'm going to take my human-created content and feature it. And they may be wrong. It may be that slop gets better and better, and purely AI-generated content, with educated human prompts and structure, is a really important change that folks who don't adapt may get swept up by.

Dan Shipper [01:34:04]:
I think... can I just, briefly?

Alex Kantrowitz [01:34:06]:
And then on to Brian briefly.

Dan Shipper [01:34:08]:
Yeah. So, I think that could be totally true. I think of myself, for example: we have this product, Cora, right? I write the prompts for Cora. So I am the writer of this newsletter that is automatically generated. It's this meta thing. And I think it is totally the case that a lot of the creative or creator work might go into doing the sort of meta writing, or meta creation, of a show that is customized for each listener but is from the perspective, and with the voice, of a particular creator. That's totally true. I do think there will always be, even if that's the mass thing, there will always be room for actual human-generated, human-to-human type stuff that is more expensive and more aspirational, in the same way that you still want a handmade shirt.

Dan Shipper [01:35:03]:
But the point about the dot-com thing, the reason that I don't think it applies, is because we were talking about media companies with very different cost structures and very different setups that were made for a different world. And if you're a newsletter writer like Alex or like me, and I don't know if either of you have newsletters, but if you're a newsletter writer and you're set up as this one-, two-, or three-person business, you're actually set up for a world with AI much, much better than, for example, any of the magazine companies like Hearst or Condé Nast were when the dot-com boom came around. And a lot of the reason why we think media is a bad business is because we're watching these old, big behemoths slowly die, as the place in the world that they used to occupy and the cost structures that they used to be able to support die. But if you look at the more ground-up people that are in the actual creator economy, you actually see, well, it's still very hard, and there's not that many of us, but there are really thriving and good businesses that you can build. And I don't think that's going to go away with AI.

Alex Kantrowitz [01:36:10]:
Let's go to Brian and then we'll go to break.

Brian McCullough [01:36:11]:
Again, bearish. Because, to bring it back to the dot-com bubble again: one of the mistakes that media made is they used metaphors from the previous era. The reason that it's page views is because, what's a page? It's a friggin' magazine or book page. Or we still sell ads on a CPM basis, which goes back 100 years. The only metaphor that matters now is sort of the Netflix idea of time spent consuming. Netflix says their only competitor is sleep. Right?

Brian McCullough [01:36:51]:
And so people in the TikTok era don't really care where they're getting their information or their content or whatever. It's just: is it accessible to me? I'm bearish on this because I don't know that it matters. I would love to believe that people care, that my prompts matter, that my version, like, if I create an AI that works for you or whatever, because you got it from me, you trust it. I don't. I'd love to be proven wrong on that. I think that with this product, to use the analogy of going back to the previous era, we are seeing that this is a feed.

Brian McCullough [01:37:32]:
They are going to be able to put ads into this, right? You'll wake up every day, and OpenAI doesn't want to put ads into, hey, you're asking ChatGPT something and, by the way, here's an ad that pops up. But if it's a feed, if it's information that flows through, they can throw ads in between. The other thing that I would say, and we talked about this an hour ago, is: is all of the AI spending based on CEOs' belief that they can obviate a certain percentage of their workforce? Don't you believe that Zuck and everybody else wants to obviate creators? You don't need creators if you're YouTube, if you're Zuck, if you're whatever, if AI can be the creators. And I don't see any sense that people will care: based on all of the AI stuff that goes public, all of what they call slop now, when people are flipping through TikTok, they don't care.

Brian McCullough [01:38:24]:
And so if you can obviate the creators, I believe the platforms will do it and they don't care either.

Alex Kantrowitz [01:38:30]:
Well, that is a perfect segue into this.

Dan Shipper [01:38:32]:
Wait, wait, we can't just leave it like that?

Alex Kantrowitz [01:38:34]:
No, we are going to pick it up. We're gonna pick it up on the other side of this break, because I had been meaning in this segment to talk about Vibes, this new feed of AI slop, or, I don't know, you might call it AI-produced content, that Meta is releasing. So we're going to do a big block on the other side of this break. We're going to talk about Vibes, we're going to give Dan an opportunity to respond to Brian, and then we're also going to talk about the TikTok deal and the state of Big Tech antitrust. Can we get that done in one block? We're going to find out right after this.

Leo Laporte [01:39:06]:
Hey, Alex. Hey, everybody. I want to tell you about something I've just discovered that I am so impressed with: Field of Greens, our sponsor for this segment of This Week in Tech. You know, a hard reset on a computer sometimes is the best thing to do to get it back to factory settings and feeling good. Did you know you could do the same thing for your health? A smart reset of your health to get you back to the factory settings. That's what Field of Greens does. I don't know if that's exactly the right analogy, but I can tell you this is scientifically backed, from Brickhouse Nutrition. This is Field of Greens, and I have been using this now for some time and I am so impressed.

Leo Laporte [01:39:45]:
But what really impressed me, and you can see this on their website if you want to read it. They just did an impressive sizable biological study with Auburn University. The goal was to see if just taking Field of Greens daily could slow down the test subject's aging, slowing the rate at which your body ages. Now, that generally means you're going to live longer, you're going to live healthier and you're going to feel better, you're going to be happier. Every fruit and vegetable, not just broccoli. It's really amazing what's in here. Medically selected for specific health benefits. They're all 100% organic and they actually on the label, you can read it.

Leo Laporte [01:40:24]:
You can see it on the website too. They've grouped it into heart health, there's a cell health group, lungs, kidneys, a liver health group. And if you look at each of the ingredients, I think you'll see that these are time tested fruits and vegetables, real foods that make a difference in these areas. There's even a metabolism group for a Healthy weight. So what did they do in Auburn? They did a biological study. Participants, you know, they got all the tests, the blood tests and stuff. Some of them with a test learned that they were aging much too quickly. I've noticed that.

Leo Laporte [01:40:59]:
Unfortunately, I have a variety of things, like my Oura ring, my Apple Watch, my Withings scale. And they say, you know, Leo, your biological age and your epigenetic age, as they call it, don't match. You're aging a little quicker than you should. So what did they do with these participants? They were told, don't change your lifestyle. Eat normally; if you drink, fine, if you don't, fine; exercise, fine. They didn't want there to be any other variables. The only thing they did: they had two groups, one that had a placebo and one that had Field of Greens.

Leo Laporte [01:41:34]:
And the results were dramatic and remarkable. The group that added Field of Greens literally slowed down how fast their bodies were aging. Literally. Now imagine how much slower you might age, how much better you'll look and feel, with Field of Greens from Brick House Nutrition. And by the way, I've got mine right here. I just did the lemon strawberry.

Leo Laporte [01:41:57]:
They have strawberry lemonade, I guess they call it. They have a variety of different flavors. You shake it up. Here, I'll put it in my cup so you can see it. Yes, it's green. There's all sorts of good green stuff in here.

Leo Laporte [01:42:09]:
Let me. Let me drink. Tastes fantastic. Ah.

Leo Laporte [01:42:17]:
And it really works. You can tell. Check out the university study and get 20% off when you use the promo code TWiT at fieldofgreens.com. That's fieldofgreens.com, and the promo code is TWiT. Field of Greens from Brick House Nutrition. I feel better already. I think you'll enjoy it. Here's to your good health.

Leo Laporte [01:42:38]:
Fieldofgreens.com promo code TWiT.

Leo Laporte [01:42:43]:
Yeah, I actually look forward to this. Back to the show.

Alex Kantrowitz [01:42:47]:
Thank you so much, Leo. First of all, where'd you get that shirt? That's a great shirt. For those who are on audio only, Leo is wearing this very cool technicolor shirt. Second of all, I will say that myself. Brian, Dan, Ari, we've been marveling at your ad reads. Done with zest and aplomb. I think that's how you pronounce it. So anyway, we have a lot to learn on that front.

Alex Kantrowitz [01:43:12]:
And then, Leo, I guess you might be watching, but to you and the TWiT team, thank you very much for allowing me to guest host this week. Again, for those who are with us now or are seeing me for the first time, I'm Alex Kantrowitz. I'm the host of Big Technology Podcast, and I'm being joined by a great group of guests, and you can find their podcasts as well on your podcast app of choice. We have Brian McCullough from the Tech Brew Ride Home; you can find the Tech Brew Ride Home on your podcast app of choice. Dan Shipper is here, CEO of Every, Every.com, and you can get his podcast AI and I on the same podcast app you're listening to all these others on. And, oh, sorry, every.to. Correction.

Alex Kantrowitz [01:43:53]:
Every.to. .com is passé; .to is the future. And Ari Paparo is here, the host of the Marketecture podcast. He's also the founder and chairman of Marketecture Media and the author of Yield: How Google Bought, Built and Bullied Its Way to Advertising Dominance, where Ari chronicled some of the issues that Google has found itself in, which we are going to get into in this segment. This is our mega segment. We're going to be talking about Meta's Vibes, the TikTok deal, and of course the state of Big Tech antitrust. Is that too much for 25 to 30 minutes? I think not. We left our last segment with a passionate feeling from Brian McCullough. A diatribe?

Alex Kantrowitz [01:44:48]:
I was... you know, that's the word that came to mind. I didn't want to use it, but yeah, let's go with diatribe. To which Dan Shipper here wanted to respond, but I'm going to throw a wrench in the works and talk about this Meta Vibes app. And we're going to give Dan everything to respond to and go around the horn. This is from TechCrunch: Meta launches Vibes, a short-form video feed of AI slop. Those are TechCrunch's words, not mine, although I don't think I disagree.

Alex Kantrowitz [01:45:19]:
In a move no one asked for, Meta is introducing Vibes, a new feed in the Meta AI app and on Meta AI for sharing and creating short-form AI-generated videos. Think TikTok or Instagram Reels, but every single video you come across is, essentially, AI slop. Of course, this is coming from Meta's superintelligence group, I believe. Does this feel superintelligent? To me, maybe not. So let's lob this into the lap of Dan Shipper once again, who has valiantly decided that he will play, I think, the role of AI defender on today's show. And we are grateful for it.

Alex Kantrowitz [01:45:57]:
Dan, you're doing great.

Dan Shipper [01:45:58]:
So thank you.

Alex Kantrowitz [01:46:00]:
Dan, why don't you field this, the AI Vibes thing? Is there something the world is not getting about it? And then you can have some time to respond to Brian as well.

Dan Shipper [01:46:09]:
I mean, I think obviously the way that it was done, which is a bunch of people that just got hired for hundreds of millions of dollars throwing up this trailer that had just sort of mid AI-generated clips, as if it's the best new thing in the world, without any real creative vision or storytelling vision or anything like that, just primed them to get made fun of. And it's in poor taste. I do think, to Brian's point...

Dan Shipper [01:46:41]:
It seems like there's a real chance that one of the possible futures of mass media is AI-generated content. And I will say ChatGPT counts; that is AI-generated content, you can think of it as AI-generated content. And I think Pulse is a new form of that, like we've been talking about.

Dan Shipper [01:47:08]:
It's more like a feed. So I think I'm actually totally in your corner, Brian. That is very likely to be the new mass medium, or one of the top new mass mediums. And if you were sort of saying, well, everyone's always going to want to read my beautiful newsletter, which I think that they should, content formats change, and that is totally true.

Dan Shipper [01:47:29]:
It is very easy to hate on Vibes, but I think that they're probably getting something right. Everyone hated on Threads when it came out, in the Twittersphere, and apparently, well, I don't know anyone that uses Threads, but apparently it's a gigantic, gigantic app. Whether or not they should do this is another good question. But I think the interesting thing is that Zuck has a particular playbook for catching social waves, which is watch what's taking off from the ground up and then buy it. And he's not allowed to do that now. And so he's done the next best thing, which is he's bought all the different component people and then put them in a room together and said, go do something.

Dan Shipper [01:48:13]:
And that's a different way of doing things, and I think it remains to be seen whether or not that's going to work in the same way. The advantage of the true bottom-up thing is that the entire world is running simultaneous experiments, and all you have to do is figure out which one's working best and then buy it. With the buy-all-the-component-parts thing, where you put them together in a room, they still have to gel together as a team and then search and find the solution. And I think it's really unclear whether or not they're going to be able to do this, but I will say I would bet on them to make it much better over time. But if I had to put my money somewhere, I would say it's very likely there is another company that starts as a small company that figures out how to do this in a way that's actually interesting and compelling. And for sure, people are going to hate on it.

Dan Shipper [01:49:07]:
And people, especially in the sort of established creative professions, are going to hate on it, as is the grand tradition of people in established creative professions talking about any new medium, whether it's TV or radio or the Internet or whatever. And some people are going to figure it out and turn it into a great art form that we look back on with nostalgia when we're uploading our consciousnesses to the galaxy brain or whatever. This is going to seem like the halcyon days of AI-generated content, and to some degree I think it kind of is. If I look at my ChatGPT Pulse, it's so wholesome, it's so educational. It feels like the Discovery Channel, or like the History Channel before it got bought and all they did was cover Nazis and do ice road trucking or whatever. And it looks different for different people; you get out what you put in. Over time I'm sure it will get more enshittified, as always happens, but right now I think it's pretty awesome. And to go back to your point, Brian, I think we agree about the mass media thing, that we're moving in that direction of the mass media being AI-generated content.

Dan Shipper [01:50:19]:
But the thesis that, in order to do that, these companies don't need creators, that it'll just be algorithms churning out slop and so creators are going to go away: I actually think that they will need creators to help them figure out what to generate and what to do. These algorithms can't explore the entire space, the entire possibility space of reality, by themselves. And so even in the world that you're talking about, I still think there's actually a large opportunity for creators, and it's just a matter of learning how to use the technology and being part of it.

Brian McCullough [01:51:00]:
I want to give Ari space, but let me just respond to that real quick for a second. I said on my show this week, when TechCrunch said, oh, people are going to hate this: no, Zuck has a history, since the beginning of Facebook, of knowing what you like even when you tell him you don't like it, going back to the News Feed and every other change that they've done. He watches what you consume, what you engage with, and he gives it to you. And so what I said was, I think we're already seeing that. We can see that the people that are watching have their thumbs on the algorithm.

Brian McCullough [01:51:38]:
They can see that you like that Stormtrooper sort of AI-generated video, which was generated by humans prompting AI or whatever. But the second thing would be, I disagree, Dan, because I think that, because it's all algorithms, once the algorithms can also create the content, I don't know that there's room for creators there. And again, I'm not quitting my job and I'm not telling people to pack it all in. I just can see that it's algorithms all the way down once the AI can create the content. YouTube is asking creators right now to create content that its AI is suggesting for them: this is what you should be creating. So now we're 18 months away from AI being able to create that video content.

Brian McCullough [01:52:33]:
Sorry, go ahead, Ari, if you want to jump in as well.

Alex Kantrowitz [01:52:35]:
Yeah, so this is our second moment here where we've had Dan and Brian on opposite sides. Let us go once again to his honor, the honorable Ari Paparo, to weigh in here and settle this debate for us. And then we'll move on.

Ari Paparo [01:52:50]:
I have a bunch of things to say that I'm not sure I'm in order. First, Threads is kind of awesome. I really like it. I'm still a Twitter guy. Really. Threads is really good. I just check it every once in a while.

Dan Shipper [01:53:00]:
What do you like about it?

Alex Kantrowitz [01:53:01]:
Yes, go ahead.

Ari Paparo [01:53:02]:
It's upbeat, it's kind of fun, it's interesting. There's a lot of good, funny jokes. People are funny on Threads, and in a non-racist way. So that's Threads. Here's my take, which is a little different from what either one of you said: this is obviously an experiment, because there are a lot of legal and political reasons to not go all in on AI-generated content. As a thought experiment, if Zuck and his superintelligence team could create really compelling videos using AI, they would just put them in Reels. The fact that it's not in Reels is the tell here. And it's the same reason that Spotify, which has this enormous corpus of data, is banning AI music.

Ari Paparo [01:53:46]:
They are obviously the ones who would do AI music, and AI music is no doubt useful. I mean, the top playlist on Spotify is, like, low-key music to study by or something, some nonsense like that, that clearly could be AI-generated. But they can't, because a lot of people do. You know, Spotify can't go in with both feet, because they have this sort of strategy tax, legal liability problem. And I think Zuck is in the exact same situation, where he has the opportunity to create AI videos but is not really ready. He'd probably get sued, and there's a lot of other hair on it. So this is really just a dipstick test to see what people want and like, and see how it evolves. And maybe version two, version three turns into something useful.

Ari Paparo [01:54:37]:
But right now, I don't think this is that meaningful.

Dan Shipper [01:54:40]:
Can I add to that? Facebook has a history of doing this. They've launched a lot of different things that they just end up folding, back to the very, very beginning of Facebook, and they eventually get something right, generally. But one thing that I think that analysis sort of missed, just in the "would they just put it in Reels" part: I think that they would, except that I believe AI-generated video is a new medium. It's a new content format, and it's really hard to stick a new content format into an old channel. So, for example, they had to do Stories as a separate thing rather than something on the grid.

Dan Shipper [01:55:22]:
And, you know, TikTok videos are just different enough that they had to kind of find a whole new form factor for them. So I would guess that when AI-generated video starts working, it will be in a different app that has a different way of working that we just have not seen yet.

Brian McCullough [01:55:42]:
Last thing.

Dan Shipper [01:55:43]:
Yeah.

Alex Kantrowitz [01:55:43]:
Okay, briefly, Brian.

Brian McCullough [01:55:44]:
And then we're gonna go to TikTok. AI slop: we keep calling it AI slop, but all that the current generation of AI is doing for a lot of people is obviating the boring stuff, the drudge work, right? So if you're calling AI video and things like that, AI-created content, AI slop, well, it's just slop. It's just that it's easier to produce. So I don't know that there's anything different.

Brian McCullough [01:56:11]:
It's just the same content. If you want to call it slop, then all that we've been doing and putting up on the Internet for the last five years has been slop. It's just now generated by AI and it's easier to generate. That's all the difference is.

Dan Shipper [01:56:26]:
That is an important difference. I think slop is whatever is cheap to make, and the fact that something is expensive to make is an important signal. Expensive broadly considered, in terms of the amount of experience it took, or the amount of money or time or anything like that, is, I think, the broadest indicator of whether something is slop.

Alex Kantrowitz [01:56:47]:
Agreed. Okay, so we have some agreement here, and we can all agree that AI is not the first format where slop has emerged. All right, we are really blazing through the show. We haven't even touched the fact that TikTok, which is still made of mostly human-generated content, we think may end up being sold. And there was kind of an interesting wrinkle there, too.

Alex Kantrowitz [01:57:15]:
First of all, the AP had this story with the headline "Trump approves TikTok deal through executive order," and Vance says the business is valued at $14 billion. Then the AP updated its story. I didn't see any notice at the bottom that said they changed the headline, but they changed it to something much weaker: "Trump signs executive order supporting proposed deal to put TikTok under US ownership." I mean, you have to approve a deal if there's a deal. You support a deal if there's no deal.

Alex Kantrowitz [01:57:44]:
So my question to the panel here is, is there a deal?

Brian McCullough [01:57:51]:
Not yet. If people aren't tired of hearing this from me: where's China on this? China still has the veto. The stories are that Chinese media has not been talking about this. And so, yes, again, the executive order was essentially Trump saying, you know that law that said it has to pass these legal tests for this to happen? Okay, it's passed my legal threshold for this to happen. But as far as I know, unless I missed it, ByteDance hasn't said, hey, we're on board. China hasn't said we're on board.

Brian McCullough [01:58:30]:
Trump has said that Xi Jinping said we're on board. But unless I missed it, it's still not official official.

Alex Kantrowitz [01:58:38]:
Okay, so TikTok remains in limbo, in which case maybe Meta's feed of Vibes will one day supplant it just because it's banned.

Brian McCullough [01:58:47]:
Alex, one more thing.

Alex Kantrowitz [01:58:49]:
Of course. But, yes, go ahead. The 14 billion.

Brian McCullough [01:58:51]:
Yeah. Holy God. Listen, $14 billion is the cost of one data center. How the hell could that end up being the valuation? Okay. TikTok.

Brian McCullough [01:59:09]:
ByteDance, we think, is a $300 or $400 billion company, largely because of TikTok. TikTok US would be a subsection of ByteDance's business, but the US market would be a huge chunk of that. $14 billion is.

Dan Shipper [01:59:28]:
That's.

Brian McCullough [01:59:28]:
That's Snap. That's Snap's market valuation. How could TikTok be worth the same as Snapchat?

Alex Kantrowitz [01:59:36]:
No, it's definitely one of those stories where you read the details and you see the updates, and you're just struck by the long limbo that TikTok has stayed in in this country ever since Biden signed the bill that forced ByteDance to divest the US assets. It doesn't seem like we're exiting that. And it's actually quite amazing that it's still even active in the US, now that we have these executive orders every 75 days, extending and extending. All right, Ari, your thoughts?

Dan Shipper [02:00:08]:
Yeah.

Ari Paparo [02:00:09]:
This is the only meaningful tech legislation that's been passed by our Congress in 20 years, and it's been made a hash of. It's basically not been implemented by the executive branch until they found it in their own personal interest to give it to some of their donors. This is a nightmare of public policy. I'll also point out that the whole reason for the divestiture, which everyone seems to forget, is to avoid the potential for the Chinese Communist Party to manipulate the news and spread propaganda or whatever. And now it's turning out it gets owned by partisan allies of the administration. I made this joke on my podcast.

Ari Paparo [02:00:49]:
It's one of those jokes that's too close to being real: who would you rather have control the algorithm, the Communist Party or the Ellisons? Not sure. Toss-up for me. And so that's where we're at.

Alex Kantrowitz [02:01:01]:
Dan, any thoughts on the TikTok ownership saga?

Dan Shipper [02:01:04]:
No thoughts. Out of my depth, but interested to hear everyone else's.

Alex Kantrowitz [02:01:11]:
All right, so why don't we take this moment to move to what I think is the crux of this issue, which Ari just touched on: we haven't really seen any legislation, despite the government's big talk. I mean, we saw this TikTok bill that's not being enforced. The government has made a very big show of being tough on big tech, and you've seen the Justice Department and the FTC go after Amazon and Facebook and Google. And what's happened? You had the slap-on-the-wrist remedy for Google when the court found that Google had been a monopolist. You just had Amazon: the FTC case, the big case where Amazon was being taken to court for making it too hard for Prime members to cancel.

Alex Kantrowitz [02:02:06]:
They just settled that for, you know, $2.5 billion, which is half a sneeze for Amazon. All right, cases are still ongoing against Meta. Ari, you've been following this closely, so I want to ask you first: when it comes down to it, has big tech antitrust in the US just been a farce?

Ari Paparo [02:02:26]:
I think that the government and the leaders of the DOJ, especially under Biden with Lina Khan, became very ambitious and very much felt that they had the wind at their backs to pursue pretty wide-ranging demands of the courts. And there was real evidence of malfeasance, which is why they have been somewhat successful at getting guilty verdicts, or companies adjudicated as monopolies. They're not criminal trials, so it's not really guilty.

Alex Kantrowitz [02:02:55]:
I said guilty there because I was just being dramatic.

Ari Paparo [02:02:57]:
But I know, right?

Alex Kantrowitz [02:02:58]:
I mean, I guess if we wanted to be precise, they were found liable for preserving an illegal monopoly.

Ari Paparo [02:03:04]:
Exactly. So Google's now been found to be a monopoly three times, like I mentioned earlier on this pod. The first one, the app store case, seems to have actually turned into something. The search case is a nothingburger. And now we're in the middle of the ad tech case. I've talked to lawyers who are sitting in on the case and observing it with a little more of a keen eye to the details, and there's this need to give a remedy to the courts. You can't just say, you're a monopoly, we're going to break you up, unless the breakup is justified and kind of natural, because the judges don't want to go out on a limb and do something that might be damaging or complicated or whatever.

Ari Paparo [02:03:49]:
And that's really what happened in the search case, I think, and it's what other people who observed it said: they didn't prove it to Mehta. Mehta is actually the name of the judge in the Google case, which is confusing, Judge Mehta, which is very meta. Judge Mehta didn't feel like he was given an opportunity; they didn't tell him what he should do and why he should do it in a way that was justifiable. So he came back and said, you wanted a Chrome divestiture, and that's really complicated and really dangerous and not justified, and you wanted this behavior and that behavior, and that doesn't make sense. So the government needs to have not just this aggressive theory of antitrust, but also practical solutions they can offer these judges, because the judges aren't technical and the judges don't want to be overturned on appeal. And they also don't want.

Ari Paparo [02:04:46]:
The last thing they want is to make the situation worse with a remedy that's not fully thought out. And that's exactly the crux of the current case, the ad tech case I've been following, which is that the remedies are potentially dangerous. And that's the case Google's making. Google's very aggressively saying these remedies you're asking us for are difficult, dangerous, impossible technically, which is a bit of a stretch, et cetera, et cetera. So this is where the rubber hits the road. And because in our country antitrust is pretty much the only way to rein in tech, there's no legislative option, we end up with these pretty adversarial approaches to actually trying to implement tech policy through the courts.

Alex Kantrowitz [02:05:29]:
So, a quick follow-up to that. I mean, big tech antitrust, right? This has been printed on mugs, and there were big celebrations in the streets about how the US was finally cracking down on big tech. Your thoughts? No?

Ari Paparo [02:05:44]:
Well, yeah, nothing's happened. I mean, the only thing that's happened through the Lina Khan era has been a hiatus on acquisitions. The nature of those policies meant that none of those big companies have done big acquisitions in years, and I think they're still pretty skeptical about whether acquisitions would be available to them moving forward. But in terms of actually making the market more competitive for businesses that either compete with big tech or interact with big tech, I don't think we've seen much of anything. And as one example, this crosses into Europe, but the whole Apple App Store 30%, where they're pretty much thumbing their nose at all the regulators and courts and charging 28% or whatever it is they've done lately, is an example of how there are these power imbalances between the tech companies, who have an infinite amount of money, and the courts and legislatures, who have a pretty hard time changing the way they operate.

Alex Kantrowitz [02:06:50]:
Just bringing it full circle: we started this episode talking about how much money is going around in the AI world. And you look at the companies that are the quote-unquote startups. OpenAI is going to take $100 billion potentially from Nvidia and billions more from Microsoft; Anthropic, billions from Amazon and billions from Google. And the list goes on. I mean, Nvidia is also investing in other infrastructure plays like CoreWeave. So Dan, the long-promised era of little tech, does that just not come to fruition?

Dan Shipper [02:07:29]:
No, I mean, I think there's an incredibly healthy startup ecosystem. There are power laws in tech, so there are gigantic companies, but there's a really, really healthy number of startups being started, startups that are growing, people that are bootstrapping. I mean, for us, we haven't raised a ton of money, but we're growing really fast, and I feel like the environment is really, really good for us. But I want to go back to what Ari said earlier, because I do see what you're saying, Alex.

Dan Shipper [02:07:58]:
You see these cases and there's something in me that says, yeah, probably somebody's doing some bad stuff, and we should figure out how to make the market more competitive. If you've got two or three trillion-dollar corporations and we're talking about antitrust, probably there's some antitrust stuff going on. And you see the penalties and think, okay, when you're making decisions inside a company like that, you are thinking about them to some degree. It's not going to bankrupt them, obviously, but the individual people inside these companies are probably thinking about what the eventual lawsuit is going to be and whether they're going to have to go testify, all that kind of stuff. So it is a little bit of a deterrent. But the thing Ari said that I think is really interesting is that when they're thinking about divesting, for example, Chrome, the judge is non-technical and doesn't want to do something that's going to completely screw everything up. And that's such an interesting question, because on the whole, if the judge doesn't understand how to do this, I would rather they say, great, we're not going to go and fuck everything up. But I do think that if you took, let's say, Sam Altman and said, your job is to figure out how to spin Chrome out of Google, he would get it done, it would work, and it would be good. So there's a really interesting problem here, which is that the people who are responsible for being the judges and the prosecutors, the ones proposing the solutions, are so far outside of being experts at doing this that they can't actually make the divestiture, for example, make sense. And I'm very curious to solve that, because I think that's a really interesting problem.

Alex Kantrowitz [02:09:52]:
I think that's a great point. I will say, I looked at Judge Mehta's remedy ruling and thought there was a great amount of detail there about how generative AI works and how Google's business works. And I think he basically said, this is on the search monopoly case, look, if we make a ruling here, we're probably going to put our thumb on the scale at a moment when the market is figuring this out, because generative AI is upending things as they are. To me, that seemed like a very reasonable stance, although he did levy some fines and impose some data-sharing requirements on them. We'll see if that holds up on appeal. All right, Ari, briefly, quick anecdote, and then we'll go to break.

Ari Paparo [02:10:34]:
Quick anecdote: in court this week, there was a document they were reading. It was a technical document and it had C# in the text. The lawyers were reading it for the DOJ, and this happened. One of the lawyers said, I'm not sure how to say that, is that C ampersand? And she was corrected by a different lawyer who jumped in and said, no, it's C. Oh, no.

Dan Shipper [02:11:03]:
Okay. This is why we have to have GPT-5 replace the judge. GPT-5 would get it right.

Alex Kantrowitz [02:11:11]:
Yeah, we'll put that to OpenAI's GPT-5 and see if it can do that. All right, folks, let's go to break. We're going to come back and talk about the news on jobs: whether people are still able to find work, what's going on with all these entry-level jobs, and is there a tech element to that? We'll do that when we come back, right after this.

Leo Laporte [02:11:33]:
Hey, Alex, thank you so much for filling in for me this week. I just wanted to stop by and tell you about our sponsor for this section of This Week in Tech: Zscaler, the leader in cloud security. We've been talking about them a lot on all of our shows, and man, you need it these days, because AI is out there, and it's both good and bad for enterprises. It's great for all the things AI can do for your enterprise. It's bad because, you know what, it's not just you using it. Hackers are using AI, and they're using it to breach your organization faster and more effectively than ever before. Look, we know AI drives innovation and efficiency in the enterprise.

Leo Laporte [02:12:17]:
But it can also help bad actors deliver more relentless and effective attacks. Phishing attacks over encrypted channels, for example, increased 34.1% last year, primarily fueled by the growing use of generative AI tools. These phishing emails: we used to say, oh yeah, look for the errors, the bad grammar, that kind of thing. Not anymore. These things are letter perfect and very deceptive. So we know hackers are using AI, but we also know organizations in every industry, from small to large, are using AI beneficially to increase employee productivity with public AI: engineers using it with coding assistants, marketers using AI writing tools, finance creating spreadsheet formulas with AI.

Leo Laporte [02:13:04]:
And it's incredible. It's a revolution. Companies are automating workflows for operational efficiency across individuals and teams. They're embedding AI into applications and services that are both customer- and partner-facing. Ultimately, AI is helping companies move faster in the market and gain a big competitive advantage. But it's not enough just to say, let's use it. Companies really have to think about how to protect their private and public use of AI and, of course, how to defend against these powerful AI attacks. That's what Jeff Simon thought about and solved with Zscaler.

Leo Laporte [02:13:45]:
He's a senior vice president and chief security officer at T-Mobile. He said, and this is a direct quote, Zscaler's fundamental difference in the technology and SaaS space is that it was built from the ground up to be a zero trust network access solution, which was the main outcome we were looking to drive. Now it's me talking: this is zero trust done right, done so that it's easy for you to use, and it protects you. See, the problem is, for years, decades, we've been using perimeter defenses, the traditional firewalls, punching holes in them with VPNs, which then gives you a public-facing IP, and suddenly you've got an attack surface that hackers are using AI to pound on. Frankly, these perimeter defenses are no match in this AI era. It's time for a more modern approach with Zscaler's comprehensive zero trust architecture plus AI. It does both.

Leo Laporte [02:14:39]:
It ensures safe public AI productivity, it protects the integrity of private AI, and it stops AI-powered attacks. It's kind of an amazing tool. Thrive in the AI era with Zscaler Zero Trust plus AI to stay ahead of the competition and remain resilient even as threats and risks evolve. Learn more at zscaler.com/security. That's zscaler.com/security. We thank them so much for supporting This Week in Tech. And I thank you, Alex Kantrowitz, for filling in for me. It's been a great panel. Keep it going.

Leo Laporte [02:15:14]:
I'll be back in a minute, but I'm enjoying it, TWiT. Keep going, keep going, guys. You're on.

Alex Kantrowitz [02:15:21]:
Thank you, Leo. Well, we got you, and we're going to have, I think, a pretty heated and interesting next block here. We're going to talk about AI and jobs. We have new data, we have an interesting Substack post, and we'll cover it all. So just to reintroduce our panel, thank you all for being with us today. Ari Paparo is here. He is the host of the Marketecture podcast, founder and chairman of Marketecture Media, and also the author of Yield: How Google Bought, Built and Bullied Its Way to Advertising Dominance. Dan Shipper is here.

Alex Kantrowitz [02:15:52]:
He is the CEO of Every and the host of the AI and I podcast. Brian McCullough is here. He is the host of the Tech Brew Ride Home show under the Morning Brew umbrella. What a great panel we have. I knew this was going to be good. I've assembled a team, folks. We have a great group, very brilliant folks all around, with differing perspectives that have helped us draw out some interesting contrasts and comparisons.

Alex Kantrowitz [02:16:26]:
And I love it when we don't all agree. So thank you all for being here and making your points so eloquently. Now let's continue to do it, because there's been a long conversation about AI and jobs, and whether AI is causing what seems to be a slowdown in hiring, especially for entry-level workers. And I want to start in a sort of untraditional place, which is with a post that I read on Substack this week, not about AI, but about the corporate job itself. The headline is "The Death of the Corporate Job." Let me just read a bit to you and then we'll get your reactions. Last week I had coffee with someone who works at a big consulting firm.

Alex Kantrowitz [02:17:07]:
She spent 20 minutes explaining her role to me, not because it was complex, but because she was trying to convince herself it existed. I facilitate shared stakeholder alignment across cross-functional work streams, she said, then laughed. I genuinely don't know what that means anymore. She's not alone. I keep meeting people who describe their jobs using words they'd never use in normal conversation. They attend meetings about meetings. They create PowerPoints that no one reads, which get shared in emails no one opens, which generate tasks that don't need doing. The strangest part? Everyone knows. When you get people alone after work, maybe after they've had time to decompress, they'll admit it.

Alex Kantrowitz [02:17:44]:
Their job is basically elaborate performance art. They're professional email forwarders. They're human middleware between systems that could probably talk directly to each other. Let me just throw that out there to kick off our "Is AI really taking our jobs?" discussion. Is there any truth to this? Is it just that a lot of corporate jobs follow this path, where maybe there's 10% that really needs doing, but a lot of it is forwarding emails and making PowerPoints that no one reads? Ari, I feel like you've been in a big company. What is your perspective here?

Ari Paparo [02:18:26]:
Yeah, I'm pretty cynical, as I'm an entrepreneur. So when I get shoved into a big company, as I was after I sold my company, but just in general, it does feel like there are a lot of people whose jobs would disappear if you had someone with a vision, or someone a little bit smarter, above them in the hierarchy who could get things done. And I think this is part of it. You remember the fad about two years ago for founder mode, when Elon took over X and stuff like that? I think there's a little truth to that, because as a founder you really can get things done by just saying, no, no, no, that's stupid, do it this way. In a corporation, you might need to hire a consulting company and pay them $500,000 to do that exact same thing, because you need to convince all these people who are uninformed about the subject matter about the right way of doing things, and the only way to do that is with a hundred-slide PowerPoint no one reads.

Ari Paparo [02:19:22]:
And it's very frustrating. But that said, it's not everybody. It's not every circumstance. But there are definitely these pockets of jobs that shouldn't exist and workflows that shouldn't exist and companies that should be able to make decisions if they were empowered to do so.

Alex Kantrowitz [02:19:41]:
Brian, do you think that companies, instead of using AI to replace work, have just used AI as a signal to themselves to say, let's actually look at what people are doing in these jobs, and then realized that maybe they didn't need to hire so much?

Brian McCullough [02:19:58]:
Say that again. Did I come off as being anti-AI throughout all this, Alex? Is that what you said earlier?

Alex Kantrowitz [02:20:05]:
No, no, I don't think so. You know, I feel bad, because I put people on different sides because I'm trying to gin up conflict for, yes, our ratings here. No, I'm just kidding. It is good to have some contrast. Well, no, I'd say skeptical of certain areas, for sure.

Brian McCullough [02:20:21]:
Because again, listen, you know I have a fund that invests in AI startups. Let me double down on what I said earlier, which is that with AI in this current generation, with what the technology is, the thing that it's best at is obviating the drudge work, the menial tasks. Right? So I'll give you an example. If you reach a certain age, you have to go in for a colonoscopy, and you have to be put under sedation and then come out of sedation. When that happened to me for the first time recently, I was in there for an hour coming out of the anesthesia.

Brian McCullough [02:21:05]:
So you're in there for two hours, and I could hear all of the nurses and doctors and RNs in the wing I was in who had helped me throughout this entire day. And what are they doing for two hours while I'm coming to? They're calling people to tell them. That is a funny joke, Ari, but I'll let Alex put that in. They're calling people, like they did me the week before, to prepare them for the procedure. So, one person after another: Hello, Mrs.

Brian McCullough [02:21:36]:
So-and-so. Hello, Mr. So-and-so. Here's what you need to do for your procedure. These are people who went to school, who got degrees, who did whatever, and what they are doing over and over and over again is just calling people and reading from a script. And I was coming to and saying, this will be done by AI within three years. And then the second thing I thought was, in the same...

Alex Kantrowitz [02:22:01]:
I just need to pause there. I love that this is your reaction when you wake up from a colonoscopy; the first thing you're thinking is, what can AI automate? I am a new man. I'm looking at things with fresh eyes.

Dan Shipper [02:22:12]:
The first thing you say is, give me a mic, I've got something to say.

Brian McCullough [02:22:17]:
The other thing I thought, and Alex, this is true, or maybe this was just the chemicals in my head, was that in the same way that, if it bears out that self-driving cars are better than humans at scale, there will come a day when things like that flip. Why are the nurses and the doctors making those calls? Because the insurance companies are requiring them to do it for liability purposes: well, we called them and we told them how to prepare for the thing, or whatever. But that's still a human doing it. And the reason that in three years it won't be a human doing it, it'll be AI, is because I bet that AI as a bot will be better in aggregate than humans are. And so then it will be, well, you will get sued if you didn't have an AI call the patient and prepare them for their procedure.

Alex Kantrowitz [02:23:05]:
I don't know about that. I mean, we have a really good comment in the TWiT Discord here, which says they're reading from a script, but they're also responding to questions that a patient may have. So there is that. It's not only "I'm telling you what to do." I mean, that could be.

Brian McCullough [02:23:19]:
No, but, but the fact that you're.

Alex Kantrowitz [02:23:20]:
Getting a call from a person is actually. That is the care part.

Brian McCullough [02:23:24]:
But is the bot going to be better at responding to a person than somebody, an RN, who is stressed out or just going through the motions or whatever? Like, if we are getting to the point where the bots are more responsive and better at being human on that rote sort of level... Anyway, sorry, I don't know.

Alex Kantrowitz [02:23:46]:
I'm starting to feel like I need a colonoscopy so I can have a similar eureka moment. Dan. All right, we'll go to you.

Dan Shipper [02:23:55]:
I don't think you want a Eureka when you get a colonoscopy. You only want the opposite.

Alex Kantrowitz [02:24:00]:
Look at how Brian's life has changed.

Dan Shipper [02:24:06]:
Where do I even start here? I feel like we've been on a real journey. Let's talk about AI and jobs generally. I think there's this really funny thing, which is that if you read the headlines, you're told it's being deployed in companies and it's actually not that useful, but also that companies are not hiring junior people because it's so useful that they don't need junior people. It's really, really hard to know what's causing what, and I think we should always take with a grain of salt those kinds of claims that X is definitely causing Y.

Dan Shipper [02:24:47]:
It may be true that middle managers and CEOs are looking at this new technology and saying, hey, let's pump the brakes for a second, and maybe we can get more done without hiring these junior people. And if that's happening, it's a gigantic mistake. But I think there are plenty of other reasons why hiring might be slower. It's really possible we're going into a recession. It obviously always depends on what sector and what part of the economy we're talking about, so it's really hard to answer that question at a general level. But I will say that I really don't think that, long term, this is going to be a worry specifically for junior people. If anyone on this panel, or anyone listening, has a company and has seen what a 23-to-25-year-old can do with ChatGPT, it's wild.

Dan Shipper [02:25:48]:
I have had several people who work at Every, in writing, in coding, in design, who would not have been able to do the job, and would not have been promoted, had they not had access to this tool. And what's incredible about young people who use ChatGPT is they record everything that you say and they never make the same mistake twice. The minute that managers in the economy realize that kids like that can perform at that level, I think the narrative is going to flip a lot to what happens to middle managers. And I think that's probably the more interesting longer-term question: what happens in an economy that's really AI-pilled, that's going AI from the ground up, where everyone in the workforce is using it. Back to your earlier point about whether people do real work: one of the interesting things about big companies is that the reason they are like that is that in order to get a lot done, you need to coordinate between lots of different people. And the more coordination there is, the more politics and weird stuff happens, where you're just sort of a bureaucrat.

Dan Shipper [02:27:11]:
And I think AI will allow us to have smaller companies that require fewer people and therefore less coordination. And that will affect people who are used to being middle managers at gigantic companies.

Alex Kantrowitz [02:27:27]:
I just want to throw this out there, just for the sake of, I don't even want to call it pushback. I don't know if I fully believe in it, but this is one of the more wild articles that I've read in a long time. Anyway, let's just talk about it. Suzy Welch writes in the Wall Street Journal, "Is Gen Z unemployable?" She writes that a mere 2% of Gen Z members hold the values that companies want most in new hires, namely achievement, learning and an unbridled desire for work. She said that they asked employers to identify the values they desired in new employees. Achievement, the value to achieve, is number one. That came in at number 11 for Gen Z.

Alex Kantrowitz [02:28:12]:
61% wish they had less of it in their lives. Next for hiring managers was scope, which reflects the desire for learning, action and stimulation. That ranks 10th for Gen Z. And the third quality hiring managers want, number three, is work centrism, the desire to work for work's sake. And that's ninth for Gen Z.

Dan Shipper [02:28:34]:
What ranked highest?

Alex Kantrowitz [02:28:35]:
I am looking in the article and I don't know what the number one thing for Gen Z was, but maybe it was, I don't know, chilling or being on YouTube. Doing YouTube, which I buy; I completely encourage that. Do these numbers ring true? What do you think? Does this accurately describe Gen Z? You're giving a skeptical look, so let's go to you.

Ari Paparo [02:29:02]:
I have two Gen Z-ish children, so I have a little perspective. As a generation, they're a little odd. They're the last generation that was partially brought up on the non-Internet; maybe millennials have that honor. So I feel like they are a little grounded in reality, but also warped by the digital world. I think there was a study that said their number one career aspiration, when they're interviewed about it, was to be a YouTube influencer. That's not a great sign. I think the COVID experience, and I'm not talking about my kids.

Ari Paparo [02:29:45]:
I'm talking about the generation. The COVID experience really did some damage to that generation's well-being around how to work with groups, how to socialize, a lot of other things. So I think we're in for a bit of an adventure as they enter the workforce. That's as much as I'll give here from personal experience.

Alex Kantrowitz [02:30:06]:
Brian, your thoughts?

Brian McCullough [02:30:08]:
I don't buy it at all, because I'm old enough to know that the baby boomers were considered unemployable and were going to burn down society. Oh, wait, maybe that happened. But as a Gen Xer, we were unemployable and couldn't work. And the same thing was said for... look, everyone says the same thing about the kids coming up: they're terrible and unemployable, and they don't know what to do. And guess what? They also know what the world is going to be like more than we do.

Alex Kantrowitz [02:30:40]:
Dan, let's go to you, because you talked about basically the opposite of what the study found.

Dan Shipper [02:30:46]:
Yeah, I mean, I agree with what everyone has said, and I think a way to think about it is that Gen Zers come from a slightly different culture. Anytime you're interacting with someone from a slightly different culture who's supposed to be from your culture, it can be a little bit jarring. They have slightly different ways of thinking and working, and different communication styles. And if you're an older person talking to them, I've had experiences like that where I'm just like, I don't get this, I would never have done that at your age. But people were saying that to me when I was that age. So I feel like...

Dan Shipper [02:31:31]:
I think, Ari, what you're saying about COVID is true, and the different level of socialization, because you're on your devices at home instead of in school, has probably had an impact. But I really think the kids are going to be all right. If anything, the thing that makes me feel like I don't understand them even more is that a lot of them seem to be very anti-AI. And for me, I'm like, this is the coolest opportunity for you. I don't understand why you wouldn't just want to use this all the time. Because when I was in high school or in college, I mean, I was a nerd.

Dan Shipper [02:32:17]:
But the idea of starting a company, using an iPhone, getting all this new tech, was really cool. It was amazing. It was awesome. And now there's a sort of big sentiment against it. So it's taken me a little while to understand that, and I still don't think I understand them totally, because it's a whole generation and there are a lot of individual people who are totally different. But it's taken me a little bit of time to understand that if you're being told that all of the things you thought you had to do in order to be successful are going to change, that's going to be kind of scary.

Dan Shipper [02:32:50]:
You were told, I need to go to college and I'm going to get a job doing XYZ, and now it's, college is a waste of money, you can get everything you need from college from ChatGPT, and by the way, that job you thought you were going to get after college might not exist. That would be kind of scary. But I don't think that will last, and I think they will take this and run with it in a way that I can't predict. And that feels exciting to me.

Alex Kantrowitz [02:33:16]:
I really respect the fact that work centrism is at the bottom of their list. The fact that recruiters are telling these researchers, the thing I really need is people who just like to work for work's sake. To me, and I work hard, but the idea of working for work's sake, making work central as opposed to working for an ideal or a goal, I don't understand how that's a thing recruiters value. I mean, maybe you just want workaholics, but I respect that Gen Z is saying, you know what? Not for me.

Dan Shipper [02:33:54]:
I'll also add... actually, go ahead, Ari.

Ari Paparo [02:33:57]:
I'm just going to add that if you ask Gen Z about Gen Alpha, the generation after them, they think they're lunatics. Gen Z people are really scared of the Gen Alpha people. What scares them? That they're just basically 100% online. They think memes are reality. They've lost touch with the real world.

Alex Kantrowitz [02:34:20]:
And they live now as well.

Ari Paparo [02:34:22]:
They live in this netherworld of second- or third-generation memes that have nothing to do with, you know, actual real-life existence.

Dan Shipper [02:34:34]:
I wonder what would happen if you went to someone from the 1850s and told them about how we live. I feel like they would say, definitely not in the real world, definitely not in the real world. And they would be right to some extent. But, you know, the world changes constantly, and technology is...

Alex Kantrowitz [02:34:55]:
Part of the world, much less dysentery.

Dan Shipper [02:34:57]:
Which we all appreciate. That is true.

Alex Kantrowitz [02:35:00]:
Brian, any final word before I bring in some more data here?

Brian McCullough [02:35:03]:
No, just that I have two Alphas and they are feral, crazy people, but I love them. But also, again, I will always have faith, because just because I don't understand the kids coming up today doesn't mean they won't figure it out. The old people didn't know what I was into or understand me either. And look, that's the world turning.

Alex Kantrowitz [02:35:27]:
One bit of data before we go to break. This is, I think, from Yahoo Finance: top economists and Jerome Powell agree that Gen Z's hiring nightmare is real, and it's not about AI eating entry-level jobs. The dramatic rise in unemployment among Americans under 25, especially recent graduates, has become one of the most troubling economic headlines of 2025. For many Gen Z workers, the struggle to land a job can feel isolating and fuel self-doubt. And that frustration recently got some high-level validation. Federal Reserve Chair Jerome Powell echoed economists' concern about the cooling labor market. He says you have a low-firing, low-hiring environment, and that it's harder than ever for young job seekers to break in.

Alex Kantrowitz [02:36:13]:
But Powell, on AI, said AI might be part of the story, but the main drivers are a broadly slowed economy and hiring restraint. I think it was actually nice to hear him chime in on this and give us some insight: hey, listen, AI might be part of it, but the headlines blaming AI, saying AI is mass-automating jobs, are nowhere near there. Any reactions?

Dan Shipper [02:36:38]:
I agree with that, and I think it is probably true that the economy is slowing. Just talking to friends who run any kind of real business, making physical stuff that they have to ship from anywhere, the tariffs, for example, have a real impact. It's really hard. It's not even just the actual tariffs; it's just hard to plan because you don't know when something new is going to drop. So it's hard to know who to hire and how to hire and all that kind of stuff. And I think that trickles down in a lot of different ways, and I would bet it has much more to do with general economic conditions than AI. So I think Powell's right.

Alex Kantrowitz [02:37:19]:
I just want to make. Yeah, go ahead, Ari, go ahead.

Ari Paparo [02:37:22]:
I think one underappreciated thing is the collapse in the market for computer science majors. There was a shortage of engineers for so many years, and it was considered sort of the easy way to leave college with a six-figure salary; people were competing for internships at Google and whatnot. And the market has shrunk enormously. The whole structure of the market for recent grads had shrunk before AI, and now AI is making it worse, such that really only the elite CS degrees matter and ordinary developers are having a very hard time getting jobs. It might not be a huge absolute number of people in the economy, but these were very high-paid jobs, often going to kids from wealthy families or recent immigrant families, and that probably has a bigger effect psychologically on this question of whether Gen Z can get jobs.

Dan Shipper [02:38:14]:
I would guess. Oh, do you need to go?

Alex Kantrowitz [02:38:17]:
No, no, go ahead, Dan.

Dan Shipper [02:38:19]:
I would guess so. I don't have all the data, but I do think that the traditional software engineering job, where you learn what a pointer is or you learn how to write Python or whatever and you can just get a six-figure job, is going to change a lot. The people who come out with CS degrees right now and adopt AI to code are going to be dangerous in a way that their predecessors never were. I would bet that if you're coming out of college with a CS degree and you're trying to make it on just the CS degree, that's not going to work very well. But if you spend three or four months getting really good at Claude Code, the market for your expertise is going to be really high. So there's all this debate about whether you should study CS or not, and I actually think that being a programmer and learning how to program is super, super valuable even in a world where AI is doing a lot of coding, because people who can go down into the stack and have a little bit of understanding of what's going on underneath are just going to be needed and are much more dangerous than people who can't. I would expect that narrative to be different in a couple of years, but it will be a different reason to get a CS degree, and the things you need to learn will be different.

Alex Kantrowitz [02:39:42]:
I just want to say one last thing before we toss to break. I am very underwhelmed by the measurement and the data in this broader discussion. Not on this show, but in the way that society has this discussion. First of all, I think we have way too many conversations where AI automates tasks and people say those are jobs. I've seen it repeated in front of millions of people that if AI can automate 30% of tasks, then AI can automate 30% of jobs. I think we've all discussed that that's not true. And even if AI does automate a job.

Alex Kantrowitz [02:40:16]:
Right. The question is, what does the company do with that person? And I think the overwhelming evidence is that most good companies don't just fire a high performer, or even a moderate performer, if they think there are other things they can put them toward. This entire conversation is based on what I think is a myth, and that myth is that CEOs are fine doing what they're doing today, so if they could automate, they would just take more profitability as opposed to doing anything else on their roadmap. Most CEOs are ambitious. That's why they're in the job. And they're not content with what they're doing today.

Alex Kantrowitz [02:40:49]:
They're thinking about competitors, they're thinking about new initiatives. So this idea that a CEO, just because they can, let's say, automate one function, which we know AI can't do today, would therefore fire someone, as opposed to reallocating them or putting them on the project they've been dying to do but haven't had the resources for. To me, that's ridiculous. And that's where I think these conversations in the broader media really fall short. Okay, end rant. We're joined again by Brian McCullough, the host of the Tech Brew Ride Home podcast.

Alex Kantrowitz [02:41:24]:
Dan Shipper is also here, the CEO of everything. And the host of AI and I, which is a great podcast name. And Ari Paparo is here.

Dan Shipper [02:41:32]:
I would love to be the CEO of everything.

Alex Kantrowitz [02:41:34]:
Oh, did I call it everything?

Dan Shipper [02:41:36]:
You did. Everything.

Alex Kantrowitz [02:41:37]:
CEO of all of it. Hype or true: Dan, CEO of everything. We can all agree that that's hype. CEO of Every, the host of the AI and I podcast. And Ari Paparo is here, the host of the Marketecture podcast, and also the author of Yield: How Google Bought, Built and Bullied Its Way to Advertising Dominance.

Alex Kantrowitz [02:41:55]:
You can get it from your bookstore of choice. All right, we'll be back with a shorter last segment where we're going to have, I was going to say some fun, but we've been having fun all the way through. Let's have even more on the other side of this break.

Leo Laporte [02:42:07]:
All right, one more time. Thank you, Alex, for letting me interrupt. This episode of This Week in Tech is brought to you by Starlight Hyperlift. I'm really happy to tell you about this. We've been talking about Spaceship for a while. I'm really kind of a fan of Spaceship. They're a domain and web platform that simplifies choosing, purchasing and managing domain names and web products, with domain names at below-market prices. It's a really big, modern platform with a modern interface, but they also do hosting, and they've just launched something new, which is fantastic.

Leo Laporte [02:42:41]:
It's called Starlight Hyperlift. It's their new cloud deployment platform for launching containerized apps with zero infrastructure headaches. You know, anybody who's working on minimum viable products, MVPs, or maybe vibe coding at home or in their spare time, or any enterprise that wants to try out code, has been trying this idea of containerized apps. But imagine going from code to cloud with GitHub-based deployments, real-time logs, and pay-as-you-go pricing: no servers, no YAML files, no DevOps. This is incredible. Just your project in the cloud in seconds, and you pay as you go. You don't pay for more than you need. Spaceship.

Leo Laporte [02:43:30]:
Of course, I'm not surprised they have been so innovative with Hyperlift. Spaceship takes the same philosophy that has made it an incredible domain registrar and brings it to cloud-native deployment. It's made for devs, for indie hackers, for innovators who need to test fast, iterate faster and ship smarter. Once you try it, you will never go back. Go to spaceship.com/twit to find out more about Starlight Hyperlift and custom deals on Spaceship products. That's spaceship.com/twit, spaceship.com/twit. Just check it out. I think you'll be very impressed.

Leo Laporte [02:44:12]:
Back now to Alex Kantrowitz and This Week in Tech.

Alex Kantrowitz [02:44:17]:
Thank you very much, Leo. It's great to be back here with our terrific panel. Ari Paparo of Marketecture: Ari, tell us a little bit more about the podcast. Where can people find it?

Ari Paparo [02:44:28]:
Sure. The Marketecture podcast is available wherever you listen to podcasts and on YouTube. It's about ad tech and advertising and how it's evolving, at a very granular level. We have guests ranging from CEOs of public companies to practitioners in the advertising market. We talk about privacy, all kinds of stuff like that. And then I have a newsletter at Marketecture TV where you can read more about these things.

Alex Kantrowitz [02:44:54]:
Great. And Brian, why don't you tell us a little bit more about the Tech...

Brian McCullough [02:44:57]:
Brew Ride Home. Daily, 15 minutes long, it catches you up on what you missed today in the world of tech. Originally it was your ride home, because before COVID it came out every afternoon for your commute home. But now I just put it out when I'm done with it, so it usually comes out around noon.

Alex Kantrowitz [02:45:17]:
Love it. Yeah, it was the ride home, then it kind of became, okay, your daily walk around the block for sanity.

TWiT.tv [02:45:23]:
Yeah.

Brian McCullough [02:45:23]:
Or people in New Zealand or whatever say, hey, it's my shower podcast, so.

Alex Kantrowitz [02:45:29]:
All right. And Dan, one more time. AI and I, the smartest people.

Dan Shipper [02:45:33]:
How the smartest people in the world live and work with AI, and how to use it to live a better life and build a better business.

Alex Kantrowitz [02:45:41]:
Love it. And again, my show is Big Technology Podcast. We've had Brian on, or you've been on a version of my show. Dan, you haven't been on yet, but we're gonna have to bring you on.

Dan Shipper [02:45:50]:
Haven't invited me. Bring me on. I'm there.

Alex Kantrowitz [02:45:53]:
I know, we started here with TWiT, so we should definitely have you on Big Technology. Also, I just want to thank Benito. Benito, are you around? Are you able to put yourself on camera? Either way, thank you again for all the support. Hey, Benito, how are you, man?

Benito [02:46:07]:
I'm doing all right.

Benito [02:46:08]:
How are you all doing? Oh, I guess my camera's not working.

Alex Kantrowitz [02:46:12]:
I just wanna publicly shout out Benito, because what you haven't seen behind the scenes is my frantic DMs to him: How are we doing? How is our pacing? Is it time for an ad break? We wouldn't have a show without you, Benito. Thank you again for being here today.

Benito [02:46:29]:
No problem. Thank you for hosting.

Alex Kantrowitz [02:46:32]:
My pleasure, my pleasure. Yeah, this is really fun. I'm enjoying this. Thank you to Leo for letting me do it. Okay, we're in our etc. fun block. This one will be shorter, and then we're going to send you all on your way for the week. I don't know what to start with.

Alex Kantrowitz [02:46:45]:
Peter Thiel, the Antichrist? Let's call it Peter Thiel's interest in the Antichrist. Don't want to get sued. Let's talk first about friend.com. This is from Adweek: AI startup Friend bets on foes with a $1 million subway campaign. The founder of Friend is betting on making enemies. Avi Schiffmann, CEO of AI companion startup Friend, poured more than $1 million into a New York City subway takeover, plastering stations and cars across all five boroughs with stark white posters that practically beg to be defaced.

Alex Kantrowitz [02:47:16]:
This is the world's first major AI campaign: we beat OpenAI and Anthropic and all these other companies to the punch. Some of the copy is really unbelievable. Friend, noun: someone who listens, responds, and supports you. It's all in service of the Friend device. The company was founded in 2023. The Friend device listens to conversations without being prompted, tracking interactions and offering commentary on daily activities. Tapping it prompts responses through a companion app, with replies delivered by voice or text.

Alex Kantrowitz [02:47:51]:
Do we have any Friend users here on the panel? What do we think about the Friend campaign?

Dan Shipper [02:47:56]:
I'm friends with Friend users.

Ari Paparo [02:48:01]:
That means you've been recorded whether you liked it or not. This is chaos marketing. It's the newest thing. It's the same as Cluely, where there's so much saturation in the market for attention that marketers are doing things so insane that they break through the clutter. An example being the Kraft macaroni and cheese ice cream, where they did a collab with, I forget who they did a collab with, where they do something that's intended to get attention. They used to call it a PR stunt; now it has a fancy name. Cluely, with their videos about how they're...

Ari Paparo [02:48:37]:
I don't even know, they have strippers in the office and stuff like that. And I think this is the same genre, where it's not the message, it's the fact that the message was out there. I think in this case it's not a great one, because the product is kind of offensive as well and probably unlikely to get a lot of good attention. But, you know, you've got to spend your VC money somehow.

Alex Kantrowitz [02:49:01]:
Are we playing into the trap now that we're talking about it here on TWiT? Is this exactly...

Dan Shipper [02:49:06]:
He got what he wanted.

Alex Kantrowitz [02:49:08]:
Is this going to be in the PowerPoint where they, like, show the results of the campaign?

Dan Shipper [02:49:12]:
I think it's always a shame when you see a media founder not pursue their true calling. I think he would be an incredible media founder. And I think this has been going on forever. This is how Virgin got built. You know, Richard Branson did the same stuff.

Dan Shipper [02:49:29]:
It's just with a different coat of paint. But the difference is that it only works to do this when your product's working. And I think it's pretty clear that they don't have product-market fit with the current version of the product. I think something like this will exist and could be interesting, but this is not it. I think he's a really good marketer, but you're sort of putting the cart before the horse.

Dan Shipper [02:49:56]:
If you're pouring tons of attention onto a product that people don't retain with. So, A for effort, but the product's just not good enough.

Alex Kantrowitz [02:50:08]:
Yeah, I'd never heard the term chaos marketing. Oh, Brian, go ahead.

Brian McCullough [02:50:11]:
He does say that he's almost out of money. He says he spent all his money on this, so it's a huge gamble. So it might not matter. But to do the thing that we've been doing, I'll bring it back to the dot-com bubble: there was one startup that raised $30 million and spent it all on a Vegas party where they got, I think, the Dixie Chicks or whatever. So I would say that we did it better back in the day, when you spent it all on a party as opposed to subway ads. So get better, Gen Z.

Alex Kantrowitz [02:50:47]:
Does anybody here have any interest in using the Friend? I mean, we were making light of it, but it is a thing; even Sam Altman and Jony Ive are working on a potential device where you kind of have the AI with you all the time and it records, maybe it records what you do and gives you insights. So are we selling Friend short too early? Should we be friendlier to Friend?

Ari Paparo [02:51:11]:
These people need to get out of San Francisco. I mean, this is the most San Francisco-coded thing in the world: the idea that I need to improve my life so much by measuring everything and recording everything. You take that to a place like New York and it's just offensive in so many ways to the social contract. And I feel like this happens over and over again, with a certain kind of subculture trying to use technology to push forward the way they look at the world.

Dan Shipper [02:51:45]:
Coming to you live from New York: I record all my meetings. I don't wear a pendant, but I probably would. I bet there's a form factor that would work for me. I think the way he's branding it is intentionally really polarizing, but I think someone will figure out how to do it in a way that actually feels natural and normal.

Dan Shipper [02:52:09]:
I will say, if anyone here uses Granola, it's really quickly become just a sort of standard thing to record all your meetings. And, for example, the Facebook glasses, the Meta glasses: there are people in my office who wear those all the time, and they record stuff and it doesn't matter.

Dan Shipper [02:52:30]:
And so I think the recording thing is interesting, but I think we will find social conventions that work for us. And the companionship thing, I mean, I use ChatGPT sort of like that. Not really in the same way; I have other friends, but I talk to ChatGPT.

Dan Shipper [02:52:52]:
I talk to ChatGPT a lot about a lot of stuff that I probably wouldn't be able to talk about with anybody else in my life. The way I think about it is, it's my smartest friend. It's into everything that I'm into, and that's actually helpful. It's fun. I would like something that does that even more.

Alex Kantrowitz [02:53:12]:
Brian, would you use something like this, a Friend pendant or always-on Meta glasses?

Brian McCullough [02:53:20]:
Glasses? Possibly. But I need plausible deniability that I'm not just being a creeper. So until social mores change, I need the glasses and plausible deniability.

Alex Kantrowitz [02:53:32]:
Yeah, I was just in the gym today, walking through the locker room, and there's a sign: no cameras, please. And it's like, well, I'm not going to go in with my Kodak. My phone is in my pocket, but I sure as hell can't walk in here with any Meta glasses, even if they're an always-on computer. Go ahead, Ari.

Ari Paparo [02:53:48]:
Yeah, the gym locker room is the worst-case scenario here. But I think what Dan said is totally right, which is that at work, recording meetings is a great use case. And I would actually go further: I think we'll see a trend, I don't know how soon, of companies requiring it, the same way cops nowadays have to have body cams. You may have a situation where employees in the office must have everything recorded at all times and analyzed by AI. I don't think we're that far away from that.

Alex Kantrowitz [02:54:20]:
It's going to be very, very creepy, because you'll have CEOs... I've been in companies where the CEO gets all the emails in the company, and you get called in. I think that could happen easily in this environment, and I wouldn't like it very much. All right, let's talk about Peter Thiel's new quest to have everyone think more about the Antichrist. This is from the Wall Street Journal: Peter Thiel wants everyone to think more about the end of the world. Of course. He's the billionaire investor in data, AI, defense and weapons.

Alex Kantrowitz [02:54:49]:
For about a year now, Thiel has been publicly laying out his understanding of biblical prophecies and the potential for the rapid advance of technology to bring about an apocalyptic future. In a lecture, he encouraged an audience to continue working toward scientific progress, whether in artificial intelligence or other forms of technology; fearing it, regulating it, or opposing technological progress would hasten the coming of the Antichrist. This is how Thiel says the end of the world might happen: existential risks will present themselves in the form of nuclear war, environmental disaster, dangerously engineered bioweapons, and even autonomous killer robots guided by AI. As humans race toward a last battle, the Armageddon, a one-world government will form, promising peace and safety. In Thiel's reckoning, this totalitarian, authoritarian regime with real teeth and real power will be the coming of the modern-day Antichrist, a figure defined in Christian teachings as the personal opponent of God who will appear before the world ends.

Alex Kantrowitz [02:55:50]:
I guess, is Thiel saying here that you need unbridled technological progress to fight the Antichrist? If so, is that the right path? Ari, you're nodding your head.

Ari Paparo [02:56:02]:
I have fatigue here. Okay, so let's get the disclaimers out of the way: I'm Jewish, but I went to a Catholic university. I believe his theology goes as follows, which is that humanity is flawed. Jesus died for our sins. The goal of believing, not the goal, but the empowerment of Christianity, is to better yourself, to make yourself more like Christ.

Ari Paparo [02:56:27]:
In a sense, that's not exactly right, but that's how he's expressed it. And the way we're improving humanity is technology; the fullness of humanity is that through technology we become more like Christ. And therefore the Antichrist is Greta Thunberg, who is the representative of being anti-technology. And therefore, if you're following me so far, the Antichrist is going to be an anti-technology zealot. And the only way to stop technology is through autocracy, because technology naturally happens. So you need top-down autocratic governments to stop technology. And so the Antichrist will use government to stop progress.

Ari Paparo [02:57:12]:
That is my understanding of the argument he's making. Thank you very much.

Alex Kantrowitz [02:57:18]:
So what's your take? Are you on board?

Ari Paparo [02:57:20]:
I just love this. This is the best thing that's happened in technology in a long time. This whole conversation's got me going.

Alex Kantrowitz [02:57:27]:
Is the $100 billion Nvidia and OpenAI tie-up a good way to fend off the Antichrist?

Ari Paparo [02:57:35]:
We should ask ChatGPT what it thinks.

Dan Shipper [02:57:38]:
You just have to hand it to him. He's a master of strategy and branding, and it's incredible. I think he believes this, and also it's just masterfully put.

Ari Paparo [02:57:54]:
I love the fact that he believes it. I love the fact that he's talking about it too. I mean, I don't necessarily love this guy; I'm not a fan of his, necessarily. It's just, you know, the audacity and the self-confidence to go out and start talking about this is next level.

Alex Kantrowitz [02:58:10]:
Why would the slowdown of technology usher in the Antichrist's ability to rule in a totalitarian way? That's the point I think he's getting at.

Ari Paparo [02:58:19]:
I think it's the opposite: the Antichrist would be the representative who would take down technology, or try to suppress technology, and the only way that person could be successful would be through, you know, total despotic control over humanity, across countries and whatnot.

Alex Kantrowitz [02:58:36]:
And sorry, but Greta is the one that he's worried about here.

Ari Paparo [02:58:39]:
Yeah, Greta Thunberg is sort of a candidate for possible Antichrist, because she represents everything he's describing: she's against technology, and she also would love to have a global world government suppressing technology.

Alex Kantrowitz [02:58:53]:
You know, Ari, when I asked you to come on the show, I knew there were a lot of really good reasons to have you here: your experience in big tech, your time watching antitrust trials. But I think your explanation of this Antichrist dynamic is really where you brought us home today.

Ari Paparo [02:59:08]:
So when I saw it on the agenda, I was raring to go.

Alex Kantrowitz [02:59:15]:
All right, should we do one last headline and then call it a week? I think this is worth talking about: Apple has a ChatGPT-style chatbot that Mark Gurman says deserves a public release. The company has an internal, fully fledged chatbot app code-named Veritas for submitting queries without the need for voice interaction. The app allows employees to type requests, receive information, and hold back-and-forth conversations like they would with ChatGPT. They are not going to release it to the public, or at least they don't intend to. This, I believe, is a mistake, says Gurman. While the improvements to Siri will bridge some of the gap with AI leaders, services like ChatGPT, Perplexity and Gemini have made it abundantly clear that people want a proper chatbot experience, and Apple, by God, should give it to them.

Alex Kantrowitz [03:00:04]:
I'm summarizing in that last statement. But should they release it, what is it called, Project Veritas, to the masses?

Brian McCullough [03:00:12]:
Did I say this when we were talking about that OpenAI daily check-in thing? That was the other thing about it: this is what Apple should be doing, because the only way OpenAI can give you your daily update is if you plug all of your information into them. And this is what Apple is promising: we'll do it with privacy and stuff like that. Yeah, I agree with Mark, because if they release it three years from now and the ChatGPT thing has been on the market for three years, it's too late.

Alex Kantrowitz [03:00:45]:
Dan, if it was good, if it...

Dan Shipper [03:00:47]:
...was good, I think they would have released it. Yes, I think they should release something, but if it was good, it would be out.

Alex Kantrowitz [03:00:56]:
Ari, you're a former Google guy. Are you Android or iPhone?

Ari Paparo [03:01:01]:
I'm iPhone, but that was exactly going to be my point, which is: how much better does the AI on Android have to be before people are willing to switch phones? And I think we're pretty close already. I mean, Google's running commercials during the NFL constantly that say, you know, has your phone been promising AI for a year? I'm more or less paraphrasing, but I think we're pretty close to where the lack of decent AI on an iPhone is a product decision criterion for a certain subset of people. And that's only going to get worse.

Alex Kantrowitz [03:01:34]:
All right, I think that rounds out our show. We've covered a wide variety of topics. We've talked about the AI investments and whether we're in a bubble. I still don't know whether we're in a bubble, but I appreciate the arguments on both sides. We've talked about the new AI products like Pulse and Vibes, and the creator economy. We've covered AI automation, the TikTok deal or the lack thereof, and whether big tech antitrust is actually having an impact. And then, of course, is Greta really the Antichrist, and why Peter Thiel wants us to think about it.

Alex Kantrowitz [03:02:13]:
So if you're standing up at home, join me in giving our amazing panel a round of applause. You guys were so great today. You really brought it: such fascinating insights and perspectives, and I just learned so much listening to all of you. So thank you so much.

Dan Shipper [03:02:30]:
Thanks for having us.

Ari Paparo [03:02:31]:
Yeah, thanks.

Alex Kantrowitz [03:02:32]:
Okay, I'm just going to run through the podcasts one more time. AI and I from Dan Shipper. Dan, thanks for coming on.

Dan Shipper [03:02:38]:
Thanks for having me.

Alex Kantrowitz [03:02:40]:
The Tech Brew Ride Home from Brian McCullough. Brian, great to see you as always. Thanks for coming on.

Brian McCullough [03:02:46]:
Salute Emoji.

Alex Kantrowitz [03:02:48]:
Okay. And the Marketecture podcast from Ari Paparo.

Ari Paparo [03:02:51]:
Thanks for having me.

Alex Kantrowitz [03:02:53]:
I cannot wait to see what they call the show, what the show title is. We have many good candidates, from Colonoscopy Eureka to AI Antichrist. So I'm sure the good folks at TWiT will find a good show title for us. My name is Alex Kantrowitz. I'm the host of Big Technology Podcast. If you've enjoyed our conversation today, I hope you check out my show as well on your podcast app of choice. And with that, as we're known to say around here at the end of the show, another TWiT is in the can.

TWiT.tv [03:03:26]:
Hey, thanks for tuning in to TWiT, your tech hub for intelligent, thoughtful conversations. If you want to take your experience to the next level and support what we do here at TWiT, say goodbye to ads and say hello to Club TWiT. With Club TWiT, you unlock all our shows ad-free. You also get exclusive members-only content; we do a lot of great programming just for club members. You also get behind-the-scenes access with our TWiT Plus bonus feed and live video streams while we're recording. And don't forget the fantastic members-only Discord.

TWiT.tv [03:03:58]:
It's where passionate tech fans like you and me hang out, swap ideas, and connect directly with all of our hosts. It's my favorite social network, and I think you'll like it too. Club TWiT: it's not just a subscription, it's how you support what we do and become part of the TWiT family. Your membership directly supports the network, helping us stay independent and keep making the shows you love. If you're ready to upgrade your tech podcast experience, head to twit.tv/clubtwit and join us today.

TWiT.tv [03:04:27]:
Thanks for being here and I'll see you in the club.
