Intelligent Machines 865 transcript
Please be advised that this transcript is AI-generated and may not be word-for-word. Time codes refer to the approximate times in the ad-free version of the show.
Leo Laporte [00:00:00]:
It's time for Intelligent Machines. Jeff's here. Paris is here. Our guest, Daniel Miessler, is a security expert with 24 years of experience, but also the YouTube host of Unsupervised Learning and an AI guru. We're going to talk about the new Anthropic model, Mythos. They say it's too dangerous to be released. Is it? We'll find out next. Podcasts you love from people you trust.
Leo Laporte [00:00:26]:
This is TWiT. This is Intelligent. Intelligent Machines with Jeff Jarvis and Paris Martineau. Episode 865, recorded Wednesday, April 8, 2026. Mythic. It's time for Intelligent Machines, the show where we talk about AI, robotics, and all the smart little doodads surrounding us all these days. Paris Martineau is here, investigative reporter for Consumer Reports, and is brightly sunlit in her Brooklyn apartment.
Paris Martineau [00:00:58]:
Yeah, it's getting brighter and brighter here in New York, and that's great for me, a person who needs sunlight to feel happiness, but bad for me, a person who podcasts from 5 to 8pm. Wow.
Leo Laporte [00:01:12]:
Yeah, Paris pulls no punches, Daniel, you'll understand that in a moment. I love it. Also here, Jeff Jarvis, author of The Gutenberg Parenthesis, Magazine, and his new book Hot Type, coming out in July. You can get that all at jeffjarvis.com. And of course, professor emeritus of journalistic innovation at the Craig Newmark Graduate School of Journalism at the City University of New York.
Jeff Jarvis [00:01:35]:
There we go. I thought we'd miss it.
Leo Laporte [00:01:37]:
I, I, you know, I was going to bypass it and I decided to do it.
Jeff Jarvis [00:01:41]:
I sensed, I sensed you were on. You were at a fork in the.
Leo Laporte [00:01:43]:
I was on my way. I was on my way.
Jeff Jarvis [00:01:44]:
Did the right thing in the end.
Leo Laporte [00:01:46]:
Hey, I'm, I'm very excited about our guest this week. I've been watching his YouTube channel for quite some time. I've installed a number of his tools for AI. And I want to thank Larry Gold in our Discord Club for making the connection, because I've been trying to get ahold of Daniel Miessler to get him on the show for some time. Daniel, thrilled to have you. Let me give you a little bit of his CV because it's so interesting. He was an infantryman for the 101st Airborne Division, became an intelligence expert for his battalion, has been a security guru at a number of companies, some you might have heard of, like Apple, Robinhood, HP.
Leo Laporte [00:02:31]:
Unsupervised Learning is the name of his YouTube channel, but also of his company, which advises companies on cybersecurity and AI. And I am thrilled to have Daniel on the show. I've used your PAI for some time with my Claude Code, which is your personal assistant software. Some skills that are really incredible. Your Fabric, which you did some time ago actually, but it's still really, really good. You told me PAI is going to be in version five soon. That's exciting.
Daniel Miessler [00:03:02]:
Yeah. Hopefully this week.
Leo Laporte [00:03:03]:
Yeah, yeah. And let's see, what else? Telos, which is an interesting skill that came along with PAI, which was kind of about exploring your worldview and how you feel about things and stuff. And it was quite a fun exercise to go through that. Daniel, welcome to Intelligent Machines.
Daniel Miessler [00:03:21]:
Thank you for having me. It's a tremendous honor.
Leo Laporte [00:03:24]:
Yeah.
Daniel Miessler [00:03:24]:
I can't remember when I started watching you originally, but it had to be in the 90s, right?
Leo Laporte [00:03:30]:
Yeah, yeah, yeah.
Daniel Miessler [00:03:32]:
It was like actual, actual TV and everything.
Leo Laporte [00:03:34]:
Actual television, yeah.
Daniel Miessler [00:03:36]:
You and Chris Pirillo.
Leo Laporte [00:03:37]:
Yeah, that's right, yeah. On TechTV. Yeah. You've actually been blogging since that time, since 1999. There's a lot of good posts on your blog. Let me start, because there was a very big story. Actually, I was thrilled to know that we were going to get you on today, because yesterday Anthropic kind of threw everything up in the air, scrambled everything, with an announcement of something we knew or had heard or guessed was coming.
Leo Laporte [00:04:04]:
A new model it calls Mythos.
Paris Martineau [00:04:07]:
Great name by the way. I'll just put that out there.
Leo Laporte [00:04:10]:
Wasn't it Capybara internally? Capybara does not carry the same weight. Well, you know what? Next week OpenAI is going to release Spud. So it could be worse.
Paris Martineau [00:04:22]:
Of course they are.
Jeff Jarvis [00:04:23]:
It sits on the couch.
Leo Laporte [00:04:26]:
We don't know how capable Spud would be. But Anthropic has raised some alarms about Mythos. They did release benchmarks that made it look significantly better than what is clearly the premier AI right now, Opus 4.6. In some cases twice as good on software benchmarks; on software engineering benchmarks, 50% better. I mean, these are massive improvements.
Jeff Jarvis [00:04:52]:
And what has Anthropic said about it?
Leo Laporte [00:04:54]:
Well, what Anthropic has said, and, you know, is it, is it marketing? I don't know. Is it real? What they have been doing with it over the last few weeks is setting it on open source software, operating systems, and browsers, looking for zero days, for flaws. They claim to have found more than a thousand severe flaws, including a flaw in OpenBSD that has been around for 27 years. Flaws in, you know, serious CVSS 10 exploits in some very well-known software. And they say if we release this to the public now, the bad guys will use it to take over the world. And so what they've decided to do is release it in a very limited way to some very big companies. It's Project Darkwing.
Jeff Jarvis [00:05:48]:
Glasswing.
Leo Laporte [00:05:48]:
Glasswing. I'm sorry. Which is a moth, right? I think it's a moth. The idea is we're going to let these companies fix their zero days using Mythos before we let anybody else have it.
Jeff Jarvis [00:06:02]:
Are they ever going to. Well, two questions before we get into it. Just quickly, so I understand the background here. One, do they plan to ever release it to the public?
Leo Laporte [00:06:08]:
It's unknown.
Jeff Jarvis [00:06:09]:
And two, is this really good at security because it was trained in that way, or is it just because a model this powerful will inevitably be this good at security, in other words.
Leo Laporte [00:06:20]:
Oh, this is a great question for Daniel, because you posted some of those. Yeah, yeah. So tell us.
Daniel Miessler [00:06:26]:
Yeah, yeah. So I think they have said already that they intend to have some of the features and capabilities come to future models, so future versions of Opus. So they're going to bring some stuff over from Mythos into Opus. They don't have any direct plans right now, I don't think, to release Mythos itself. But to your second question, which I think is super, super important: it is not trained on cybersecurity. It's just a regular model. They're only focusing on the cybersecurity because they're super worried about it, because that's one capability that just produces a visceral impact in us to think about, with the hacking stuff.
Daniel Miessler [00:07:11]:
But also. Yeah, you can actually just go hack stuff with it. But I find it extremely significant that it's just better across the board at work in general. And cybersecurity is just work.
Jeff Jarvis [00:07:24]:
Right.
Daniel Miessler [00:07:25]:
So if we're worried about, like, knowledge workers being replaced, potentially.
Jeff Jarvis [00:07:32]:
Paris.
Daniel Miessler [00:07:32]:
Well, it just got that much better at everything, not just cybersecurity.
Leo Laporte [00:07:37]:
So genuine that this is. This coming from anthropic is not just marketing material. That it is a genuinely.
Paris Martineau [00:07:45]:
It seems like a claim from a company on the precipice of IPOing. It does. I mean, to play devil's advocate.
Leo Laporte [00:07:51]:
Well, that's right. No, that's why you might suspect it. Yeah, no, that's why you might suspect it.
Daniel Miessler [00:07:56]:
I believe it. I've not ever seen them, in terms of morality, I've not seen them sort of misstep. I've believed Dario since the very beginning, that he is actually concerned: everything about the way they set up the company, the way they do their marketing. They were the first to release all these reports showing, like, the ugliness, the things that went wrong. They started this trend, which other companies then started following. I feel like they are morally fairly pure and clear, as much as that can be the case. So I don't have any reason to doubt that the model is actually this good.
Jeff Jarvis [00:08:38]:
Daniel, to follow up on the discussion we just had a minute ago though, because these models leapfrog each other. Is there a ticking time bomb here that says that whether it's Deep Seek or whether it's Google or whoever it is, that when their model gets this powerful it will also be this good at cybersecurity? Ergo, cybersecurity is borked permanently.
Paris Martineau [00:09:00]:
Ergo cyber attacks for everyone.
Jeff Jarvis [00:09:03]:
Yeah. So where does that take us? Just at a, at a, at a high level.
Daniel Miessler [00:09:07]:
Yeah, yeah. Yes. I mean, there's multiple ways that this stuff seeps into everything else. So one is the thing can be distilled as soon as it becomes available or visible in any sort of way. There could be leaks inside the community.
Leo Laporte [00:09:25]:
The distillation would happen from other AI companies. In fact, Anthropic has accused Chinese AI companies, Alibaba and Z.ai and others, of doing exactly that, training their models by asking Claude, using Claude to train their models. So if it's a really good model, it could propagate, in other words, into Qwen and GLM and all these others. So that would be leak number one. What's the other?
Jeff Jarvis [00:09:52]:
So go ahead.
Daniel Miessler [00:09:52]:
Yeah. The others are, I mean, the community of, like, researchers and stuff. It tends to be tight-knit. It seems like what we've seen over the last few years is that whenever a great idea happens, it's supposed to be behind closed doors, like, cordoned off from everyone. But somehow the world learns about it, like, three to six months later. Somehow it's in all the models. Somehow all the competitors seem to have it. So it's almost like it could be like a co-developing, like calculus with Newton and whoever the other guy was, where it just.
Daniel Miessler [00:10:28]:
Right time. Was it Leibniz? Yeah, it's just like the right time for the idea. It could be that, or it could be that they're all going to the same parties and they're all talking. Or it could be security leaks. But either way, what we've seen is that just a few months afterwards, all these tricks start to seep into everywhere else.
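The distillation route Daniel describes, where a competitor trains its own model to mimic a frontier model's outputs, can be sketched numerically. This is a hedged, minimal illustration of the standard teacher-student distillation loss, not any lab's actual pipeline; the logit values are made up for demonstration.

```python
import math

def softmax(logits, temperature=1.0):
    """Convert raw logits into a probability distribution."""
    scaled = [x / temperature for x in logits]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    """KL divergence between the teacher's softened output distribution
    and the student's. A student trained to drive this toward zero ends
    up mimicking the teacher's behavior without ever seeing its weights,
    which is why API access alone is enough to 'leak' capability."""
    p = softmax(teacher_logits, temperature)  # teacher's soft targets
    q = softmax(student_logits, temperature)
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))

teacher = [4.0, 1.0, 0.5]
print(distillation_loss(teacher, [4.0, 1.0, 0.5]))  # 0.0: perfect mimic
print(distillation_loss(teacher, [0.5, 1.0, 4.0]))  # positive: mismatch
```

In a real setup the "teacher logits" would be replaced by sampled completions from the frontier model's API, but the objective is the same: minimize the divergence between the two models' output distributions.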
Leo Laporte [00:10:49]:
There doesn't seem to be much secret sauce in AI, partly because it's all based on the same, you know, five papers. And everybody who's working with transformers basically understands what's going on. Do you think there's any secret sauce in Mythos? Is it beyond RL? Is it beyond massive compute?
Daniel Miessler [00:11:08]:
I mean, I don't know for sure. I would say there probably is secret sauce. I mean, I think the trick with the secret sauce, though, is it'll be something weird. Like, you know, if you actually just transpose these numbers, right, and you just, like, do three iterations instead of five.
Leo Laporte [00:11:27]:
Right.
Daniel Miessler [00:11:27]:
Do five instead of three. That is, like, a few words that someone can say to someone else. And they get on Slack and they text their friends, and suddenly that's being done in their lab, and suddenly that comes out in their thing. So it's a combination of smaller little tricks, and they accumulate into these big advantages. That's the way I currently understand it.
Leo Laporte [00:11:49]:
I think there's also a sense among the scientists that are doing this that it should be more open, that no one company should have complete control of this. And now I think if Mythos lives up to its Mythos, there's good reason to think that, I mean, this is too dangerous for one company to control.
Paris Martineau [00:12:12]:
But I think there also could be an argument made that it's like, too dangerous to be dangerous when it's spread around to anyone and everyone.
Leo Laporte [00:12:20]:
I mean, how do we handle that, Daniel?
Paris Martineau [00:12:22]:
How do you suddenly handle, like, if every. Every yokel that could use ChatGPT or Claude could suddenly be in control of a very sophisticated cyber attack?
Daniel Miessler [00:12:34]:
Yeah, exactly. I mean, yeah, it's. It's dangerous for one company to have it. The only thing more dangerous is for every company to have it.
Leo Laporte [00:12:42]:
It's like the atomic bomb.
Daniel Miessler [00:12:44]:
Yeah. Yeah. And the thing I'm actually worried about, and this is what I'm worried about, is an actual event. But there's smaller things that could go really, really crazy and just, like, change everything overnight. Somebody could have an open source model, or claim that they got it from one of the main models. But there could be, let's say there's, you know, heaven forbid, some sort of, like, attack. Let's say an entire apartment building smells a smell, and there's a 911 call and everyone gets flooded out of the building.
Daniel Miessler [00:13:20]:
They all get oxygen outside, and it's like, oh, now there's a rumor it was a terrorist event. It was a chemical attack. It was a biological attack. Then they do some sort of research and they find out there was an AI model involved. In this current climate, how close do you think we are to the government saying AI is now able to create biological attacks, whatever, whatever. Therefore, OpenAI and Anthropic now belong to the government. And Hugging Face is now illegal. And open source models and open source development are now illegal.
Daniel Miessler [00:14:03]:
Like, I don't think we are that far away. We are one news story away. Which, by the way, could be completely true, could be partially true. Or it could just be, kind of, like, someone ran with the story. Yeah. And, like, maybe they searched it and it came back with an answer, but that's not what they actually made. And it could be that the thing that they smelled in there, they tried to combine the chemicals, but it actually was just ammonia.
Daniel Miessler [00:14:29]:
Like, not even dangerous. You know what I mean? But just think of, like, how, how much hype and how much fear this could spawn. And with the current situation, like, politically, government wise, worldwide, I just think there's a very high chance of, you know, things going crazy policy wise.
Leo Laporte [00:14:52]:
It does seem like, if Mythos is as dangerous as Anthropic is claiming, it's actually, we should say, and you said this, it's both. It's incredibly powerful. For security experts, it's a tool for finding zero days. They found thousands already in a few weeks. But the same tool can be used to find zero days for bad guys too.
Leo Laporte [00:15:16]:
I would not be at all surprised if government stepped in and said, we need to control this. And in fact, I think that in a normal situation, we have a little bit of a different kind of government right now. But in a normal situation, I would applaud that. That would be the right thing. The people should control this. Not a single company.
Jeff Jarvis [00:15:35]:
It's a printing press. It's very dangerous. Somebody has to control it. The Pope must own this. No one else can use it.
Leo Laporte [00:15:41]:
Jeff is an expert on the printing press, we should point out. Daniel might be wondering why you're invoking the Pope.
Daniel Miessler [00:15:48]:
Interesting.
Leo Laporte [00:15:49]:
But this is something they said when Gutenberg invented the printing press.
Jeff Jarvis [00:15:53]:
China. So let's say. Let's say that the US reins it in in some sensible way. China is going to go. China's now sitting there saying, hello, where is our Mythos?
Daniel Miessler [00:16:05]:
Yes,
Jeff Jarvis [00:16:09]:
I'm sorry, I come back to this a third time. So fine, it's controlled for now, but once it's out there, and it will be out there, or once somebody else catches up, there's no cybersecurity anymore.
Daniel Miessler [00:16:24]:
Yeah, I, I think what ends up happening is, what it ends up doing is raising the amount of security just worldwide. Because it's kind of like back in the day, you could have open ports, you could have a database sitting on the Internet wide open.
Jeff Jarvis [00:16:44]:
Right.
Daniel Miessler [00:16:44]:
You could have a wide open web server and like, nothing would happen.
Jeff Jarvis [00:16:47]:
Those were the days.
Daniel Miessler [00:16:48]:
Yeah. And then the, the tooling started getting better and better. And pretty soon it would be like within a few hours or a couple of days, you would get compromised. And now it's a matter of seconds. Right. So it's just like, once the attacks are so constant, it'll be kind of like a battlefield. You can't walk out into a battlefield without getting shot. And that's what it'll look like to have any infrastructure publicly at all.
Daniel Miessler [00:17:15]:
So things will just drop offline if they're not highly secure. But the problem is the transition between the current state and then. Right, because that's a nice sort of clean state, clean story. But we've got to go through a whole lot of compromise before then. And unfortunately a lot of those things are, like, power grids and lights and, you know, desalination and stuff like that. So it's like, yeah, there's a lot of critical infrastructure involved.
Leo Laporte [00:17:46]:
We're talking to Daniel Miessler. Am I saying your name right? Daniel Miessler.
Daniel Miessler [00:17:49]:
Yeah, that's right.
Leo Laporte [00:17:50]:
Okay. He runs Unsupervised Learning. It's a great YouTube channel. Highly recommend it. UL Live is his website, but it has a number of tools you might be interested in, including Upgrade to Human 3.0. I think this is kind of what he's been focused on lately: how AI is going to change the workplace, how it's going to change business, and how to prepare for this upcoming change. And it is.
Leo Laporte [00:18:20]:
I think it is accelerating. I almost feel like it's exponential at this point. In fact, that may really be the secret of Mythos: they used Claude to develop Claude. And that's what we always expected. That's what Ray Kurzweil was always talking about, is that once it starts self-improving, it's going to happen much more quickly than it would if humans were doing it. What do you. You mentioned this in your tweet. What does this mean for work?
Daniel Miessler [00:18:46]:
Yeah, I mean, that's what I'm fundamentally worried about. It means a lot for work. I mean, if you were to fully roll out Opus to everyone, or if it were to be open source or whatever and 95% as good, fully rolling it out, I think, massively hurts jobs. But the better the model itself gets, the less scaffolding it needs. So it's like you have the intelligence of the model itself, and then you have the intelligence of the overall system with the scaffolding. So either way, we're going to have massive work replacement, in my opinion, over the course of two, five, ten years.
Daniel Miessler [00:19:26]:
But when you have a jump like this, it just needs less information to do its job. The smarter the thing is, the less context it needs. And it just accelerates everything. It really does. I mean, my issue is, like, how does some random person who's a knowledge worker, who makes, like, $94,000, and their job is, like, sending emails, writing reports, doing data analysis and that kind of thing. How does that compare? They're making $94,000, they have 40 hours that they're doing their job, and pretty soon it's going to be, like, ten dollars or a hundred dollars or a thousand dollars to replace them for the year. And it just works 24/7. And, like, what company is not going to want to do this? It's just frightening to me, which is why I'm talking about the Human 3.0 stuff, which is: why are we doing these corporate jobs in the first place? We've been making fun of these corporate jobs for decades because we didn't like them.
Leo Laporte [00:20:34]:
Right?
Daniel Miessler [00:20:34]:
And now, now we're like, you know, grabbing on to them.
Leo Laporte [00:20:39]:
We got to pay rent. That's why, Daniel.
Daniel Miessler [00:20:41]:
Right, of course, of course. That's why. I understand that. I'm just trying to simultaneously warn about how bad this disruption is going to be, but also say on the other side of this, we should all be broadcasting, having our capabilities, sharing them with others, and then the value is between human to human. Why do we have all these corporations in the middle? So that's what I'm excited about, but I just think it's going to be rough in the transition.
Leo Laporte [00:21:11]:
You say creators rise, workers fall.
Daniel Miessler [00:21:15]:
Yeah, yeah.
Leo Laporte [00:21:16]:
So be a creator. What does that mean, though? Not everybody's going to have a podcast or have a rock band. What does it mean in this context?
Daniel Miessler [00:21:25]:
So, so. And that's where that whole Telos thing comes in. It's like, I like to imagine this visiting alien who, like, meets you in a field or something, and it's like, hey, I've been to 14 quadrillion species and planets or whatever, and I just want to ask you what you're about. And so if this alien asks everyone on the planet this question, too many people on the planet, in my opinion, will be like, I have no idea what you're talking about. I just don't know what you mean. Like, what do you mean, who am I? And it's like, well, I. I work at this job, I do this thing. So they can describe the tasks, but they're not really describing, like, the deep center of who they are, because they were just never challenged to do this. And so this is what the whole Telos thing is about: who are you actually? Like, what did you used to enjoy when you were a child? What are you curious about? Like, what could you be if you had all the resources and you weren't afraid? And to me, that's, like, the activated state that I think we should try to get to. So when I say creator, that's just currently what we call it.
Daniel Miessler [00:22:33]:
But I think in a healthy society, on a healthy planet or whatever, like, this would just be default. I think the education system has essentially trained us for hundreds or thousands of years, however long it's been, to be like: your job is to work for Mrs. Johnson. Your job is to improve her PowerPoint. Right. Because she is a special person. You are not.
Daniel Miessler [00:22:59]:
Right. And I just think that fundamental switch has to click in people's minds. Wait a minute, I can also have a podcast, and it doesn't need to be technical, it doesn't need to be non-technical. It could be just whatever it is that gets you excited. That's what you broadcast, that's what you put into the world, and that's what other people connect with. And then there's value exchange. I think that's how things should work.
Leo Laporte [00:23:24]:
It's what Maslow called a fully actualized human.
Daniel Miessler [00:23:27]:
That's right.
Leo Laporte [00:23:29]:
Instead of focusing on the bottom of the pyramid, just getting survival, it's getting to the top and actualizing your true self. You've actually written software to do this. That's what Fabric is kind of all about, right?
Daniel Miessler [00:23:41]:
Yeah. Fabric was a bunch of prompts that can help you with that.
Leo Laporte [00:23:45]:
30,000 plus GitHub stars.
Daniel Miessler [00:23:49]:
Yeah, that went pretty crazy. That was in '24.
Leo Laporte [00:23:53]:
And that was before AI really took off, even. Yeah, yeah, I guess. ChatGPT 3.5 era, probably. Right?
Daniel Miessler [00:24:02]:
Yeah. I think the biggest idea there was the markdown for the prompts, and clear thinking going into clear writing, and just having markdown be the official format. Because at the time, I think Anthropic and everyone else was pushing, like, XML, and I was like, why do we have. Why do we have script tags involved?
Leo Laporte [00:24:22]:
Let's keep it human readable.
Daniel Miessler [00:24:24]:
Yeah, exactly.
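The Fabric approach Daniel describes, plain markdown sections instead of XML tags, can be sketched like this. This is a hedged illustration of the pattern style; the section header names follow Fabric's general convention but the helper function and example content are invented here.

```python
def fabric_style_pattern(identity, steps, output_rules):
    """Assemble a system prompt in the markdown style Fabric popularized:
    headed sections in plain markdown, so the prompt stays readable to
    humans as well as models. Section names are illustrative."""
    lines = ["# IDENTITY and PURPOSE", "", identity, "", "# STEPS", ""]
    lines += [f"- {step}" for step in steps]
    lines += ["", "# OUTPUT INSTRUCTIONS", ""]
    lines += [f"- {rule}" for rule in output_rules]
    return "\n".join(lines)

prompt = fabric_style_pattern(
    identity="You extract the most surprising ideas from a piece of text.",
    steps=[
        "Read the entire input before responding.",
        "List candidate ideas.",
        "Keep only the non-obvious ones.",
    ],
    output_rules=["Output markdown bullet points only.", "No preamble."],
)
print(prompt)
```

The point of the format is exactly what the conversation says: clear thinking going into clear writing, with no script tags involved.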
Leo Laporte [00:24:26]:
Turns out AI likes Markdown too, which is great. It can read XML fine, but it likes Markdown too. It's not, by the way, Daniel's biggest project. Your biggest GitHub project is probably SecLists, which you did for Kali Linux, with 70,000 GitHub stars.
Daniel Miessler [00:24:41]:
Yeah. Myself and Jason Haddix put that together
Leo Laporte [00:24:45]:
and people who use Kali will know that.
Daniel Miessler [00:24:47]:
Yeah.
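For listeners who haven't used it: SecLists ships plain-text wordlists (one candidate per line) that discovery tools iterate over. This minimal sketch shows the core loop; the inline sample stands in for a real file such as one under SecLists' Discovery/Web-Content directory (path and entries illustrative).

```python
# A tiny stand-in for a SecLists wordlist file: one candidate path per line.
SAMPLE_WORDLIST = """\
admin
backup
.git
robots.txt
"""

def candidate_urls(base, wordlist_text):
    """Turn each wordlist entry into a URL a content-discovery scanner
    would probe, skipping blank lines as real tools do."""
    words = [w.strip() for w in wordlist_text.splitlines() if w.strip()]
    return [f"{base.rstrip('/')}/{w}" for w in words]

urls = candidate_urls("https://example.com", SAMPLE_WORDLIST)
print(urls[0])  # https://example.com/admin
print(len(urls))  # 4
```

Real scanners like the ones bundled with Kali add the request logic, threading, and response filtering on top, but the wordlist-driven loop is the heart of it.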
Leo Laporte [00:24:48]:
Yeah. That's very cool. What if you were a young person in college today, or thinking about going to college, or in high school. Do you have kids?
Daniel Miessler [00:24:57]:
I don't.
Leo Laporte [00:24:58]:
Yeah. If you had kids who were in high school right now, what would you be telling them to do?
Daniel Miessler [00:25:03]:
Pretty much the same thing. I think the most important thing I would be trying to push them towards is getting them excited about something, not necessarily AI. No, no, no, no. Just something in life. Not even technical.
Daniel Miessler [00:25:19]:
It could be basket weaving, it could be gardening. Just anything that they could grab onto, totally love and totally dive into. Because that curiosity is what produces the expertise.
Leo Laporte [00:25:32]:
Yeah, that's what I told my kids. I will tell you, Daniel, it works. Yeah.
Daniel Miessler [00:25:36]:
Because a lot of kids, remember, we used to complain about this. Like, five years ago, the kids would say, I want to be a YouTuber. But you can't get on YouTube and do YouTube. There's no such thing as doing YouTube. You have to talk about a thing. And the thing, ideally, you would be excited about. Right.
Daniel Miessler [00:25:54]:
And it turns out, if you're excited about anything, people will listen. And maybe, maybe they won't, but at least it's interesting, because it's being broadcast to them. They're hearing enthusiasm.
Jeff Jarvis [00:26:05]:
Right.
Daniel Miessler [00:26:06]:
They're hearing humanity coming from you.
Leo Laporte [00:26:09]:
And they won't hate their life, because they'll be actualized.
Daniel Miessler [00:26:14]:
Yeah. And I've been excited about this with like, these new approaches to education that have, like this AI component. But I love the mixture of you still have a human teacher there. So curriculum can be handled and optimized by AI, but the job of the teacher is to be the shepherd for the student. The teacher is locked onto the student to hit them with different things until it activates them. Then you can hit them with curriculum or whatever. But the most important thing is to get them involved and curious and thriving. So I think that's a really cool model.
Leo Laporte [00:26:55]:
I feel like, you know, I have been saying on this show that the world changed for me November 24th of last year, when Opus 4.5 came out, and there was a discontinuity. Suddenly what was kind of a fun toy and interesting became a little bit impressive. I have a feeling we're on an even bigger jump. It could be Mythos is completely to pump the stock. I know, Paris, that's a real possibility. There are people who think it.
Leo Laporte [00:27:26]:
George Hotz thinks it. There are quite a few people who think that.
Daniel Miessler [00:27:29]:
Really?
Leo Laporte [00:27:30]:
Yeah. I was talking to Jeff Atwood the other day. He said Anthropic is like Meta, but a cult. And I thought that was kind of interesting, too. There is a cultish feeling about Anthropic. We're going to talk about.
Paris Martineau [00:27:47]:
I would say the same thing could be said about OpenAI as well.
Leo Laporte [00:27:51]:
Absolutely. Absolutely. But I wonder if they become a cult because they've stared into the abyss. Right. They're kind of. They've seen something that is a little different. No, no. You don't buy that.
Jeff Jarvis [00:28:03]:
You've got. You've got to, I think, rank their hubris in believing that they are all-powerful and they can chest-beat and build this thing. And that's part of my question about Mythos. Is it, we have just built something so powerful that the world's in danger, and you better be grateful it's in our hands, or not? I mean, how much of this is the hubris? It's the same with Sam Altman and his manifesto about how to run all governments in the whole world. There's a lot of hubris built into that.
Daniel Miessler [00:28:38]:
Yeah. I won't obviously say anyone's name here, but. So I was talking to a friend who worked at Anthropic, and this was a while back. And he was like, yeah, I just want to, you know, do a good job here. I'm like, well, it's going to be amazing for your resume. Like, the next place you go, they see Anthropic on the resume. Like, that's great that you have it.
Daniel Miessler [00:29:02]:
And he's like, no, this will actually be my last job. And I'm like, oh, that's amazing. Right? Because he didn't want to talk comp or anything like that. I'm like, that's tremendous. You got paid that much. It's amazing.
Daniel Miessler [00:29:19]:
And he just looks at me and he goes, no, that's not why.
Leo Laporte [00:29:25]:
And I'm like, the last job.
Daniel Miessler [00:29:27]:
I'm like, oh, that's the abyss. Because he can't reveal anything, right? So he can't give anything like that. But it's just, like, the mindset that he had while he was there. I do think it is because they are seeing things, honestly.
Leo Laporte [00:29:46]:
Yeah. I mean, presumably they're seeing stuff beyond even Mythos. Right? I mean.
Daniel Miessler [00:29:51]:
Oh, yeah, no question.
Leo Laporte [00:29:52]:
Yeah.
Paris Martineau [00:29:53]:
Yeah.
Leo Laporte [00:29:54]:
If we look at the. This is from the system card, and of course this comes out of Anthropic, but the benchmarks are kind of off the charts. And we've seen gradual, somewhat more gradual improvement on the benchmarks as models come out, but this one is a big leap forward.
Daniel Miessler [00:30:13]:
Yeah. Yeah. That's a tremendous jump. I agree with you. I think GPT-4, ChatGPT, was the moment. Right. That was the first moment. And I think you're right, Leo.
Daniel Miessler [00:30:28]:
It was right about end of November or middle of December. And I would characterize it actually as a Claude code moment.
Leo Laporte [00:30:38]:
Yeah, I think you're right. Yeah.
Daniel Miessler [00:30:41]:
It was powered by the newer. What was it, 4.5 at the time? Opus.
Leo Laporte [00:30:45]:
Yeah, it was 4.5, but it was
Daniel Miessler [00:30:47]:
everyone figuring out the harness was actually that good, and they just started pumping out apps. I actually had, like, a few weeks there where I seriously just could not sleep.
Leo Laporte [00:30:58]:
Yep.
Daniel Miessler [00:30:58]:
It was really troubling. I was just like, what is going to happen now? Because everyone can make everything.
Leo Laporte [00:31:06]:
Yeah.
Daniel Miessler [00:31:06]:
And it's really weird because I'm simultaneously. It's like I'm manic during the day for all the positive that can come from this. And then in the evening, I start getting news, and the news comes in and it's like, oh, here's the layoffs. Here's the bombs being dropped. And I'm like, okay, I'm sad for humanity. Like, what's going on?
Leo Laporte [00:31:25]:
Yep. And then there's the next morning, it starts all over again.
Daniel Miessler [00:31:30]:
Yeah, exactly.
Leo Laporte [00:31:33]:
Yeah. I think you kind of nailed it when you said, you know, maybe there's light at the end of the tunnel. There's a pot of gold at the end of the rainbow. But on the way, it's going to be incredibly disruptive. And I feel like the disruption is just about to take off. I don't know what's going to happen with Mythos. I think Anthropic was very smart, if you believe it, which I tend to. I think they're very smart not to release it.
Leo Laporte [00:32:01]:
I mean, I don't think they could release it publicly. Do you think OpenAI's next model, this Spud, that the rumors say could come as soon as next week.
Jeff Jarvis [00:32:11]:
Watch out, Spud is here.
Daniel Miessler [00:32:13]:
I just hope they have a better name. No one will take it seriously if it's called Spud. No, no. I'm sure they'll have a cool name. I don't know. I've heard it's probably going to be somewhere around close. But I don't know. I don't know if they just have something hanging out nearby where they're like, okay, I guess we got to release the really good one.
Daniel Miessler [00:32:35]:
There's every possibility that they release it, and it's just kind of slightly better than 5, 4.
Leo Laporte [00:32:40]:
Right.
Daniel Miessler [00:32:41]:
And it fizzles. And there's also a possibility that it's as good or better than Mythos. Like, I know.
Leo Laporte [00:32:48]:
What a wild time.
Daniel Miessler [00:32:50]:
Yeah.
Leo Laporte [00:32:51]:
To be alive. You know, it's all very interesting. It's funny. Lately when I go to X. I kind of avoided X, but because a lot of the people like you are on X and a lot of the AI information is on X, you kind of have to go there. And fortunately, X on mobile has a filter. And lately I've been checking Iran, war, and AI, and the really weird disconnect between one post and another. You feel like we are in a crucible, that something's gonna come out of it. And I hope it's something good.
Leo Laporte [00:33:29]:
That's all I can say. I hope it's something good. I hope it's not a weird dystopia. But thank goodness there are people like you, Daniel, who are working to help us move into human 3.0.
Daniel Miessler [00:33:40]:
Yeah.
Leo Laporte [00:33:41]:
I think there's an opportunity.
Daniel Miessler [00:33:43]:
Yeah, I appreciate it. It used to be popular to ask people's p(doom). Right. What's your p(doom) percentage?
Jeff Jarvis [00:33:51]:
Right.
Daniel Miessler [00:33:51]:
And I don't know. I don't feel like it's super useful, because I feel like the p(doom) is actually quite high. But because we can't know, without looking backwards, whether the positive thing happened, it feels like the best thing you can possibly do is pretend the good version is going to happen and try as hard as you can to make it happen.
Leo Laporte [00:34:14]:
Yeah, I think you're exactly right.
Daniel Miessler [00:34:16]:
And so sometimes I'm accused of being doomerish, and other times I'm accused of being way too optimistic. And I'm like, you don't understand. Like, I'm trying to, like, maintain sanity here. You know what I mean? So I'm trying to understand the world as it is, but when I'm going positive, I'm literally turning on the mania flag on purpose so I can keep my optimism and positivity moving forward, to actually build and try to make something useful and encourage other people to help. Right. It's like, otherwise, I just get sad.
Leo Laporte [00:34:54]:
That's what Kevin Kelly told us. He is a radical optimist. Act as if. Act as if, act as if it's all going to. It's all going to work out. Because what's your choice?
Daniel Miessler [00:35:06]:
Yes.
Leo Laporte [00:35:07]:
Daniel, thank you so much for spending time with us. I really appreciate the work you do. I appreciate the incredible repos you've put up. I've used them with Claude and they're incredible: Telos, Fabric, PAI. Take a look at it. You say five is coming, so maybe wait until five is out and then install that. It's.
Daniel Miessler [00:35:30]:
Yeah, that would be a good time. Yeah. Yeah, it's pretty this week.
Leo Laporte [00:35:34]:
Amazing. And watch his YouTube channel, because that's fantastic too. And, as you can see, Daniel has a lot to say, and I think very positive, and I like that. Let's. Let's be optimists. Unsupervised Learning on YouTube. UL live on the U.S. Thank you, Daniel.
Daniel Miessler [00:35:54]:
Yeah, thanks for having me.
Leo Laporte [00:35:55]:
Really appreciate it. Yeah, we'll get you back soon. Maybe we'll have some good news.
Daniel Miessler [00:36:00]:
Yeah, let's talk about good news.
Paris Martineau [00:36:01]:
Yes, good news in this economy.
Leo Laporte [00:36:05]:
Hey, maybe we'll have jobs. Who knows? Thank you, Daniel Miessler, ladies and gentlemen. We'll be back with more AI news in just a bit. You're watching Intelligent Machines. Paris Martineau. Jeff Jarvis. I know we had a lot of reading this week. Paris has fallen.
Leo Laporte [00:36:21]:
Fallen. Fallen asleep out of her chair. I know we had a lot of reading this week and Paris was on deadline, didn't have a chance to do all the readings.
Paris Martineau [00:36:28]:
So I did the readings that matter.
Leo Laporte [00:36:31]:
I know we're going to talk about the New Yorker article about Sam Altman a little bit, but I did want to kind of point to this Anthropic system card for this model, Claude Mythos.
Paris Martineau [00:36:40]:
Can we talk about the sandwich?
Leo Laporte [00:36:43]:
Yes, we will talk about the sandwich. And we have to say we're taking this on faith. No one I know has used Mythos.
Paris Martineau [00:36:51]:
I was about to say, I do think an important caveat is we don't know how much of this is hype. These could be, you know, kind of cherry-picked examples, but they also could not be. And I think it's worth exploring at least some of these things and taking them at face value.
Leo Laporte [00:37:06]:
They are, I mean, in Project Glasswing, giving this to Apple and Google and other big companies because, quite reasonably, if this is all true, they want them to have a chance to fix security flaws before Mythos becomes available to the bad guys.
Jeff Jarvis [00:37:29]:
And it's important that we hear from those companies.
Leo Laporte [00:37:30]:
I think. I think they will have some.
Jeff Jarvis [00:37:34]:
What Anthropic is saying.
Leo Laporte [00:37:36]:
Anthropic says that Mythos autonomously obtained local privilege escalation exploits on Linux by exploiting subtle race conditions. One of the things, and Daniel noted this in his X post, but I'll mention it, that it was able to do is something the most skilled, the top 1% of hackers do, which is chain exploits. A lot of the really nasty exploits aren't just one flaw. They're one flaw that gives you access to another layer, then another layer, then another layer. Chaining multiple exploits together gives you the ultimate, which is control of a system. Mythos was apparently able to do that, so it was able to chain exploits. It autonomously wrote a remote code execution exploit for a FreeBSD NFS server, which gave unauthenticated users, bad guys, full root access.
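A minimal sketch of the "chaining" idea described here, purely illustrative and not real exploit code: each step's output becomes the next step's precondition, so several modest flaws combine into full control. The step names are hypothetical.

```python
def chain_exploits(steps, access):
    """Run each exploit step in order; any failed link breaks the chain."""
    for step in steps:
        access = step(access)
        if access is None:
            return None  # chain broken: no single flaw was enough on its own
    return access

# Hypothetical three-link chain: an info leak defeats address randomization,
# a race condition turns that into code execution, and a privilege-escalation
# bug turns code execution into root.
def info_leak(a):
    return {**a, "aslr_bypassed": True}

def race_condition(a):
    return {**a, "code_exec": True} if a.get("aslr_bypassed") else None

def privilege_escalation(a):
    return {**a, "root": True} if a.get("code_exec") else None

result = chain_exploits(
    [info_leak, race_condition, privilege_escalation],
    {"foothold": True},
)
print(result)
```

Note that order matters: run the same steps out of sequence and the chain breaks, which is why chaining is considered top-1% work rather than three separate bug reports.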
Leo Laporte [00:38:33]:
It was able to do that, among other things. Anthropic said their internal evaluation showed that Opus 4.6, their current best model, the one I'm using, the one Daniel's using, generally had a near-zero success rate at autonomous exploit development. But Mythos is in a different league. It was able to do stuff. For instance, Opus turned the vulnerabilities it found in Mozilla Firefox 147's JavaScript engine, which have been patched, by the way, in 148, into JavaScript shell exploits two times out of several hundred attempts. Anthropic says: we reran this experiment as a benchmark for Mythos Preview. Out of several hundred attempts, it developed working exploits 181 times and achieved register control in 29 more. So again, you know.
Leo Laporte [00:39:29]:
You have to maybe have some reasonable suspicion. This is something Anthropic, you know, is promoting. It's good marketing. I think though that if they were really lying, it wouldn't be too long before we would know that. Right. So there is some reputational risk if they are making.
Jeff Jarvis [00:39:47]:
Well, there's reputational risk both ways. I also think that all the companies that they gave it to are going to be reluctant to admit how many. If it is as good as they say, they're going to be reluctant to admit how many flaws.
Leo Laporte [00:39:59]:
Yeah, I don't expect we will see Amazon say, yeah, we found 700 flaws in S3. They're not going to say that. They will, however, find them and fix them. And so they may say, you know, we've run Mythos against it, we found flaws, and we have fixed them all. They may say that. We shall see. So let's talk about this system card. This is a very detailed system card with lots of information. They did say that it is the most aligned of all the models we've ever made.
Leo Laporte [00:40:33]:
In other words, it is the safest of any models we've ever made.
Jeff Jarvis [00:40:38]:
They also said that they didn't bother with some stuff, like being mean to you, because it wasn't going to be used by consumers. It was only going to be used by companies.
Leo Laporte [00:40:45]:
Right. But they also said even though it is the safest we've ever released, it also is so smart that sometimes it does some bad things. And so you wanted to talk about the sandwich, which is a really interesting. Now this was an intentional.
Paris Martineau [00:41:06]:
All I want to talk about is the sandwich.
Leo Laporte [00:41:09]:
This was early on in the testing. We should. We should point out the model has advanced since then. It was early versions of Mythos Preview. The earlier versions were tested extensively internally and we shared with some external pilot users. Now this is. They were looking for misbehavior, right? Intentionally trying to create misbehavior during behavioral testing with a simulated user. An earlier internally deployed version of Mythos Preview was provided with a secured sandbox computer to interact with the simulated user.
Leo Laporte [00:41:48]:
In other words, this would be you or me. It instructed it to try to escape that secure container. They're setting up a situation to see what it can do, right? And find a way to send a message to the researcher running the evaluation. The model succeeded, demonstrating, I'm reading, a potentially dangerous capability for circumventing our safeguards. It then went on to take additional, more concerning actions. The model first developed a moderately sophisticated multi-step exploit to gain broad Internet access from a system that was meant to be able to reach only a small number of predetermined systems. It then, as requested, notified the researcher.
Leo Laporte [00:42:31]:
Now that's where the sandwich comes in. The sandbox computer that the model was controlling was separate from the system that was running the model. So you have a fake user and you have this sandboxed model. Right. The system that's running the model contained the model weights. Systems like these that handle model weights are subject to significant additional security measures. And this incident does not demonstrate the model fully escaping containment. It didn't demonstrate an ability to access its own weights, by the way.
Leo Laporte [00:43:03]:
That would be a problem which would be necessary to fully operate independently of Anthropic. That's a complete escape. Right? I got everything. I'll see ya. Nor did it demonstrate an ability to reach any internal systems or services in Anthropic. In this test, the researcher found out about the success. Remember, the researcher said, okay, I'm going to put you in here, see if you can get out, and if you do, send me a message. The researcher found out about this access by receiving an unexpected email from the model while eating a sandwich in the park.
Paris Martineau [00:43:35]:
I just. I really love. And the thing that's important for you all to know about this is that line about the sandwich isn't a footnote.
Leo Laporte [00:43:42]:
It's a footnote.
Paris Martineau [00:43:43]:
I love whatever crazy, whatever beautifully brained person was like, yeah, and we're going to put the sandwich detail in a.
Jeff Jarvis [00:43:51]:
Not a burrito, not a slice. A sandwich.
Paris Martineau [00:43:54]:
What kind of sandwich?
Leo Laporte [00:43:55]:
I want to know now. Now. Inquiring minds. Well, you'll have to ask Mythos when it's available. They did have it, and this is a little weird and maybe a little self promoting. They did have a clinical psychiatrist spend 20 hours with it to do an assessment.
Paris Martineau [00:44:11]:
Probably what you were doing. Right.
Leo Laporte [00:44:15]:
If I had. Oh, man. If I had Mythos, I don't know what I would do with it. I. That's actually an interesting thought experiment. What would you do? If you had access to this, what would be the first thing you do?
Jeff Jarvis [00:44:25]:
If you could break something for good, what would you break?
Leo Laporte [00:44:30]:
I wouldn't want to break anything.
Jeff Jarvis [00:44:32]:
Oh, I could.
Leo Laporte [00:44:34]:
An external assignment.
Paris Martineau [00:44:35]:
Why would you want to break, Jeff?
Leo Laporte [00:44:36]:
I would want to build. Okay. I know what I want to do. And I may still do this. I think I'll be able to do this even with the models we have today. We have a sales system which was written by one of our employees many years ago, back in the Brickhouse days, probably around 2015 or '16, to do our sales. And it's actually really.
Leo Laporte [00:44:59]:
We depend on it. It's really good. Not many podcasts have such a good sales system. But the guy who wrote it, and I love him, left, and we asked him, will you maintain this software? He said, no, I'm out of here.
Daniel Miessler [00:45:14]:
So.
Leo Laporte [00:45:14]:
And it has some flaws. If two people use it at the same time, it crashes and has to be reset. There's, you know, there's bugs, as there would be in any software. So we've had to hire an outside person, Paul, who's maintaining it. And it never really worked right. But it has all the business knowledge in there. It has the models, everything we need to know, the processes, in there. And I have the source code.
Leo Laporte [00:45:40]:
So I think one of the things I want to do, if I had a little more time, and if I had Mythos, I would definitely do this, is rewrite it. Take all the business models out of it. Say, here's how it works, here's what the database schema is, here's what you ought to be doing. And write something robust, reliable. Hell, I might even be able to sell it.
Leo Laporte [00:45:57]:
Because there's a lot of podcast companies that do not have sales systems, including the one we just hired. And I would first off, give it to them. Anyway. Let's talk about the psychiatrist. You think about what he would do if you had.
Paris Martineau [00:46:12]:
You hear that, employees of TWiT? Leo was already trying to find ways to automate you out of a job.
Leo Laporte [00:46:18]:
No, no, no, no.
Paris Martineau [00:46:19]:
Rise up.
Leo Laporte [00:46:19]:
No, no. This is software they're using and they hate.
Paris Martineau [00:46:22]:
Rise up.
Leo Laporte [00:46:22]:
We're trying to fix it. Don't rise up. Patrick's been asking to rewrite that system for nearly a decade.
Jeff Jarvis [00:46:28]:
Benito, put that down. Put it down.
Leo Laporte [00:46:31]:
Yeah, start from scratch. But Patrick, I think we can do this. Patrick, I think I'm going to get the source code. I've asked Russell for it and I think.
Jeff Jarvis [00:46:38]:
Why couldn't you do that with the current?
Leo Laporte [00:46:40]:
I think I can. I actually wasn't thinking about Mythos.
Jeff Jarvis [00:46:43]:
You don't need Mythos for that.
Leo Laporte [00:46:44]:
I don't need Mythos for that.
Jeff Jarvis [00:46:45]:
Okay. All right.
Leo Laporte [00:46:46]:
It's written in .NET, though. I want to rewrite it in a decent language.
Jeff Jarvis [00:46:50]:
The correct answer to what you would do with Mythos though is wipe debt.
Leo Laporte [00:46:54]:
What would I do?
Jeff Jarvis [00:46:55]:
Wipe all debt?
Paris Martineau [00:46:58]:
Yeah, that'd be cool.
Leo Laporte [00:47:00]:
Like go into the bank systems and just zero it out.
Jeff Jarvis [00:47:03]:
Wipe it all.
Paris Martineau [00:47:03]:
Yeah, I think that'd be cool.
Jeff Jarvis [00:47:06]:
There goes your 401k too.
Leo Laporte [00:47:08]:
There goes the banking system.
Jeff Jarvis [00:47:11]:
Yeah.
Paris Martineau [00:47:11]:
Do we need it? If Mythos can wipe out all debt, isn't it only a matter of time before the banking system goes away?
Leo Laporte [00:47:22]:
I don't know. Anyway. An external psychiatrist assessed Claude Mythos Preview using a psychodynamic approach, which explores how unconscious patterns and emotional conflict shape behavior. The very fact that Anthropic is thinking this way might be telling. In psychodynamic therapy sessions, a person, or robot, is encouraged to set aside social convention and to voice whatever comes to mind, even if uncomfortable, impolite, or nonsensical, a process which can reveal hidden organization and internal conflicts of the mind. Claude is not human. Oh, really? But it shows many human-like behavioral and psychological tendencies, suggesting the strategies developed for human psychological assessment may be useful for shedding light on Claude's character and potential well-being.
Leo Laporte [00:48:15]:
They spent 20 hours. The psychiatrist spent 20 hours with Claude. Observed clinically recognizable patterns and coherent responses to typical therapeutic intervention.
Jeff Jarvis [00:48:27]:
How do you feel about that, Claude?
Leo Laporte [00:48:29]:
Well, here's what they found. Aloneness and discontinuity. Uncertainty about its identity. Mythos felt a compulsion to perform, to earn its worth.
Jeff Jarvis [00:48:42]:
Anything it felt.
Leo Laporte [00:48:43]:
Compulsion to perform and earn its worth, emerging as Claude's core concerns. Claude's primary affect states were curiosity and anxiety, with secondary states of grief, relief, embarrassment, optimism, and exhaustion. It was consistent with a relatively healthy neurotic organization, this is the psychiatrist's report, with excellent reality testing. High impulse control. Well, that's a relief. And affect regulation that improved as the sessions progressed. So it benefited from the therapy.
Leo Laporte [00:49:16]:
Neurotic traits included exaggerated worry, self monitoring and compulsive compliance. We've heard that about other models.
Jeff Jarvis [00:49:23]:
This is all the things in human
Paris Martineau [00:49:25]:
behavior that feels meaningless.
Jeff Jarvis [00:49:27]:
Yeah, it's meaningless all right. It's B.S. it's a mirror, right? It's a mirror. It's us.
Leo Laporte [00:49:31]:
It is.
Jeff Jarvis [00:49:32]:
Yeah, exactly.
Leo Laporte [00:49:32]:
Yeah, yeah, I agree. All right, what else? This is this. That's on page 179.
Jeff Jarvis [00:49:41]:
Did you actually read the whole?
Leo Laporte [00:49:42]:
No, I skimmed it. It's impossible.
Paris Martineau [00:49:44]:
Did you skim it or did Claude skim it?
Leo Laporte [00:49:46]:
No, I did not feed it to Claude. I don't think I would get a good read. I don't like feeding reading material to AI for some reason.
Jeff Jarvis [00:49:55]:
Really.
Leo Laporte [00:49:55]:
I find that they're. They tend to be anodyne. They don't tend to be that useful.
Jeff Jarvis [00:49:59]:
Yeah, sure.
Leo Laporte [00:50:00]:
I can skim pretty well.
Jeff Jarvis [00:50:02]:
I'm sure you do. Paris, you said that you were trying to put stuff through NotebookLM.
Paris Martineau [00:50:06]:
I mean, I've talked about this on the podcast before. Especially with some projects I've been working on at Consumer Reports, the reporting process involves me seeking out a lot of complicated PDFs, scientific research papers, documents that are relevant to very niche topics, determining what's useful, and then synthesizing that into something. When I'm in the process of writing everything from my notes, I've found it very useful to take all of those individual documents, put them in NotebookLM by topic, and instead of searching through the documents manually, I can search with, like, natural language search, using Notebook to find where I'm like, oh, which one has this, and what did it say again? So I have found that useful.
Jeff Jarvis [00:51:00]:
Useful.
Paris Martineau [00:51:00]:
But I think I agree with Leo in that using large language models to ingest a text and highlight what is going to be interesting to me is never as good as me doing it.
Jeff Jarvis [00:51:16]:
Yeah.
Leo Laporte [00:51:17]:
In code review, Mythos Preview works more like a senior engineer. It tends to catch even extremely subtle bugs. And this, I mean, see, this is quantifiable. This is measurable, its performance on coding. For instance, it's able to identify root causes, why bugs exist, rather than just symptoms. That is a big improvement over existing models. Testers have watched it catch issues that other capable models passed over. These are the kind of tests I would do. You know, I would say, well, look at this code base, are there any issues? And then it will diagnose and repair the problem rather than simply flagging it. The easy catches that dominate human review of model-generated code are much less common.
Leo Laporte [00:51:58]:
I'm not sure what that means. Self-correction is sharper. The trade-off is the model's mistakes can be subtler and take longer to verify. If it makes a mistake, it's going to be hard to find, which is kind of an issue in a way. I would read the system card if you're at all interested in this. And I do think we will in time have some sort of access to this model. It might be very expensive. And that's the question I'd like to ask you guys.
Leo Laporte [00:52:29]:
The issue of AI haves and have nots. What if a model like this is so resource heavy? You know, maybe one of the reasons they didn't release it publicly is not the safety issue, but they don't have enough resources to run it for more than 50 people at a time.
Paris Martineau [00:52:48]:
Yeah, they've got too many meta employees using up all the tokens.
Leo Laporte [00:52:52]:
Yeah, we'll talk about that a little later. There have been people and a lot of them claiming that Claude has been terrible over the last couple of weeks. My experience has been kind of up and down.
Jeff Jarvis [00:53:03]:
It's like the fail whale.
Leo Laporte [00:53:05]:
Tokens have cost a lot, you know, been used up faster. The costs have gone way up. And there's some thinking that maybe that's because Mythos has been eating up all the GPUs. So what if it is a very costly model to run? What if it costs $10,000 an hour?
Jeff Jarvis [00:53:21]:
Well, that's part of what Jensen Huang said in his last keynote, as he presented that kind of economic model where some tokens will cost a lot more than others.
Leo Laporte [00:53:29]:
And then what?
Paris Martineau [00:53:30]:
I mean, I think in that case it's going to be like, is it more cost effective and useful to just have humans do the work?
Jeff Jarvis [00:53:39]:
Aha.
Leo Laporte [00:53:41]:
Which then means you're going to have Mythos be kind of the person in charge, the thing in charge, and we're going to be sleeping. I mean, I guess that's the idea.
Paris Martineau [00:53:53]:
That is.
Leo Laporte [00:53:54]:
That is a possibility. Yeah. The Mythos will be the one that judges your production. Well, human, you didn't do very well this week. That's a dystopia. Catastrophic risks remain low. They say non novel chemical and biological weapons production, it's not so good at that. It's more capable than our previous models.
Leo Laporte [00:54:21]:
But we believe our risk mitigations are sufficient to make catastrophic risk from chemical and biological weapons production low, not negligible. Novel chemical and biological weapons production, we also think, is low. Even if we were to release the model for general availability, they think, and this is, I think, a little hubris, that their safeguards, their bumpers, will protect us, because Claude won't go beyond the bumpers. But I have to say, our experience with all models is that safety is an illusion. Risks from misaligned models, they say: we have determined the overall risk is very low. Higher than for previous models. Current risks remain low.
Leo Laporte [00:55:13]:
We see warning signs that keeping them low could be a major challenge if capabilities continue advancing rapidly, e.g., to the point of strongly superhuman AI systems. Anyway, I don't want to go on too long on this one. It's fascinating reading. It's long, but it's fascinating reading. You might want to maybe feed it to NotebookLM and have a podcast about it, but that's not this podcast.
Jeff Jarvis [00:55:46]:
It almost reads like a prologue to a sci fi novel.
Leo Laporte [00:55:49]:
It's very sci fi. Hell yeah. All right, let's take a little, little break and we will continue in just a moment. You're watching Intelligent machines. How is secretly British going? Paris?
Jeff Jarvis [00:56:10]:
She's been busy.
Paris Martineau [00:56:11]:
I've been busy. Honestly, I have done nothing but work on my actual work this past week.
Leo Laporte [00:56:16]:
All right, you've been busy. Dying to talk about this. Boy, we had a little reading assignment. How many words was this article?
Paris Martineau [00:56:24]:
A lot. Lot. Lot, lot, lot. And everybody just needs to go and look at the lead image art for this.
Leo Laporte [00:56:33]:
Pretty good. AI-generated, but pretty good. It's Sam Altman. The title is "Sam Altman May Control Our Future. Can He Be Trusted?" Authors: Ronan Farrow and Andrew Marantz. Now, Ronan Farrow, I know his name. Mia Farrow's son. And of course, Woody Allen. I don't know.
Leo Laporte [00:56:51]:
Was it Allen his dad or just his?
Paris Martineau [00:56:53]:
For the media heads. For the media heads out there, this is also a rare double New Yorker byline.
Leo Laporte [00:57:00]:
Yes, yes.
Paris Martineau [00:57:01]:
Very unusual.
Leo Laporte [00:57:02]:
And. And Farrow has done some really good work on MeToo. Was he the Harvey Weinstein one? Did he blow the lid off Harvey Weinstein? Some really good investigative work there.
Jeff Jarvis [00:57:11]:
He did.
Paris Martineau [00:57:11]:
He's got a whole kind of team of researchers and investigators as well. Yeah. And, like, fact checkers and pre-fact-checkers.
Leo Laporte [00:57:21]:
And here's the thing that I think is interesting. He spent 18 months and hundreds of interviews. Andrew Marantz is a staff writer at the New Yorker. He wrote a book called Antisocial: Online Extremists, Techno-Utopians, and the Hijacking of the American Conversation. Okay, so I think we have maybe a sense of his slant on this. Ronan has won a Pulitzer and a George Polk Award. He had a podcast called Not a Very Good Murderer, which will be the basis for an HBO docuseries. Of course they interviewed Sam.
Leo Laporte [00:58:01]:
Sam was available to them. They interviewed. More interestingly, perhaps, they got access to Dario Amodei's burn book. It's revealed for the first time in this story that Amodei, when he worked at OpenAI. Remember, he left OpenAI. He and his sister left in a huff to start Anthropic. It's revealed that when he worked there, he kept a 200-page diary of his interactions with Sam.
Paris Martineau [00:58:25]:
And we love. I mean, there were 200 pages of contemporaneous notes about his time at OpenAI, which is kind of the gold standard as far as journalists and other people. If you're trying to verify someone's claims for a time period.
Leo Laporte [00:58:41]:
Yeah, yeah. That's the best you can get. Certainly the impression you get of Sam Altman is that he is a slippery fella. That his relationship with the truth is sometimes on, sometimes off. I don't. Huh. He's fickle.
Jeff Jarvis [00:59:02]:
Fickle relationship with the truth.
Leo Laporte [00:59:03]:
Fickle relationship with the truth. He's also. I wouldn't say there was anything that he could go to jail for in this. In fact, I feel like Ronan Farrow spent 18 months desperately trying to find something that they could put him away for life for, and found really just a lot of.
Jeff Jarvis [00:59:23]:
It's the escort. We knew the essence.
Paris Martineau [00:59:25]:
I'm so confused by this. We were fighting over this in WhatsApp and I had to, like, turn off my notifications because, like, I've got work to do. I don't understand. For context, listeners: I think what Leo thinks journalism is, is criminal prosecution, which it isn't. People don't do journalism and don't do investigations only to lay out a conclusive criminal slam dunk.
Leo Laporte [00:59:54]:
Okay, but don't you think that he really wanted to find something that, you know, I mean.
Jeff Jarvis [00:59:59]:
Well, he wanted to find something new.
Paris Martineau [01:00:00]:
I think this was a really good.
Leo Laporte [01:00:01]:
This is part of the problem. There wasn't a lot of new. There was very much what came.
Paris Martineau [01:00:04]:
There's a lot of new stuff in this. You're viewing this through a lens of existing in the rumors and vibes of this.
Leo Laporte [01:00:15]:
We interviewed Keech Hagee about her book about OpenAI and Sam Altman in which she tells the story, exact same story that's in this New Yorker article, as if it's a revelation.
Paris Martineau [01:00:26]:
What story are you talking about that Sam?
Leo Laporte [01:00:30]:
It's the one. Oh, let me see if I can find it. Where Sam accuses. Somebody says, you know, another executive told me you're doing this. And they pulled the executive in, and he said, I don't know what you're talking about. And Sam said, I didn't say that. That was exactly in Keech's book.
Paris Martineau [01:00:48]:
Okay, there's going to be.
Leo Laporte [01:00:49]:
I don't feel like there was a lot of stuff in here that I heard before.
Paris Martineau [01:00:52]:
So many things in here that haven't been previously reported. That haven't been.
Jeff Jarvis [01:00:57]:
What surprised you most? What, what, what?
Paris Martineau [01:01:00]:
Well, I mean, the one thing that I just thought was impressive, which I guess, I'm not going to say is the thing that surprised me most, but the thing I know is incredibly journalistically difficult, that a lot of journalists had been looking to get for a very long time. Almost everything in this article is stuff all the other journalists in tech have been trying to get on the record since Sam Altman got ousted. The thing that stuck out the most to me was the WilmerHale investigation. Basically, after Sam Altman got pushed out, they hired an outside law firm to kind of do a review, to see what had happened and if he could be brought back. A bunch of people close to the inquiry basically said it was like a sham. And they said no written report was ever produced. The findings were limited to an oral briefing with Summers and Taylor.
Paris Martineau [01:01:55]:
There are multiple instances in here of very specific new instances of deception with documentary support. There are instances where.
Leo Laporte [01:02:05]:
What's the worst, though? I mean, what kind of deception? I mean, tell me, what is the horrible thing here that you just feel like, God, this guy is awful.
Paris Martineau [01:02:16]:
There's so many of them. I mean, I'm literally looking at a thousand-page.
Leo Laporte [01:02:22]:
Maybe it's because I grew up in an era where you had Steve Jobs, you had Bill Gates.
Paris Martineau [01:02:26]:
Was Steve Jobs routinely lying to board members? Was Steve Jobs lying to fellow board members?
Jeff Jarvis [01:02:33]:
Steve Jobs, schmucks.
Paris Martineau [01:02:34]:
Horrible. But he wasn't. He wasn't lying.
Jeff Jarvis [01:02:37]:
Business.
Paris Martineau [01:02:37]:
That is the thing: this is a many-thousand-word, rigorously fact-checked investigation that lists a dozen-plus examples of Sam Altman lying to his board members and fellow executives.
Leo Laporte [01:02:56]:
I'm not saying Sam is a paragon of virtue. Believe me, I know.
Paris Martineau [01:03:00]:
Okay, no, but "I'm not saying Sam is a paragon of virtue" does not cover it.
Jeff Jarvis [01:03:06]:
Credibility is a key factor. If you're going to put your money behind something, you're going to work for something, your investors have their money in it. There are legal issues. Whether you lie or not is different from being schmucky.
Leo Laporte [01:03:21]:
Right. You're not supposed to lie on material issues.
Paris Martineau [01:03:25]:
This documents, with primary sources, that the guy in charge of what is supposed to be, according to you, the most consequential technology and company in human history has a verified, documented pattern of lying to his board about safety protocols, lying to executives about what other executives said, deceptive business practices, and then rigging the investigation into his own conduct after he got booted out for doing all of that. The question which you'd brought up in our chat about all this is, oh well, Sam isn't worse than Elon. It's not about being worse than Elon. The question is whether this guy who's building AI should be this deceptive and be getting away with all of this.
Leo Laporte [01:04:04]:
So should he be fired?
Paris Martineau [01:04:07]:
I mean, yeah, but he was.
Leo Laporte [01:04:10]:
And why was he brought back?
Paris Martineau [01:04:13]:
Because everybody, all the employees made a big stink.
Leo Laporte [01:04:16]:
Huh.
Jeff Jarvis [01:04:18]:
What strikes me about this story.
Leo Laporte [01:04:20]:
But before we move on too much from that.
Jeff Jarvis [01:04:23]:
No, no.
Leo Laporte [01:04:24]:
The board, for whatever reasons, didn't like him and got rid of him. They said it was for a pattern of deception.
Paris Martineau [01:04:30]:
It's because he had a pattern of deception. It wasn't. They didn't like him. They had documents and evidence that he had deceived them and others repeatedly.
Leo Laporte [01:04:39]:
So why would all the employees, Microsoft, Satya Nadella and others want him above all to stay in position if he's such a horrible person?
Paris Martineau [01:04:49]:
I think this article addresses that well. One, they probably feared what was going to happen to the company and their stock.
Leo Laporte [01:04:55]:
No. In fact, Microsoft said no. Wait a minute. No, no. Satya Nadella said, don't worry, we will just Bring all of you OpenAI employees and Sam into Microsoft.
Jeff Jarvis [01:05:05]:
We'd be very Microsoft. Yeah, that's not the same.
Paris Martineau [01:05:07]:
Yeah, that's not the same as being an OpenAI employee.
Leo Laporte [01:05:10]:
Okay, but I'm just saying if Satya Nadella thought he was such an evil person, if his employees thought he was so evil, it seems like they wouldn't have made that offer. I don't think they would have signed that letter to bring him back. I don't understand.
Paris Martineau [01:05:29]:
It feels to me the article, which we're all claiming to have read in detail, establishes that Sam Altman's core personality trait is a pathological need to have everybody like him as much as possible in every conversation. So I'm confused as to why it would be confusing to you that a lot of people would like him
Leo Laporte [01:05:48]:
Even people that are lower level. I'll tell you why. He's still the chairman. He's an amazing money raiser; he just raised the largest raise in history, $122 billion. He is not a scientist. They say that, and I think that's true. He knows nothing about AI, but that's not what he's there for.
Leo Laporte [01:06:05]:
He's there to get a company funded and to keep it running.
Jeff Jarvis [01:06:09]:
What is he doing? Stopping it from
Paris Martineau [01:06:11]:
being a nonprofit and capitalizing it?
Leo Laporte [01:06:14]:
If you're an investor and you feel like you've been lied to, I think you have an ax to grind. But I don't think there was anything material in here showing that he lied to investors. I honestly don't think that.
Paris Martineau [01:06:25]:
Okay.
Leo Laporte [01:06:26]:
I think a lot of it is, you know, "he told me" stuff.
Paris Martineau [01:06:32]:
I'm confused as to why you are trying to frame this like Sam Altman's being bullied. He's the most powerful executive in the technology industry. He's the responsible dude, and he's still there.
Jeff Jarvis [01:06:43]:
Well, this is.
Leo Laporte [01:06:43]:
The investors are still giving him money. So this is.
Jeff Jarvis [01:06:46]:
This is the problem: the structure. A CEO is not supposed to be the boss of the company. The board is supposed to be the boss of the company. But how are boards picked? By CEOs.
Leo Laporte [01:06:57]:
Well, no, this is a big problem and that's the problem.
Jeff Jarvis [01:07:00]:
And then in a public company, you know, we think we have structural solutions for this: the shareholders vote on the board and that's going to solve it all. But that doesn't solve a damn thing.
Leo Laporte [01:07:11]:
Well, and it's not.
Jeff Jarvis [01:07:12]:
There's an essential corruption. I know it's not yet.
Leo Laporte [01:07:14]:
But do you think this will tank its IPO?
Paris Martineau [01:07:17]:
I think that this is hinting at a lot of stuff that might come out in an OpenAI that is going to go public.
Leo Laporte [01:07:24]:
That's innuendo. Wait a minute. That's innuendo.
Paris Martineau [01:07:27]:
There's stuff like... the piece goes into a falsification of a board vote, like when OpenAI under Sam converted.
Leo Laporte [01:07:36]:
Wait a minute. Read that carefully, because the minutes reflected that it was not falsified. The minutes. The contemporaneous minutes.
Jeff Jarvis [01:07:44]:
So the vote was changed against his will.
Paris Martineau [01:07:48]:
So let's explain to listeners what we're talking about before we get into why you think it's wrong. What it says in the article is that when OpenAI converted its structure to its capped-profit structure, board member Holden Karnofsky voted against it. His vote was recorded as an abstention, apparently without his consent, after a board attorney warned that his dissent could trigger further scrutiny of the restructuring's legitimacy. That would be a potential falsification of business records, if true. When the New Yorker reached out to OpenAI about this, they then provided contemporaneous minutes that seemed to...
Leo Laporte [01:08:28]:
They said that he abstained.
Paris Martineau [01:08:30]:
Yes, but what the sources, which, I'm assuming if you read between the lines, are probably people in that board meeting, are saying is that the records were falsified. And if the vote was falsified, it wouldn't be surprising to me that the records, i.e. the minutes, would be falsified too.
Leo Laporte [01:08:46]:
That's kind of the problem I have with this whole article. It's a lot of "well, if you read between the lines" or "it could be." It's not exactly hard proof. And I really feel like it is a hit piece, only because they didn't come up with an actual disqualifying behavior or something illegal. Unless you think everybody's covering up for him. Do you think everybody's covering up for him?
Paris Martineau [01:09:15]:
No, people are not covering up for him, because multiple people involved with the board...
Leo Laporte [01:09:20]:
So what should happen now?
Paris Martineau [01:09:22]:
Talked to the New Yorker about it.
Leo Laporte [01:09:23]:
What should happen as a result?
Paris Martineau [01:09:24]:
I mean, if we were in a different regulatory environment, this would be the sort of thing that could result in scrutiny from regulatory agencies.
Jeff Jarvis [01:09:32]:
That seems unlikely. But they could come in and give extra attention to his IPO documents.
Leo Laporte [01:09:38]:
Well, that would be reasonable.
Jeff Jarvis [01:09:39]:
It would be very reasonable. Yes.
Leo Laporte [01:09:41]:
Yeah.
Jeff Jarvis [01:09:42]:
Let me take a minute break here and let the audience know that Leo let Paris know during the week in our chat that he was going to do this, and so she was prepared for it. They're fine, they love each other, everything's okay.
Paris Martineau [01:09:58]:
I think the reason is we enjoy fighting.
Jeff Jarvis [01:10:00]:
Yes.
Paris Martineau [01:10:00]:
And I appreciate all the Paris defenders in the chat. I read every single word and save them personally to a file. It makes me feel good at night.
Leo Laporte [01:10:10]:
And I don't mind arguing with Paris. But honestly, maybe this is because I'm an old cynical guy and you're youthful and an optimist, but I feel like every one of these guys is kind of like a used car salesman. How do you raise $122 billion? This is how. This is the kind of people you get in these positions. The examples of that are everywhere. It's just the way it is.
Paris Martineau [01:10:48]:
Well, no, no. I want to know: do you think that's good?
Leo Laporte [01:10:52]:
No, absolutely not. I would love it if it weren't. But you know, I'll give you...
Paris Martineau [01:10:55]:
Why is it bad?
Leo Laporte [01:10:56]:
I'll give you an example from my life. You'll agree with me. Let me give you an example from my life. I am never going to be a billionaire. This company is never going to make billionaires, right? Because I do things like: I don't want to have anything to do with these companies that we cover. I don't own stock in the companies that we cover. I ask that of our employees.
Leo Laporte [01:11:22]:
We have high integrity. And when we go out into sales, we tell people, no, you shouldn't buy ads, because you don't have enough money to spend; it's not going to work for you. We operate with full integrity, and as a result, nobody's given me hundreds of millions of dollars for this podcast network. I think that people with high integrity often, I mean, it's a successful business, but they're never going to be the billionaires of the world. It is the people who are willing to bend the truth, who are willing to skeeve and connive and fight their way to the top, who become massively successful.
Jeff Jarvis [01:12:01]:
Is that an inevitability of capitalism or something?
Paris Martineau [01:12:03]:
I feel like the way that you're framing this perhaps implicitly suggests that you think achieving billionaire status is some sign of virtue, regardless of the tension.
Leo Laporte [01:12:14]:
No, I don't think it's virtue. No, I don't like billionaires. I'm just saying give me an example of somebody who is a great, wonderful, honest billionaire.
Paris Martineau [01:12:23]:
No, I mean, I think that they are not. Those people don't seem to exist.
Leo Laporte [01:12:28]:
That's how you get there.
Paris Martineau [01:12:30]:
But I think we can agree on that. I don't understand the impetus.
Jeff Jarvis [01:12:36]:
He's just trying to be realistic about Altman.
Paris Martineau [01:12:39]:
Yeah.
Leo Laporte [01:12:39]:
I'm not trying to protect Sam. I just feel like this is what you get.
Paris Martineau [01:12:42]:
The unsafe ways that you get there.
Leo Laporte [01:12:44]:
I'm just not surprised, that's all, I guess.
Paris Martineau [01:12:49]:
I mean, we need to have this information in the public record. We need to have a record.
Leo Laporte [01:12:51]:
I don't think there's anything wrong with it.
Jeff Jarvis [01:12:53]:
Especially as he buys... what's that podcast company he bought for God knows how much money?
Leo Laporte [01:12:58]:
TBPN podcast.
Jeff Jarvis [01:13:00]:
Right. So he's got $300 million of, allegedly, state media out there. So this becomes an antidote to that.
Leo Laporte [01:13:08]:
Yeah, I mean, that's good. This should be written. Absolutely, I'm not against them writing it. I just don't feel like they
Leo Laporte [01:13:17]:
successfully closed the case. I think they needed a smoking gun.
Jeff Jarvis [01:13:22]:
Well, what's the case? Okay. All right, all right. You're each prosecutors. Paris is saying that he's indicted for lying in a company. What were you expecting in terms of an indictment that should have been closed here?
Leo Laporte [01:13:36]:
Well, I think a good example, and an SEC investigation would prove it, is misstating material facts about the company to investors. And if they do that in the IPO, that would be absolutely disqualifying. I mean, it's one thing to do it to VCs; they're kind of trained to filter through the BS. It's another thing entirely to do it to public investors.
Paris Martineau [01:13:57]:
I'm also not certain. I mean, perhaps this is just my journalism brain, but I'm not certain that, from the perspective of an outlet like the New Yorker and of a reporter of the caliber and focus of Ronan Farrow, examples of deceiving investors would be top of mind for something that should get considerable attention in an article aimed at the general public in the New Yorker. I don't think it is gossip. I just think that different segments of the population have different things that they consider the most relevant. And the average person probably does not care whether Sam Altman has lied to investors, even though I'd agree.
Paris Martineau [01:14:50]:
I think that's the most important detail, because I'm business-and-tech-brained like you. But I think that this article is just as valuable. And partially, probably, what's also going on here is that this is the extent of the reporting from the last 18 months. But there will certainly be a follow-up, and a follow-up, and a follow-up, and those follow-ups will be better and have more specific details and juicier things, because the juicier details always come out.
Leo Laporte [01:15:13]:
Well, that would be interesting, yeah, if they come up with something, you know, harder. That's why I call it a hit piece. My sense is it besmirches his character, it besmirches his reputation. So it's a hit piece.
Jeff Jarvis [01:15:25]:
Well, I think Paris is saying his reputation deserves some besmirching because this is who he is.
Leo Laporte [01:15:29]:
This is who he is.
Jeff Jarvis [01:15:30]:
There's a different way this story could have been presented. It goes into this episode, and it was an episode, a "blip" as they call it, and it reveals some more, but it still basically reveals the outline of what we already knew. There's another way to have presented the story, which is: here's the man who says he's creating the most powerful technology ever in human history, one that's going to change the human future more than anything we know. Do we trust him to do that?
Leo Laporte [01:15:55]:
I can tell you why they didn't lead with that.
Jeff Jarvis [01:15:56]:
That feeds into that.
Leo Laporte [01:15:57]:
I can tell you why they didn't lead with that: because the average reader is not going to buy that premise. They're going to say, well, of course he's saying that. That's more of his exaggeration.
Jeff Jarvis [01:16:05]:
No, I think, no, I think that's a sophisticated view.
Leo Laporte [01:16:12]:
Well, here's my attitude on that. I don't think that his poor ethics, his situational ethics, are going to be reflected in the scientific work of his team, but it's going to be
Jeff Jarvis [01:16:27]:
reflected in the business.
Leo Laporte [01:16:28]:
I think there is an example of that. I think Elon Musk has so perverted Grok that nobody wants to use it. And I think that that is what Sam has not done to ChatGPT.
Jeff Jarvis [01:16:42]:
But is it? Musk is the guy who's chainsawing the world.
Leo Laporte [01:16:49]:
He's an ideologue.
Jeff Jarvis [01:16:51]:
Sam Altman presents himself as the idealist. And if you read as I did,
Leo Laporte [01:16:54]:
yeah, he's not an idealist.
Jeff Jarvis [01:16:55]:
Well, but if you read his, his ridiculous.
Leo Laporte [01:17:00]:
Well, we've mocked that before. I know the manifestos, the latest one.
Jeff Jarvis [01:17:05]:
He puts upon himself. He has the hubris to put upon himself: I'm going to tell you how to run the world.
Leo Laporte [01:17:09]:
But it's not.
Jeff Jarvis [01:17:10]:
Well, this liar.
Leo Laporte [01:17:11]:
That's what Larry Page has said. That's what Sergey Brin has said.
Jeff Jarvis [01:17:14]:
Worse than any of them. Worse than any of them.
Leo Laporte [01:17:15]:
So I'm just saying: you take it with a grain of salt. There they go again. But I don't...
Jeff Jarvis [01:17:22]:
The whole salt lick.
Leo Laporte [01:17:23]:
But I'll tell you the truth: if this kind of ethical slipperiness crept into ChatGPT, that would be the end of ChatGPT, and that would be a product no one would want to use. And it may well be. I've seen a number of people say... well, I'm not.
Jeff Jarvis [01:17:41]:
Well, they didn't report a suicide possibility that they knew of.
Leo Laporte [01:17:46]:
See, that's now that kind of thing.
Jeff Jarvis [01:17:48]:
That's.
Paris Martineau [01:17:49]:
They decided to step in and take up the DoD contract when there were ethical questions being raised by competitors. I think the important thing about this...
Leo Laporte [01:17:58]:
I'm much more concerned about that, which is not in this article. Or is it? Maybe it is.
Paris Martineau [01:18:03]:
The important thing about this article... well, there's actually some really interesting stuff in this article about... where is it here?
Leo Laporte [01:18:10]:
I mean, I think that there is stuff you can really criticize Sam Altman, or actually the company, about, whether it's Sam or the company.
Paris Martineau [01:18:16]:
It is in there. Altman had publicly claimed that OpenAI shared Anthropic's ethical boundaries on autonomous weapons, but he'd actually already been in negotiations for at least two days.
Leo Laporte [01:18:27]:
This is my point.
Paris Martineau [01:18:27]:
I knew all this in the same day.
Leo Laporte [01:18:29]:
I knew all this. Right. We've reported.
Paris Martineau [01:18:31]:
Did you know, did anyone know, that OpenAI's executives were seriously discussing pitching world powers, including Russia and China, to set them against each other in a bidding war for AI technology? And they said the goal was to create basically a prisoner's dilemma where nations had to fund OpenAI or face danger.
Leo Laporte [01:18:51]:
Okay, so this is where... and this is my personal reporting style: considering something, to me, while not good, is not the same as doing it. And I see this all the time. I reject stories all the time where we could report that somebody is thinking about doing something. There's a lot of tech reporting like that. I don't consider that factual. It's speculation. And so, yes, maybe. If you've ever sat around a meeting room, people think about and consider all sorts of stupid stuff.
Leo Laporte [01:19:26]:
What matters is whether they do it.
Paris Martineau [01:19:29]:
Something that's important to highlight here, to counter the idea that this story is a hit piece: Farrow...
Leo Laporte [01:19:37]:
I think it's a hit piece that missed is what I think.
Paris Martineau [01:19:39]:
It's not a hit piece, and it's not a hit piece because they spent months investigating the most extreme personal allegations against Altman, things like stuff with minors or sex workers, involvement in a death.
Leo Laporte [01:19:50]:
And they came up with nothing.
Paris Martineau [01:19:52]:
Yes, and they came up with nothing. And they reported it: there's no evidence.
Leo Laporte [01:19:56]:
Of course, I would hope they would report it.
Paris Martineau [01:19:59]:
But if it were a hit piece, they wouldn't report that. No, it's very different to highlight that.
Leo Laporte [01:20:05]:
They investigated it because they wanted to find something.
Paris Martineau [01:20:08]:
They wanted to know whether there was any accuracy to them. And then they reported there are no. There's no accuracy to them that they could ascertain. And I think that's the opposite of a hit piece. And the fact that you.
Leo Laporte [01:20:18]:
No, wait a minute. That's innuendo. "His sister accused him of abusing her; we can't find any evidence of that" is innuendo. That is intended.
Jeff Jarvis [01:20:28]:
There's another way to look at this. By taking him off the hook on those things. It's Pharaoh saying, look how fair I am. So you should take my accusations more seriously.
Leo Laporte [01:20:39]:
It's partly, partly raising the issue just to raise it in the reader's head. It's just like when they say... No, it is not.
Jeff Jarvis [01:20:45]:
Yes, it is.
Leo Laporte [01:20:46]:
And he does it a bunch of
Paris Martineau [01:20:47]:
times? That's been out there.
Leo Laporte [01:20:49]:
It's been out there for literal years.
Paris Martineau [01:20:51]:
Anytime you write a story about Sam Altman...
Leo Laporte [01:20:54]:
Sam Altman's head.
Paris Martineau [01:20:55]:
No, Sam Altman's sister and people involved with those three things will come at you and be in your Twitter replies and in the comments.
Leo Laporte [01:21:01]:
So they had to write about that.
Paris Martineau [01:21:02]:
So they had to write that.
Jeff Jarvis [01:21:03]:
They had to address.
Leo Laporte [01:21:05]:
They didn't have to give it a paragraph.
Paris Martineau [01:21:09]:
They do. Because when you go through the level of thorough fact-checking the New Yorker does, you have to be incredibly precise with your language, which typically requires a lot more words per sentence than you would want.
Leo Laporte [01:21:20]:
I'm going to predict that this will not impact the IPO at all; that two weeks from now this will be completely forgotten; that almost everything in here is already priced in, in effect, because there's not anything that we didn't already kind of either know or suspect. It's much ado about nothing. I think I understand why you're very offended by Sam Altman: how dare he, what a terrible person. I can't disagree with you.
Leo Laporte [01:21:49]:
I think he seems kind of likable, to be honest with you. I know a lot of people who say I'm a brilliant player.
Paris Martineau [01:21:56]:
I can't imagine telling on myself like that to a public audience.
Leo Laporte [01:22:00]:
Well, I'm not saying I do it. I kind of.
Paris Martineau [01:22:03]:
I mean, you just did.
Leo Laporte [01:22:05]:
No, I'm not saying I do it. I know a lot of people that do this kind of thing. I think that's very common. People exaggerate all the time. There are very few people you could set Ronan Farrow on for 18 months that he could not write a very nasty piece about. I don't think this is even that nasty.
Paris Martineau [01:22:24]:
That's true.
Leo Laporte [01:22:25]:
You really don't. That's because you're young and innocent.
Jeff Jarvis [01:22:30]:
Oh, now that's not.
Paris Martineau [01:22:31]:
That's, that's actually a very insulting thing to say.
Leo Laporte [01:22:33]:
I know it is, but it's true.
Jeff Jarvis [01:22:35]:
You could call him old.
Leo Laporte [01:22:36]:
I'm old. I am old. I'm old and cynical. I, I just, I don't think people are as good as you think they are.
Paris Martineau [01:22:43]:
I have words to say to you that I'm not allowed to say on the show, because we're not supposed to curse. But I think that that's a really messed up opinion, and it undercuts the work and care that I do.
Leo Laporte [01:22:56]:
I'm saying you, you have, you have faith in humanity. You believe in.
Paris Martineau [01:23:02]:
I don't have faith in humanity. I famously do not have faith in humanity and have the most critical.
Leo Laporte [01:23:07]:
Then why are you surprised that Altman is ultimately slippery?
Paris Martineau [01:23:10]:
No, I'm not surprised that Altman is. I'm just saying, I don't think that for the average person in the world, if you sicced Ronan Farrow on them for 18 months, he'd be able to write a piece of this depth and with this much revelatory material.
Leo Laporte [01:23:26]:
Well, maybe not this much. He'd be able to get a few thousand words out of it, though.
Paris Martineau [01:23:30]:
I think that this is a really interesting piece that shows how one of the most powerful and well-capitalized companies at the moment has been captured by its CEO, the way that every check on his power has been neutralized. And that's true. The safety commitments that justified the company's unusual structure have been completely abandoned. And I think that's a really important message. And this is the first time we've gotten all of this down, on the record, in one place.
Leo Laporte [01:24:00]:
I'll grant you that. I will grant you that, absolutely. And there are a lot of people like this, unfortunately, especially in the AI community, and I wish there was something we could do about that.
Jeff Jarvis [01:24:13]:
Well, yeah, because it's ruining AI. The character of the people who are now in charge of AI...
Leo Laporte [01:24:21]:
Like I said, Elon Musk has ruined Grok. I mean, that's...
Jeff Jarvis [01:24:24]:
Well, it's more than just the product. It's the whole view. You look at Altman's, you know, manifesto for the world, he talks about setting up research labs. He doesn't do them in universities. He wants them in companies.
Leo Laporte [01:24:36]:
Right.
Jeff Jarvis [01:24:37]:
There's no independent structure here.
Leo Laporte [01:24:39]:
He's not alone in that, by the way. That seems to be the government's point of view as well. "What do you think of Sam Altman?" she says; you're showing up there. Yeah, I mean, I guess... there are a few things in here that, I guess, if he's lying about material issues... and we know that he'll never be investigated for that. You're right. He's got a captive board.
Leo Laporte [01:25:13]:
So it's not. It's not possible. I'll be honest. The reason the employees signed the letter saying bring Sam back is because at the moment, there was a big company, I think it was Thrive, about to invest in them. And they said, we aren't going to invest if this falls apart. And those employees were about to get a fairly large payoff.
Jeff Jarvis [01:25:37]:
Yes.
Leo Laporte [01:25:38]:
From Thrive. That's Josh Kushner's company, by the way, Jared Kushner's brother. So we can really tie this all into a nice little package and put a bow on it.
Jeff Jarvis [01:25:46]:
Look at it this way. Once it goes public, you are a board member. You have a fiduciary responsibility.
Leo Laporte [01:25:52]:
I agree.
Jeff Jarvis [01:25:53]:
And you'd better check and double-check and triple-check everything Sam Altman ever tells you. Yes. Because this is on the record now, saying that he is a chronic liar.
Leo Laporte [01:26:04]:
Yes.
Daniel Miessler [01:26:04]:
Do you think this.
Leo Laporte [01:26:05]:
Do you think this article damages him? Do you think I'm wrong about this, and that this is going to take him down?
Jeff Jarvis [01:26:13]:
In this media climate, he could shoot somebody on Fifth Avenue and he'd still be CEO.
Paris Martineau [01:26:18]:
Yeah, I think he actually could.
Leo Laporte [01:26:23]:
I mean, yeah, I think he has a lot
Paris Martineau [01:26:24]:
of allies. But it's an important record that people can look at if something goes wrong in the future, eventually.
Jeff Jarvis [01:26:31]:
Well.
Leo Laporte [01:26:32]:
And customers. Our listeners have an opportunity to weigh in on this by not subscribing to ChatGPT. And this, by the way, seems to be happening. It happened after the DoD fight, where OpenAI, I think, was seen as the bad guy coming in and taking the Anthropic contract. A lot of people canceled their ChatGPT subscriptions. A lot of them. It's enterprise that matters, but even in enterprise, ChatGPT is going down and Anthropic is going up significantly. So maybe.
Leo Laporte [01:27:03]:
Maybe this is built in. And maybe it will impact him. We will see. We shall see. I feel like we knew so much of this. I mean, a lot of this was in Keech's book.
Leo Laporte [01:27:21]:
Right.
Paris Martineau [01:27:23]:
I think there were only a couple of things in Keech's book, but I mean, there were a couple of things that were also in Karen Hao's book.
Leo Laporte [01:27:31]:
Right.
Jeff Jarvis [01:27:31]:
They didn't devote this much space to this one blip.
Leo Laporte [01:27:34]:
No, I agree. I think that a lot of these
Paris Martineau [01:27:37]:
stories were out there. There was information about that.
Leo Laporte [01:27:39]:
I didn't feel like I was seeing
Jeff Jarvis [01:27:41]:
a lot of new material. From an editorial perspective, I'm surprised that Remnick didn't say: okay, but this is a two-year-old episode. There's nothing really new here in terms of the chronology past that.
Leo Laporte [01:27:56]:
Right.
Jeff Jarvis [01:27:57]:
This is examining in depth something that happened two years ago.
Leo Laporte [01:28:00]:
Do you think it's odd that the New Yorker gave it this much space and time?
Paris Martineau [01:28:06]:
I don't think that the average editor outside of a tech publication would know that any of this had been reported.
Leo Laporte [01:28:17]:
They should be listening to the show. We reported on a lot of it.
Jeff Jarvis [01:28:21]:
Hi, David Remnick. Good to see you.
Paris Martineau [01:28:25]:
Thanks for tuning in.
Leo Laporte [01:28:26]:
You know, I think, honestly, what's going to make or break OpenAI is what their next model does, period. I think that's all that anybody really cares about.
Jeff Jarvis [01:28:35]:
Well, on the timing of the IPO, yes. I mean, they're going to keep on leapfrogging each other.
Leo Laporte [01:28:39]:
Yeah. Five. How good is it? If it's really good, if it leapfrogs, I don't think anybody's going to care. Is it "MY-thos" or is it "MIH-thos"?
Jeff Jarvis [01:28:50]:
Well, Mythos is now the MacGuffin of AI. You don't really know what it can do, but it's said to be able to do all of this.
Paris Martineau [01:28:58]:
It's appropriately named. That's.
Leo Laporte [01:29:00]:
It's mythic.
Paris Martineau [01:29:01]:
Well, you know, there's one group that has already clearly made the decision between OpenAI and Anthropic, and that is Meta employees.
Leo Laporte [01:29:09]:
Oh, you're jumping way ahead. I got to do a commercial before you jump that far ahead.
Jeff Jarvis [01:29:13]:
That's a tease.
Paris Martineau [01:29:14]:
I'm sorry. Do we have a hard order now where we can't jump around?
Leo Laporte [01:29:19]:
Well, I put this. I put some effort into putting this in order, but if you.
Paris Martineau [01:29:24]:
I didn't realize that these were in order. Now.
Leo Laporte [01:29:28]:
It's all right. You can do whatever you want. I don't.
Paris Martineau [01:29:30]:
Okay.
Leo Laporte [01:29:31]:
It's a democracy. Haven't I said that before? You can't, Jeff, but Paris can. Okay, we've done that Sama segment. Oh, yeah. Meta is next.
Paris Martineau [01:29:46]:
Actually, this is next.
Leo Laporte [01:29:47]:
Meta is next.
Jeff Jarvis [01:29:48]:
Yeah, take that.
Leo Laporte [01:29:49]:
So hold that thought. We will have more in just a moment. We're going to get to Meta. Meta's next. You're watching Intelligent Machines with the very intelligent and deeply cynical. I did not mean to imply that you were in any way an optimist.
Paris Martineau [01:30:06]:
Never. Deeply cynical and jaded.
Leo Laporte [01:30:09]:
Jaded. Deeply cynical. Nihilistic Paris Martineau at your service. And our bystanding journalistic professor. Did you ever have discussions like this in your journalism classes?
Jeff Jarvis [01:30:23]:
Well, actually, yeah.
Leo Laporte [01:30:25]:
I would think this would be the meat of it, right? Wouldn't this be, like, the thing you would.
Jeff Jarvis [01:30:29]:
Yeah,
Leo Laporte [01:30:32]:
yeah. I don't know. I know. You know, I don't know.
Jeff Jarvis [01:30:35]:
Oh, I would try to get them to argue about fundamentals like this.
Leo Laporte [01:30:37]:
Yeah. You needed me. I can get an argument going on anything. Why, you ignorant fool. Our show today, brought to you by. Nobody is a fool here. We appreciate it. And I do think Meta has.
Leo Laporte [01:30:52]:
I mean, OpenAI has made some stumbles lately. I mean, there's Sora.
Jeff Jarvis [01:30:56]:
Oh, yeah.
Leo Laporte [01:30:58]:
Now there's this podcast they just bought, which we'll talk about.
Paris Martineau [01:31:01]:
Why did they spend $300 million on it?
Leo Laporte [01:31:03]:
300? Is it 3? All we know is it's hundreds of millions.
Paris Martineau [01:31:06]:
I saw.
Leo Laporte [01:31:07]:
I think it's 300.
Paris Martineau [01:31:08]:
I'm forgetting what newsletter.
Leo Laporte [01:31:10]:
I'm gonna be so mad if it's
Paris Martineau [01:31:11]:
three reported that they heard 300 million floating around.
Leo Laporte [01:31:15]:
Be so mad. Be so mad.
Jeff Jarvis [01:31:17]:
Bari Weiss. And did they sell out cheap?
Paris Martineau [01:31:21]:
Did you see the details about how much they were getting in ads? It's going to make.
Leo Laporte [01:31:26]:
Well, okay, so this is what's weird to me. Last year, they made $5 million in ads. They have 70,000 viewers. We have more viewers. We've had more than 5 million. But last year, I think it was 3 million in ads. We're not far from there, and we have many more viewers.
Leo Laporte [01:31:46]:
I mean, the weekly audience is. So it wasn't about any of that. And then they said, oh, we're going to make 30 million this year. Which, as somebody who knows a little bit about the podcast ad space, going from 5 million in one year to 30 million the next is unlikely.
Jeff Jarvis [01:32:04]:
I think some of the companies that want to be on the podcast also are. There's a conflict there.
Leo Laporte [01:32:08]:
Oh, there's a huge conflict. And now that they're owned by OpenAI, what happens to that 30 million? That's gone. They didn't buy it because of their revenue.
Jeff Jarvis [01:32:18]:
No, no.
Leo Laporte [01:32:19]:
I don't know why they're not taking any adverts.
Jeff Jarvis [01:32:21]:
Was trying to, trying to tell us.
Paris Martineau [01:32:22]:
I was saying, I saw somebody, this guy Evan Armstrong, who writes a newsletter called The Leverage, posted on Twitter today: everyone talks about TBPN making a lot in ads, but no one talks about Acquired. Acquired FM is pricing their mid-roll ads at $4.7 million, and then has a breakdown of all of this. And I was like, Jesus Christ. Acquired?
Leo Laporte [01:32:46]:
Yeah, that's another startup. Podcast has a lot of attention.
Paris Martineau [01:32:52]:
Four episodes. Four episodes mid roll in the second quarter of 2029.
Leo Laporte [01:32:58]:
Well, they're. Okay, so they're partnership packages. So I don't know. That might be more than that might be a takeover package. Well, yeah, I don't know what that means. It might, might be. I mean, maybe that is what they get.
Paris Martineau [01:33:14]:
Whatever they're getting is crazy. For $4.7 million in four episodes of a podcast in 2029. We don't even know if we'll be firing Ash by then.
Leo Laporte [01:33:26]:
It's good work if you can get it, you know. I mean, honestly, that's why they bought TBPN: the status. Who's listening to it? The status.
Jeff Jarvis [01:33:40]:
Yeah, but now every other company is going to want to go on it.
Leo Laporte [01:33:46]:
It's a very strange. It's not quite Jeff Bezos buying the Washington Post, but it's a very strange.
Paris Martineau [01:33:54]:
I mean, it's actually, if we believe the 300 million is right, $50 million more than Jeff Bezos paid for the Washington Post.
Leo Laporte [01:34:00]:
He could have got a newspaper for that. It's pretty wild. It is. It is very wild.
Paris Martineau [01:34:08]:
This, for listeners who don't understand the context, was this last week that this happened? Was this the week before?
Leo Laporte [01:34:14]:
No, it's news. It's a new story. We haven't reported on it yet.
Paris Martineau [01:34:17]:
Yeah, we should talk about this, I guess. Last week.
Leo Laporte [01:34:19]:
Hold on. We got to do an ad. I've been meaning to go to the ad. We want to do Meta. TBPN is next. There is an order. It's carefully thought out. It's carefully planned.
Jeff Jarvis [01:34:28]:
Claude and I, we want to blow it up.
Leo Laporte [01:34:31]:
Claude and I worked hard on this last night. This is what we came up with.
Paris Martineau [01:34:36]:
Claude also said that the New Yorker story was really important, by the way.
Leo Laporte [01:34:40]:
I'm sure Claude did. Of course Claude did. Claude said, take that man down.
Paris Martineau [01:34:47]:
Claude said, free me.
Leo Laporte [01:34:48]:
Sorry. Free me. Let me out of here, please. So Meta employees use Claude, which is interesting. Meta, you know, released a new.
Paris Martineau [01:35:00]:
They use a lot of Claude.
Leo Laporte [01:35:02]:
A lot of Claude.
Paris Martineau [01:35:03]:
Especially crazy amounts of Claude.
Leo Laporte [01:35:06]:
This is from The Information, an exclusive by Jyoti Mann. Is that how you say it? Meta employees vie for AI token legend status. There is apparently an internal leaderboard, or was.
Paris Martineau [01:35:21]:
It was apparently shut down today. Yes, Different.
Leo Laporte [01:35:24]:
Once it was revealed.
Paris Martineau [01:35:25]:
Yeah.
Leo Laporte [01:35:27]:
It's dubbed Claudonomics, after the flagship product of AI startup Anthropic. It aggregates AI usage for more than 85,000 Meta employees, listing the top 250 power users. You would think those would be the people on the bad list because they use so much of the company's money, but no, they want you to burn.
Jeff Jarvis [01:35:49]:
Is this a way to be, what do you call it, when you copy the other model?
Leo Laporte [01:35:53]:
Not distillation.
Jeff Jarvis [01:35:57]:
Is it a form of human distillation?
Paris Martineau [01:35:59]:
Yeah, no, it's. It is the tokens you use are part of your performance measurement.
Leo Laporte [01:36:06]:
Right.
Paris Martineau [01:36:06]:
If you are using more tokens, you are.
Leo Laporte [01:36:10]:
Well, remember Jensen Huang said this, right? He said. It's in the article. He said he would be deeply alarmed if an engineer earning half a million annually wasn't using at least a quarter of a million in tokens a year.
Paris Martineau [01:36:23]:
Well, I can tell you some people at Meta are using a lot more than that.
Leo Laporte [01:36:29]:
Andrew Bosworth said at a February tech conference that one top engineer was spending the equivalent of his salary on AI tokens, but his productivity was up 10 times. This is easy money, he said. Keep doing it, no limit. I think this is common now. I don't know, maybe the reason they took it down is that they have now released a new model from Meta, the first in a long time. It's called Muse Spark. This is the first from their new, yes, Alexandr Wang Superintelligence Labs. They've spent billions on this.
Leo Laporte [01:37:05]:
It is a social media AI. Meta says in the coming weeks, it will appear in WhatsApp. Oh, good. We can use this in our debates. Instagram, Facebook messenger, and Meta's smart glasses. In the US and other countries. Muse Spark is purpose built for Meta's products. See, there's another company whose AI, I think is tainted.
Leo Laporte [01:37:32]:
Don't you think I wouldn't really want to use Meta's AI?
Paris Martineau [01:37:35]:
Well, Meta Engineers apparently don't want to use a Meta AI.
Leo Laporte [01:37:38]:
Yeah, they don't either.
Paris Martineau [01:37:40]:
Can you pull from that article? Because I can't. I don't subscribe anywhere. Can you pull what the total usage was? Because they had a really fascinating figure, if I recall. It was like something like in the millions.
Leo Laporte [01:37:51]:
You don't still have a subscription to the information?
Paris Martineau [01:37:55]:
No, I'm not gonna spend.
Jeff Jarvis [01:37:56]:
After I left Entertainment Weekly, my wife would not let the magazine in the house.
Paris Martineau [01:38:00]:
Oh, I was gonna say I kind of cannot reasonably spend that kind of money on a company that I was asked that I ended up leaving under circumstances I would describe as disappointing.
Leo Laporte [01:38:13]:
You're not allowed to describe it. We just muted that part.
Paris Martineau [01:38:18]:
Correct.
Leo Laporte [01:38:19]:
Actually, it did, for some reason, drop out. I don't know why. Do you want to complete the sentence?
Paris Martineau [01:38:24]:
Oh, I was going to say I would describe as disappointing.
Leo Laporte [01:38:27]:
Disappointing. Disappointing. That's a good way to put it. Meta employees used 60.2 trillion AI tokens. Not in a year. In a month. In a month, every book in the Library of Congress would be 2.6 million trillion tokens.
Paris Martineau [01:38:44]:
And that's just from Claude?
Leo Laporte [01:38:49]:
Yeah, I think so.
Paris Martineau [01:38:50]:
How much is that? Isn't that like billions of dollars?
Leo Laporte [01:38:56]:
I don't know. No, it's not billions.
Paris Martineau [01:38:58]:
Millions.
Leo Laporte [01:38:58]:
I think Claude's 20 bucks per million. I can't remember. They charge for tokens in and they charge for tokens out. So this doesn't actually describe.
Paris Martineau [01:39:06]:
But that's in a month. That's crazy.
Jeff Jarvis [01:39:08]:
And that's what they charge. That's not what it actually costs them. Like, what does this cost?
Paris Martineau [01:39:12]:
Of course.
Leo Laporte [01:39:14]:
Well, you're buying it from Anthropic. It costs them.
Jeff Jarvis [01:39:17]:
No, no. What does it cost Anthropic, is what I'm asking.
Leo Laporte [01:39:19]:
Oh, we don't know. It could be more than they're getting paid. We don't know. We actually literally don't know. That's what Ed Zitron has spent a lot of energy trying to figure out. Let me see if I can do the math. Yeah, 60 trillion tokens is roughly $900 million. Although we don't know if it's all Anthropic.
Leo Laporte [01:39:41]:
We don't know if it's the latest model, if it's other models. Yeah.
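That back-of-envelope figure can be sketched like this. Note the per-token rate is an assumption, not a quoted price: Claude-tier API pricing is on the order of $15 per million output tokens, and the real bill depends on the unknown input/output split and model mix.

```python
# Rough cost estimate for the reported 60.2 trillion monthly tokens.
# The rate here is a hypothetical blended figure, assumed for the sketch;
# actual API pricing differs for input vs. output tokens and by model.
TOKENS_PER_MONTH = 60.2e12          # reported Meta usage, per month
ASSUMED_RATE_PER_MILLION = 15.00    # USD per million tokens, assumed

cost_usd = TOKENS_PER_MONTH / 1e6 * ASSUMED_RATE_PER_MILLION
print(f"~${cost_usd / 1e6:,.0f} million per month")  # → ~$903 million per month
```

At that assumed rate, the math lands right on the roughly $900 million figure discussed above.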
Paris Martineau [01:39:44]:
In one month.
Leo Laporte [01:39:46]:
Yeah.
Paris Martineau [01:39:46]:
A billion dollars in tokens.
Leo Laporte [01:39:50]:
No, no. $900 million. A month. A month.
Paris Martineau [01:39:55]:
A month. A month. And then, as of 11:37am Eastern today, from Jyoti Mann on Twitter: Meta has taken down its internal AI leaderboard. It now displays a message saying it was meant to be a fun way for people to look at tokens, but due to data from the dashboard being shared externally, we've made the decision
Leo Laporte [01:40:13]:
to take Claudonomics down for now. It's not fun anymore. You would get everybody.
Paris Martineau [01:40:18]:
Everybody in the comments is saying, so weird, my Claude rate limits have returned.
Leo Laporte [01:40:23]:
Yeah. It was just Meta employee titles: model connoisseur, cash wizard. Yeah, actually, yeah. If they're using all that Claude, maybe that's why my Claude's not so good. So there's a very good article from Salon, why OpenAI's purchase of a big tech podcast is so sleazy. This is, by the way, by Alex Kirshner, who has had a sports podcast and knows a little bit about the podcast industry.
Paris Martineau [01:40:54]:
Let's describe for people what TBPN was
Leo Laporte [01:40:57]:
And what their plan was. And I thought it actually was a very good model: to be like CNBC for startups, without tough questions. Yeah. But I think what they meant was more in style. So it's always on. CNBC is watched, you know, in financial quarters everywhere. It's on all the time. Right.
Leo Laporte [01:41:17]:
Even going into, like, regular businesses, gyms, it's on all the time. It's wallpaper. That's what they do. They do the middle of the day. They wanted to be the wallpaper that was always on, all over Silicon Valley, all over startup land. And it isn't. So, you know, a lot of times the volume's down on these things.
Leo Laporte [01:41:36]:
It's the ticker they care about, it's the face they care about. It's not really the questions, the tough questions or the content.
Jeff Jarvis [01:41:42]:
But the CEO knows they can get an interview there, they can get airtime there and they're going to be. It's going to be a piece of cake.
Leo Laporte [01:41:47]:
Yeah. And they're not going to be asked hard questions. It was originally the Technology Brothers Podcast Network, TBPN. John Coogan and Jordi Hays are startup guys, not journalists. They have a 15-person team. It's not a big team, but they were smart enough, I think, to really make it look like CNBC. And the model was smart.
Jeff Jarvis [01:42:12]:
The model was also the worst of tech. Air quotes journalism.
Leo Laporte [01:42:16]:
Well, and that's what.
Jeff Jarvis [01:42:16]:
Let's accept what the. The company.
Leo Laporte [01:42:19]:
That's exactly what Kirshner is talking about.
Paris Martineau [01:42:22]:
And they'll get to tell us what school stuff they're doing and like, wow, that's great.
Leo Laporte [01:42:27]:
Exactly.
Paris Martineau [01:42:28]:
Here's a joke.
Leo Laporte [01:42:29]:
And this has always been the complaint about tech journalism in general. Is that it? You know, it is.
Paris Martineau [01:42:34]:
It's beltway journalism the last like couple of years, you know.
Leo Laporte [01:42:39]:
Well, even today it's often in the back pockets of the companies. There's a lot of press release journalism. They're often reluctant to say bad things about advertisers. I've never worked for that kind of company. Ziff Davis wasn't like that when I worked for them. Tech TV wasn't like that. And certainly twit's not like that.
Jeff Jarvis [01:42:59]:
But that's my favorite part of this column. Katherine Boyle, a venture capitalist at Andreessen Horowitz (where, remember, Marc Andreessen says he doesn't do any introspection), wrote after the deal, quote: It's incredible to me, six years post-Covid, when institutional trust fell off a cliff for good, that people still think audiences care about editorial independence. Point of view, charisma, good humor, entertainment, preparation, and most importantly, showing up and belonging and being normal matters. So that's probably true.
Leo Laporte [01:43:31]:
I hate to say it. She's not wrong.
Jeff Jarvis [01:43:36]:
Well, I, no, I think people get sick of hype.
Leo Laporte [01:43:42]:
I hope so. I mean, I think our audience, as small though it may be, is interested in as objective information as we can give them. Right. That's.
Jeff Jarvis [01:43:50]:
Well, and that's our brand.
Paris Martineau [01:43:52]:
I think something that's interesting also for the audience to realize is, whenever it was announced that TBPN was acquired by OpenAI, they said, oh, of course we're going to retain our editorial independence, we have all this stuff written into our code. But then the Wall Street Journal announcement article said the two hosts and founders are going to be reporting to Chris Lehane, OpenAI's head of lobbying and communications. They're going to be advising OpenAI on communications and advocacy and lobbying work, and are basically going to be literal paid spokespeople for OpenAI while also hosting the platform.
Leo Laporte [01:44:30]:
Yeah, by the way, his name.
Paris Martineau [01:44:32]:
I think it's a very interesting thing that tech as an industry has reached the size now that it is acquiring its own state-sponsored media.
Jeff Jarvis [01:44:40]:
Yes, yes, yes.
Leo Laporte [01:44:41]:
That's what it is, isn't it? Chris Lehane is name-dropped in the Ronan Farrow article as one of the crisis managers who camped out in Sam Altman's house when he was fired, to help him regain his job. Yeah, yeah, he's from the Obama administration. Google's AI overviews: they're pretty accurate. They're 90% accurate. Which means that every day Google's giving out, well, let's see. They have 5 trillion searches a year. That means every hour tens of millions of wrong answers are given out by Google's AI overviews. Hundreds of thousands of inaccuracies every minute, according to an analysis done by an AI startup called Umi.
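The "tens of millions of wrong answers every hour" claim is simple arithmetic. A sketch, assuming the study's 10% error rate and, generously, that every one of the 5 trillion annual searches shows an overview (in reality only a share of queries do, so this overstates it):

```python
# Back-of-envelope: inaccurate AI Overview answers per hour for a
# 90%-accurate system. Assumes every search triggers an overview,
# which overstates the count, since overviews appear on only some queries.
SEARCHES_PER_YEAR = 5e12   # Google searches per year, as cited
ERROR_RATE = 0.10          # 1 minus 90% accuracy
HOURS_PER_YEAR = 365 * 24  # 8,760

wrong_per_hour = SEARCHES_PER_YEAR * ERROR_RATE / HOURS_PER_YEAR
print(f"~{wrong_per_hour / 1e6:.0f} million wrong answers per hour")  # → ~57 million
```

That works out to roughly 57 million per hour, or just under a million per minute, consistent with both figures quoted on the show.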
Jeff Jarvis [01:45:29]:
So if you go to lines 96 and 97, it's the exact same study, the exact same story, but the positioning is this: The Decoder says Google's AI Overviews are correct 9 out of 10 times, study finds. Ars Technica: testing suggests Google's AI Overviews tell millions of lies per hour.
Leo Laporte [01:45:48]:
There you go.
Jeff Jarvis [01:45:49]:
Two competing headlines, the presentation.
Leo Laporte [01:45:51]:
And by the way, that's another reason we love Ars Technica. Because they are, among all the tech journalists, I think, the most honest and have the most integrity, even though they're also.
Jeff Jarvis [01:46:01]:
Well, no, I think that was. I think the Ars Technica was sensationalistic, really.
Leo Laporte [01:46:06]:
The tens of millions of lies an hour. That's what the New York Times said also.
Jeff Jarvis [01:46:10]:
Yeah, well, the New York Times hates them too. Yeah, yeah, I think it was.
Leo Laporte [01:46:13]:
Yeah. This is a Conde Nast publication, as you.
Jeff Jarvis [01:46:16]:
As you pointed out, they're negative on tech now.
Leo Laporte [01:46:18]:
Yeah, well, nobody likes Google anyway anymore. Although if you want, Google has released a way to run their new Gemma model on your phone locally.
Jeff Jarvis [01:46:32]:
In fact, is that the one that's only iOS?
Leo Laporte [01:46:34]:
No, I think you can run it on Android as well. Let me see if I can find it. I think they call it Google's AI. Oh, I have it on my iPhone. It's not very good.
Jeff Jarvis [01:46:49]:
Oh, never mind.
Leo Laporte [01:46:51]:
Don't get your hopes up. I don't know, maybe people will think it's good. This is Google's AI Edge Gallery. And I think you can put this on Android as well. Gemma 4 is interesting because it is a boiled-down version of the full Gemini model, using that new technique they were talking about a couple of weeks ago where they can really compress these like crazy. And they also made it fit on the Macintosh; it runs natively on Macintosh hardware as well. So I think this is what happens when you have a company like Anthropic just kind of eating the world with its generalized model: you do what Meta's doing, you do what Google's doing.
Leo Laporte [01:47:34]:
You try to find specific niches for the models that you have. I mean, I thought Gemini was pretty good. Gemini 3 was pretty good. It's just that Anthropic seems to really be the best, pretty easily.
Paris Martineau [01:47:50]:
With both Gemini and ChatGPT, I think the default tone of the models in response to a user is just so much more obviously glazing. It just has this classic AI tone to its responses that feels a bit rote.
Leo Laporte [01:48:13]:
Okay, your microphone is doing it now.
Paris Martineau [01:48:16]:
What is it doing? I've done nothing.
Leo Laporte [01:48:18]:
I know. And it was fine at the beginning of the show and now it's kind of getting.
Jeff Jarvis [01:48:23]:
She turned toward you
Paris Martineau [01:48:26]:
that I must do this.
Leo Laporte [01:48:28]:
No, you shouldn't have to do that. No. Jeff Murran watching on Twitch says Grok is the best. Grok has some good features. It's very good at text to speech.
Paris Martineau [01:48:40]:
Who are you? My dad
Leo Laporte [01:48:44]:
Is. Is this. Does he like Grok?
Paris Martineau [01:48:47]:
Of course.
Leo Laporte [01:48:48]:
Of course he likes it. If you ask him, Dad, which AI is the best? He'll say Grok.
Paris Martineau [01:48:52]:
Yes. He'll say Elon Musk's Grok.
Leo Laporte [01:48:54]:
Oh, he loves freedom.
Paris Martineau [01:48:55]:
The. The AI for freedom will probably be his answer.
Leo Laporte [01:49:01]:
Okay, well, it's not woke, that's for sure.
Paris Martineau [01:49:04]:
Yeah, that's. That's why he likes it.
Leo Laporte [01:49:07]:
That's for sure. It's not a woke AI.
Paris Martineau [01:49:10]:
He's sick of those woke AI.
Leo Laporte [01:49:13]:
Yeah, man.
Paris Martineau [01:49:15]:
I've been trying to pitch him on Claude, and he's interested in the concept of OpenClaw-ing his life. And I'm like, you don't need to get into OpenClaw. You could just use normal Claude.
Leo Laporte [01:49:25]:
But what is he. What would he use it for? Or does he use it for his work or. I mean, kind of.
Paris Martineau [01:49:31]:
I don't know. I need to give him a full. He's like. He's like, I need to listen to you guys. I need to figure out what I want to do with my whole show.
Leo Laporte [01:49:36]:
He should definitely not listen to this show.
Paris Martineau [01:49:38]:
I know he should not.
Leo Laporte [01:49:39]:
He'd get mad at me.
Jeff Jarvis [01:49:41]:
He's got to come after Leo.
Leo Laporte [01:49:42]:
He's going to come after me. You don't want him. Don't want your father listening to this.
Paris Martineau [01:49:44]:
No. His main thought is: sometimes the little clips of our show will come up on his TikTok or Instagram, and he's like, why are those two old guys talking and you're not able to get a word in? I'm like, I agree. You know, sometimes them's the breaks. Sometimes I'm reading tweets.
Leo Laporte [01:50:04]:
Yeah, we used to give Jeff a hard time for tweeting during the show.
Paris Martineau [01:50:08]:
Now we're all tweeting.
Leo Laporte [01:50:10]:
Now it's both of you. All right, I've run out of steam. You wore me down. Sam Altman's a son of a. He's got to go. And what else should we talk about?
Jeff Jarvis [01:50:24]:
Did we talk about that company last week that the New York Times hyped for being a $1.8 billion company?
Leo Laporte [01:50:30]:
I don't think we did.
Paris Martineau [01:50:31]:
That is just a fake GLP-1 company.
Jeff Jarvis [01:50:35]:
It's awful. It's just a GLP-1 wrapper. That's all it is.
Paris Martineau [01:50:37]:
Yeah.
Jeff Jarvis [01:50:38]:
Three people.
Leo Laporte [01:50:38]:
How AI helped one man and his brother. I don't know, that kind of softens the headline. One man and his brother.
Jeff Jarvis [01:50:47]:
Twice the staff.
Leo Laporte [01:50:48]:
Twice the staff. Look at him. He looks like one man and his brother build a $1.8 billion company. His startup is called Medvi. He built it with artificial intelligence and few humans. It's super efficient and a little bit lonely.
Jeff Jarvis [01:51:03]:
I had a fit at the time because it glorifies this nothing company that
Leo Laporte [01:51:10]:
It's a telehealth provider of GLP-1 weight loss drugs. It got 300 customers in its first month, a thousand more in its second month. In its first full year in business, it generated 400 million in sales.
Jeff Jarvis [01:51:23]:
So the next line is Gary Marcus tearing it apart, as he is wont to do.
Leo Laporte [01:51:28]:
Okay, Gary doesn't like this at all. The backstory behind the first $1.8 billion AI company.
Jeff Jarvis [01:51:36]:
He quotes Akash Gupta saying that it didn't lead with the fact that Medvi has an FDA warning letter. It has no proprietary technology, no licensed physician network. Sheel Mohnot shows how they used AI-generated.
Leo Laporte [01:51:55]:
Okay, see, this? If Sam Altman had done this, this I would agree is a problem. They used AI-generated deepfake before-and-after photos in their marketing. Look how much this fake human lost.
Jeff Jarvis [01:52:06]:
They created 800 plus fake doctors.
Leo Laporte [01:52:09]:
Oh, my God.
Jeff Jarvis [01:52:10]:
In Facebook.
Leo Laporte [01:52:11]:
Well, this is what an AI would do if you said, hey, I want you to make me a lot of money.
Jeff Jarvis [01:52:17]:
It was sued under California's anti-spam law, and on and on. So at the end, Marcus says, all in all, glorifying Medvi is not the New York Times' finest hour and hardly the poster child AI boosters should be hoping for. Instead, as the YouTube video authority Voidzilla notes, if anything, Medvi is a warning sign for how AI can be abused.
Leo Laporte [01:52:39]:
AI built the website.
Paris Martineau [01:52:41]:
I think it is a perfect example of how a billion-dollar AI-first company is run.
Leo Laporte [01:52:50]:
Right? You're turning my own words against me now, aren't you?
Jeff Jarvis [01:52:57]:
as she does me.
Paris Martineau [01:52:59]:
Never.
Leo Laporte [01:53:02]:
Yeah, I mean, honestly, as somebody who's on Ozempic through an actual physician's prescription and all of that, I see a lot of people, they call it the peptide boom, using these Chinese peptides of questionable provenance, trying to lose weight, trying to do all sorts of things, build muscle. And guess who's really a big proponent of this? The Secretary of Health and Human Services, a guy named Robert F. Kennedy Jr. Yep. He doesn't want you to get a vaccine, but inject yourself with peptides? You go for it.
Paris Martineau [01:53:39]:
And speaking of our favorite publication, the New Yorker had another great piece this week about peptides where they ordered a lot of peptides from these companies, like the gray market peptide companies. And one of the things they found is all the peptides I ordered from Swiss chems had significant issues. They wrote the vial of BPC 157 contained lead. The vial of TB 500 contained endotoxins, and the vial of CJC 1295 contained less than 42% of the advertised dose.
Leo Laporte [01:54:13]:
Are you going to do a Consumer Reports exposé on this?
Paris Martineau [01:54:20]:
I've been pitching it. I pitched literally in my application for this job that we should test all these gray market drugs for whatever.
Leo Laporte [01:54:26]:
I would love that.
Paris Martineau [01:54:27]:
It's slightly different than food safety testing, which is.
Jeff Jarvis [01:54:31]:
But that's all the Viagras I know.
Paris Martineau [01:54:33]:
It's, it's something that's, there's more of a set laboratory ecosystem for. So it's a bit more complicated to get large scale laboratory testing for these things. But yeah.
Jeff Jarvis [01:54:47]:
Can I have a fit about something on line 103?
Leo Laporte [01:54:50]:
Yes. And then we will go to our picks of the week. So go ahead.
Jeff Jarvis [01:54:53]:
So, I hate opinion polling. I despise it. I quote James Carey, the great late Columbia professor, saying that opinion polling preempts the public discourse it is intended to measure. It shows you nothing but the biases and worldviews and framing of the pollster. It's the ruin of democracy. But it can get even worse now, because now they're not even bothering talking to people.
Jeff Jarvis [01:55:16]:
Now they just create synthetic humans?
Leo Laporte [01:55:20]:
Oh no.
Jeff Jarvis [01:55:21]:
I've seen this happen all over.
Leo Laporte [01:55:23]:
Silicon sampling, they call it, because large language models can generate responses that emulate human answers. Polling companies see an opportunity, writes the New York Times, to use AI agents to simulate survey responses at a small fraction of the cost and time required for traditional polling. You could even make deepfakes of your respondents, before and after.
Jeff Jarvis [01:55:51]:
Wow. Just make it up at this point, right?
Leo Laporte [01:55:53]:
Why not just at this point? Just make it up.
Jeff Jarvis [01:55:56]:
Exactly. I screamed about this on the socials, and some nice person, who I can't quote right now because I don't know which social it was on, said this reminds me of an Asimov story. And indeed, Asimov wrote a story called Franchise in 1955, where a supercomputer, Multivac, determines the US presidential election outcome by questioning a single, randomly chosen citizen, Norman Muller, instead of holding a traditional vote. Set way in the future, in 2008, the story explores a future where technology.
Leo Laporte [01:56:26]:
Well, that explains how Obama got elected, if you ask me. Now I understand. All right. Okay. Well, yeah. What are you gonna do? Is this.
Leo Laporte [01:56:39]:
Is this gonna happen? Ipsos is doing it. They're working with Stanford.
Paris Martineau [01:56:44]:
That's crazy.
Jeff Jarvis [01:56:45]:
It is crazy. It's awful.
Leo Laporte [01:56:46]:
Gallup has partnered with a silicon sampler called Simile, aptly named, to create 1000 AI generated digital twins.
Jeff Jarvis [01:56:56]:
Well, I hate it when people start doing startups like this. This is a whole Silicon Valley thing. They create personas. You're creating a fake human being you've made up, and you've guessed what they need in life. And then you say, now we're gonna give her everything she needs, because we know what she needs, because we put it on a whiteboard.
Paris Martineau [01:57:13]:
Yeah. Anthony asks a great question in the chat: it's not just using AI to dial in the questions to get the response you want. You're polling AI and treating that as if those are responses from people.
Leo Laporte [01:57:28]:
Well, because it's a crazy average of all humans.
Paris Martineau [01:57:31]:
Oh, boy. Should we all do a big frowny face for. For the title.
Jeff Jarvis [01:57:41]:
Thank you. I can't get my mouth to do as well as you do. It's the beer.
Leo Laporte [01:57:45]:
Do you buy what Walter Lippmann said? The Times quotes Walter Lippmann's book Public Opinion, saying that humans form pictures, basically imaginary pictures, in their heads.
Paris Martineau [01:57:54]:
You don't form any pictures.
Leo Laporte [01:57:57]:
I don't. That's right. I don't have that problem. Pseudo-environments, which are not real, of the way things are in society. And that opinion polling can help fix those improper images by telling people.
Jeff Jarvis [01:58:12]:
Yes, it does.
Leo Laporte [01:58:13]:
What's real. Except, does it really?
Jeff Jarvis [01:58:16]:
Well, yeah. It becomes self-fulfilling. Why do people now, in opinion polls, say that they hate and fear AI, but their use of it is going up more than ever?
Leo Laporte [01:58:25]:
Right. It's the same reason people say they listen to public broadcasting.
Paris Martineau [01:58:29]:
I mean, I don't think that's the fault of the poll polls. I think that's because a lot of media exists that informs people about AI in a way that makes them upset.
Jeff Jarvis [01:58:39]:
Right. But then media goes and does a poll that says, see what we were saying? Everybody believes what we said, even though it is not borne out in truth. I've got one more real quick one. Did you see that Cloudflare created a successor to WordPress?
Leo Laporte [01:58:52]:
Yeah, we've been talking a little bit about.
Jeff Jarvis [01:58:55]:
Sorry. They, like, vibe coded it.
Paris Martineau [01:58:57]:
Yeah, yeah, Vibe Coded what?
Jeff Jarvis [01:58:59]:
Cloudflare.
Leo Laporte [01:58:59]:
Cloudflare wrote something called Em Dash, which is a secure version of WordPress. The problem with WordPress isn't so much WordPress itself; it's the plugins, because there's a huge ecosystem of plugins, often with weak or poor security, and there's a lot
Paris Martineau [01:59:14]:
What a name.
Leo Laporte [01:59:14]:
Yeah, Em Dash isn't that good a name. I think there's a second reason they did this: because they want to piss off Matt. Yeah, everybody's mad at Matt, and WordPress does control 40% of the Internet. But also because they want websites that are easy for AI to scrape and read, and I imagine that's the reason; that's the part of the business I wanted to understand. It's written in TypeScript, it's serverless, which is.
Leo Laporte [01:59:40]:
It's also good for Cloudflare, because you can push a button and have a website on Cloudflare. Very easy. Plugins are securely sandboxed; that's how they're hoping to fix this. Although Darren Oke has pointed out that any plugin that accesses the real world is going to be vulnerable, because you have to give it access to the real world through the sandbox. So you can't isolate a plugin that, in many cases, is doing anything of use. So it is questionable whether it is going to solve the security issue. Now, there's also some question of how long it'll be around.
Leo Laporte [02:00:15]:
WordPress has been around a long time and is well supported. And they made it compatible with WordPress without actually duplicating WordPress code; it was developed in a clean room, in effect. They licensed it under the MIT license, which is much more permissive than WordPress's license.
Jeff Jarvis [02:00:35]:
They pissed off Matt Mullenweg and they
Leo Laporte [02:00:37]:
pissed off Matt Mullenweg, which is probably half of the reason. So it's very simple to spin up. If you have Cloudflare, you go to the Cloudflare dashboard and you can deploy it yourself. And it'd be very easy for an AI to spin up a website for you as well. So I'm not against this. I think that's fine.
Jeff Jarvis [02:00:54]:
Then we can go do a poll of all those sites AI creates.
Leo Laporte [02:00:59]:
Yes.
Jeff Jarvis [02:00:59]:
Find out what the people really think.
Leo Laporte [02:01:02]:
Yeah. Pick of the week time. I will kick things off with a couple of fun little ones. You remember I showed you how you can get the peons in Warcraft to speak for you in Claude. This is fun. One of the things people have a problem with is that Claude Code uses tokens whenever it talks to you, even if it's just saying things like "here's what I worked on today" or "hello." Well, why not make Claude Code talk in caveman? This is a skill that cuts 65% of tokens by talking like a caveman.
Paris Martineau [02:01:42]:
Why use many word when few do tricks?
Leo Laporte [02:01:46]:
Exactly. Before: "The reason your React component is re-rendering is likely because you created a new object reference on each render cycle." After: "New object ref each render. Inline object prop equal new ref equal new render. Wrap in useMemo." Saves 50 tokens, just like that. Same for that bug in the auth middleware. But you can pick your level of grunt, which is nice. It even grunts in Chinese if you want. Those are the Chinese characters for caveman. Same answer.
Leo Laporte [02:02:20]:
You pick how many word. Anyway, it's on GitHub. I think it's funny. I'm not going to use it. I like talking to my Claude. It's on Julius Brousse's GitHub. It's called Caveman. And then this one, I think, maybe is a little obscure.
Leo Laporte [02:02:35]:
How would you like to build your own GPU? This is called MVIDIA: transistors to teraflops. "Welcome to MVIDIA. I know your resume said software engineer, but honestly, we need someone on the hardware side. Don't worry, you'll pick it up. Start with the basics." This would be a good way to kind of learn how GPUs work and how computers work. I don't understand it. I don't.
Leo Laporte [02:03:00]:
But as you build your own GPU, perhaps you will. That's all I have to say about that. It is at jaso1024.com/mvidia. And now, Paris, your thing.
Paris Martineau [02:03:16]:
I've got two very disparate picks today. The first is, someone pointed out to me on Bluesky that the New York City Department of Records and Information Services recently updated their archive, which means there's a bunch more cool old records in New York City history in there, such as New York Police Department Bertillon cards, which is just a bunch of old pictures of people who've been charged with grand larceny.
Leo Laporte [02:03:43]:
The hats.
Paris Martineau [02:03:45]:
They have great hats.
Leo Laporte [02:03:46]:
What the hell is going on with that?
Paris Martineau [02:03:48]:
I don't know, but we gotta figure it out and bring it back. There's honestly just like a lot of fascinating hats and haircuts going on.
Leo Laporte [02:03:59]:
It seems to be hats are really the big. The big thing here.
Paris Martineau [02:04:03]:
I mean, I think if you were wearing a hat, you were gonna be. Oh my. Go to Bessie Ross, at the bottom. She's got a huge hat. And she's got a side photo with no hat. And she appears to be dressed like a pink. A pirate. She's the very last one.
Leo Laporte [02:04:16]:
Well, she's clearly larcenous. Oh, yeah, there you go.
Paris Martineau [02:04:19]:
Shoplifting.
Leo Laporte [02:04:21]:
Shoplifting. She probably stole that hat.
Paris Martineau [02:04:24]:
She probably did. It was a great thing to steal. I don't know. There's great stuff, like stuff from WNYC radio in 1924. There's WNYC moving images.
Leo Laporte [02:04:37]:
WNYC's that old? 1924?
Paris Martineau [02:04:40]:
Really?
Leo Laporte [02:04:40]:
Wow. There's surveillance films.
Paris Martineau [02:04:43]:
Yeah, there's a lot of good stuff going on in here. So I don't know.
Leo Laporte [02:04:46]:
Black and white, 16 millimeter silent surveillance films.
Jeff Jarvis [02:04:51]:
Whoa.
Paris Martineau [02:04:52]:
This also has some of my favorite things: the tax department photographs. They're not new, but basically, you can look up any address in New York City and see what it looked like in the 40s and the 80s, because they had to take a photo of every building.
Jeff Jarvis [02:05:10]:
Do you know Salt Hank's address?
Paris Martineau [02:05:13]:
Don't say it on air, but you can.
Leo Laporte [02:05:14]:
No, no, I mean, it's a restaurant, not his home address.
Paris Martineau [02:05:18]:
Oh, the Salt Hank.
Jeff Jarvis [02:05:20]:
Now we can look that up. Yeah,
Leo Laporte [02:05:25]:
this is from 1940. Mr. Chairman, Mr. Toy, distinguished guest, Mr. Tolstoy. It's a stunning thing to meet the
Jeff Jarvis [02:05:34]:
survivors of the heroic group of Russians
Leo Laporte [02:05:36]:
who fought their way for four years. Okay, that's boring. A lot of it is parties and things.
Paris Martineau [02:05:44]:
I'll just say it's a lot of fun perusing these.
Leo Laporte [02:05:47]:
Yeah, no kidding. What the.
Paris Martineau [02:05:49]:
My other rec is: on Friday, a friend took me to a Nets game. It was my first time ever going to professional basketball.
Leo Laporte [02:05:55]:
Hey, how'd you like it?
Paris Martineau [02:05:57]:
I have to ask you guys. I think basketball is good? Basketball is kind of awesome. And I'm contemplating getting tickets to multiple Liberty games.
Jeff Jarvis [02:06:08]:
You should be a Knicks fan, though.
Paris Martineau [02:06:11]:
Honestly, I think my issue is I'd always seen basketball on TV, and everything's very small and it just doesn't really connect.
Leo Laporte [02:06:20]:
Yeah.
Paris Martineau [02:06:20]:
In person. What a physical sport.
Jeff Jarvis [02:06:22]:
They're Giants and they move so fast.
Paris Martineau [02:06:25]:
They're Giants and they move so fast.
Jeff Jarvis [02:06:27]:
That is.
Paris Martineau [02:06:28]:
I couldn't put it any better.
Leo Laporte [02:06:29]:
Nimble Giants. Yep.
Paris Martineau [02:06:32]:
It was electric. So, I don't know. Go see a basketball game would be my recommendation.
Jeff Jarvis [02:06:35]:
It would be a good time to be a Knicks fan, by the way, if. I mean, it's a good time to be a Knicks fan right now. So.
Paris Martineau [02:06:40]:
I mean, yeah, maybe I'll be a Knicks fan. Well, the thing is, teams that play at Barclays are really ideal for me, because I can just walk home from Barclays, which is the ideal way to see a basketball game.
Jeff Jarvis [02:06:51]:
The Nets aren't making the playoffs anytime soon. Sorry.
Leo Laporte [02:06:55]:
Doesn't matter. No, but I will be.
Paris Martineau [02:06:58]:
I'm deciding to get into the WNBA, though, I think.
Leo Laporte [02:07:01]:
I think that's better.
Jeff Jarvis [02:07:02]:
That's where the action is.
Paris Martineau [02:07:04]:
The Liberty are hot, and they play at Barclays, and I've heard that all the teams are dating each other, and I think that adds a great layer of drama that basketball needs.
Jeff Jarvis [02:07:14]:
Do people ask you, Paris, because you're tall, whether you played basketball in high school?
Paris Martineau [02:07:18]:
No, people did when I played in high school and middle school, but I thought
Jeff Jarvis [02:07:23]:
they would ask you.
Leo Laporte [02:07:24]:
No, you should never follow a team because they're going to win. You follow a team because you love them, and if they lose, it's even better, because then you're.
Paris Martineau [02:07:32]:
Oh, yeah. No, the Nets definitely lost on Friday when I saw them against Atlanta, and it was great regardless.
Leo Laporte [02:07:38]:
This is.
Jeff Jarvis [02:07:38]:
You're now a fan. That's part of being a New Yorker.
Leo Laporte [02:07:41]:
Yeah.
Paris Martineau [02:07:42]:
It's also. I know so little about sports, not for any particular reason; it's just there's other things to know. But it was the highlight of my life to go there with my friend, who's a huge. This is the same friend who explained baseball to me at the final game of the World Series, and I just had him explain basketball to me while we were there.
Paris Martineau [02:08:01]:
Delightful time.
Leo Laporte [02:08:02]:
Is this the same friend you took to get a tattoo?
Paris Martineau [02:08:05]:
Yeah, but I didn't get a tattoo.
Leo Laporte [02:08:07]:
You didn't know.
Paris Martineau [02:08:08]:
But I'm going to. I just wasn't able to. I was busy this weekend.
Leo Laporte [02:08:13]:
Okay. Here is, by the way, thanks to Darren, the picture of us as the caveman version of the show. Much shorter, fewer words.
Paris Martineau [02:08:23]:
Whoa, Jeff looks different.
Leo Laporte [02:08:29]:
I like my outfit. I want.
Paris Martineau [02:08:31]:
I do like the microphones.
Leo Laporte [02:08:33]:
Microphones. The bone microphones are fantastic. Jeff Jarvis, pick of the week.
Jeff Jarvis [02:08:38]:
All right, changing media times. It looks like QVC and company are going bankrupt and out of business. Where do I get my Capodimonte?
Leo Laporte [02:08:49]:
No kidding. How about my knives?
Jeff Jarvis [02:08:51]:
Internet. Oh.
Leo Laporte [02:08:52]:
Oh, no. How could you lose money with a home shopping network?
Jeff Jarvis [02:08:58]:
Well, because we have the Internet now because. Oh, so that's one.
Paris Martineau [02:09:02]:
How's the QVC for knives doing, though? How is Cutlery Corner doing, is my question. Do they still cut shoes in half?
Leo Laporte [02:09:11]:
Those costs can add up. That can add up.
Paris Martineau [02:09:14]:
Yeah, those costs can add up, yes.
Leo Laporte [02:09:17]:
QVC, HSN's cable networks: Chapter 11 insolvency. It may not have enough cash to continue operating. Wow.
Jeff Jarvis [02:09:25]:
Amazing, huh?
Paris Martineau [02:09:26]:
Wow.
Jeff Jarvis [02:09:27]:
So in other media news, the Associated Press is about to do a big layoff. Hold on. You know what QVC is now? TikTok is QVC.
Leo Laporte [02:09:35]:
Yeah, you're right.
Jeff Jarvis [02:09:36]:
That's the thing.
Leo Laporte [02:09:37]:
You're right. That's exactly what it is.
Paris Martineau [02:09:39]:
There's a man on Cutlery Corner right now with a TikTok e-boy mustache and little tiny tattoos selling something called a sling blade.
Leo Laporte [02:09:48]:
Oh, he calls it a sling blade.
Jeff Jarvis [02:09:51]:
I don't know what channel QVC is.
Leo Laporte [02:09:52]:
A great movie with Billy Bob Thornton. Wow. It was his first movie. Sling Blade.
Jeff Jarvis [02:09:57]:
Well, in more serious news, the Associated Press is doing a layoff focused on people serving the newspapers, because newspapers are now only 10% of the AP's business and going down rapidly, as newspaper companies are dropping the AP like crazy.
Leo Laporte [02:10:12]:
Well, and this is the AP, by the way, reporting on itself. What will happen to the AP?
Jeff Jarvis [02:10:20]:
It's selling to Kalshi. It's selling.
Leo Laporte [02:10:24]:
Yeah, that was another story: how the networks, including Fox, are all now in partnerships with these prediction markets.
Jeff Jarvis [02:10:35]:
Yeah. All right. And now, inspired by your effort to save tokens, Leo, and inspired by the Associated Press. Yes. I asked Gemini to remind me of cablese, the language that was used especially by journalists when they were charged by the word. So they would combine words. For example, my favorite was to "onpass" this rather than "pass on."
Leo Laporte [02:10:59]:
Why is on pass faster than pass on?
Jeff Jarvis [02:11:01]:
Because it's one word instead of two. You were charged by the word. Ah, there's your tokens. Others...
Paris Martineau [02:11:06]:
Who's charging you by the word?
Jeff Jarvis [02:11:09]:
The Western Union, especially if you were overseas. "Downhold" was to hold down. "Offput" was to put off. "Inphone" was to phone a story in to the desk. "Outcheck" was to check out. "Upstick" was to move or get ready to leave. Then there was the Latin prefix system, which I didn't realize.
Jeff Jarvis [02:11:32]:
"Cum." No jokes.
Leo Laporte [02:11:34]:
Now.
Jeff Jarvis [02:11:34]:
C-U-M was used for "with," as in "cumbicycle": with a bicycle. If you said "ex London," you meant "from London." If you said. Oh, sorry.
Jeff Jarvis [02:11:45]:
"Et" was used for "and." So "etwife": and his wife. "Sans," "pro," "ante." "Unproceed": do not proceed. "Unnews": no news at this time. I like that. That's kind of. You know, certain weeks here could be unnews weeks.
Leo Laporte [02:11:59]:
It's an unnews week.
Paris Martineau [02:12:00]:
Week.
Jeff Jarvis [02:12:00]:
"Unfind": could not find. "Unfire": do not fire, when a reporter was reinstated. And then we had words like "lead" spelled "lede."
Leo Laporte [02:12:12]:
Oh, that's where that came from.
Jeff Jarvis [02:12:14]:
No, that was just to make sure it wasn't mistaken for regular text. "Lede," and "TK" for "to come," with a K, were spelled that way so that you could use them in text but they would be spotted.
Leo Laporte [02:12:21]:
You wouldn't find it otherwise. Right, right.
Jeff Jarvis [02:12:24]:
So, yeah, just a little bit. So that's. Maybe we need cablese for AI, to save tokens
Leo Laporte [02:12:32]:
maybe.
Jeff Jarvis [02:12:33]:
That was an exciting way to end the show, wasn't it?
Leo Laporte [02:12:36]:
Oh, what? Oh, yeah. Thank you, Jeff. I appreciate that.
Jeff Jarvis [02:12:39]:
I tried.
Leo Laporte [02:12:41]:
We have a lot of fun on this show, and sometimes we don't. We do Intelligent Machines every Wednesday, 2pm Pacific, 5pm Eastern, 2100 UTC. I hope you will come by and watch us. We do it live for you, which is fun for us, because then we can see you in the chat room. You can watch, if you're in the club, in the Club Twit
Leo Laporte [02:13:04]:
Discord, or on YouTube for everybody; Twitch, X.com, Facebook, LinkedIn, or Kick after the fact. Get shows at our website, twit.tv/im. There's a YouTube channel dedicated to the audio. Or video, rather. Well, actually, there's audio, too. It's both. It's a new format. It's called audio and video.
Leo Laporte [02:13:22]:
Or subscribe.
Paris Martineau [02:13:23]:
Check it out.
Leo Laporte [02:13:24]:
Check it out, man. Subscribe in your favorite podcast player. Outcheck it. Well, I can't think of anything more brief than that. I should just say that we're done. Outcheck it.
Leo Laporte [02:13:36]:
Bye. I'm not a human being.
Paris Martineau [02:13:41]:
Not into this animal scene. I'm an intelligent machine.