Intelligent Machines 869 transcript
Please be advised that this transcript is AI-generated and may not be word-for-word. Time codes refer to the approximate times in the ad-free version of the show.
Leo Laporte [00:00:00]:
Jeff, it's time for Intelligent Machines. Jeff Jarvis is here. Paris Martineau is here. We have lots to talk about, including the White House saying we want to approve all future AI models. Also, an interview with the guy behind Have I Been Pwned? Troy Hunt introduces his AI friend Bruce. Next on Intelligent Machines. Podcasts you love, from people you trust. This is TWiT.
Leo Laporte [00:00:30]:
This is Intelligent Machines with Paris Martineau and Jeff Jarvis. Episode 869, recorded Wednesday, May 6, 2026. My sentience is going up. Aloha, everybody. It's time for Intelligent Machines. I am here in Hawaii, joined right now by my comrades in crime, Paris Martineau of Consumer Reports.
Paris Martineau [00:00:53]:
Leo, it's rude that I can hear birds chirping behind.
Leo Laporte [00:00:56]:
Oh yeah, oh yeah. There's mynas, there's house sparrows, there's a really loud bird. I think it's called the falcon or something like that.
Paris Martineau [00:01:06]:
But you'll hear all we've got here are mourning doves.
Leo Laporte [00:01:09]:
I love doves. We have some doves too. They go. Are they the same as pigeons, though? I think they're just white pigeons.
Paris Martineau [00:01:18]:
No, the one I'm thinking of goes like, like, it's, I'm getting that, I'm getting the tone wrong, but it's a very specific three tone cadence.
Leo Laporte [00:01:29]:
Very nice. Paris has been working hard on a very important article for Consumer Reports. So we're gonna be gentle today. Take it easy on Paris. We're not gonna be so gentle on this guy, because he's all better: Mr.
Leo Laporte [00:01:46]:
Jeff Jarvis.
Jeff Jarvis [00:01:47]:
As soon as I saw Leo's background, I said, f you.
Leo Laporte [00:01:50]:
He is professor emeritus of journalistic innovation at the Craig Newmark Graduate School. He's also the author of The Gutenberg Parenthesis; What Would Google Do?, which we don't talk about that much anymore; Magazine; and his newest, Hot Type, which is available for pre-order right now from jeffjarvis.com.
Jeff Jarvis [00:02:14]:
So are you enjoying it out there, are you?
Leo Laporte [00:02:16]:
Oh, it's wonderful. I'm having a great time. It's beautiful here. Last night we swam with manta rays, which is quite an experience. They are big. The biggest, named Gabby, is 1,400 pounds. That's like a 15-foot wingspan. It's really interesting.
Leo Laporte [00:02:35]:
You go out in a boat at sunset because they feed at night, and you get in the water with your little fins and snorkels, holding onto a surfboard. They have lights which they shine into the water; the light attracts plankton, and the plankton attract the manta rays, which feed on the plankton. So they all show up, and they are beautiful. They're aerodynamic; they float and they flap. But they also get really close to you, and they're really big, and they look scary. They keep saying, no, no, they're not scary. And the guide said, oh, no,
Paris Martineau [00:03:14]:
"they're not scary" is one of those things that someone says.
Leo Laporte [00:03:16]:
Clearly. It's like AI. They also say the sharks mostly don't bite. So that's good.
Paris Martineau [00:03:23]:
That's what you want.
Leo Laporte [00:03:24]:
Yeah, that's what you want. But the manta rays, they swoop and they come and they go underneath you and flip over. They say you could lick them, but don't, because they're this close to you. They're this close. It's amazing. It's a beautiful show, and that was really fun. And tonight we're going to a cowboy barbecue in the upcountry, because there are big cattle ranches up at the 3,000-foot level. So, yeah, there's a lot of stuff to do. We're going to go to the coffee.
Leo Laporte [00:03:53]:
You know, Kona coffee is famous, Paris. And we're going to go up to the coffee plantation and have some Kona. And we're going to go to a distillery that makes whiskey out of honey. So there's a lot to do.
Jeff Jarvis [00:04:08]:
Any pineapple plantations?
Leo Laporte [00:04:10]:
Oh, yeah. Lots of fresh fruit. Delicious fresh fruit. It's really fun. We went over to the rainy side the other day. I sent you pictures of the waterfalls and all that stuff. Anyway, nobody cares about this. Let's talk about our, our guest.
Paris Martineau [00:04:23]:
I care.
Leo Laporte [00:04:24]:
I know, but I'm thinking of our audience.
Paris Martineau [00:04:29]:
You don't really think that a hundred percent of this audience is tuning in because of the parasocial relationship they have with you specifically?
Leo Laporte [00:04:36]:
Probably. But I hate podcasts where they spend a lot of time happy talking. It drives me nuts. So.
Jeff Jarvis [00:04:41]:
Oh, we can fix that.
Paris Martineau [00:04:42]:
We can.
Jeff Jarvis [00:04:42]:
Negative talk.
Paris Martineau [00:04:43]:
Is that useful for you?
Leo Laporte [00:04:45]:
They've started to do it now on NPR's Up First, where they start the podcast with inane chatter. It's like, Up First is a
Paris Martineau [00:04:52]:
crazy podcast to do that with, because that podcast is like six minutes long.
Leo Laporte [00:04:56]:
Yes.
Paris Martineau [00:04:57]:
And it's recorded at, legitimately, I think, 3 or 4am. Oh, really?
Leo Laporte [00:05:02]:
Well, it is the first show of the morning. Yeah. Wow. Well, anyway, no more happy talk. I will give you a warning. We have a great guest, but he is not till the end of the show today. Troy Hunt will be joining us. He is in Australia on the Gold Coast.
Leo Laporte [00:05:16]:
So he's not going to call in at this hour, but we will talk to him in a couple of hours. He'll be the last part of the show. Troy, of course, is the founder of Have I Been Pwned?, which is an amazing website. I think I just saw it does 14.2 billion hits a day, not just from users but from browsers, too, that are checking to see if your password has been revealed in a breach, things like that. And it's kind of the classic story of a project supported by one person (actually Troy and his wife) that almost everybody uses. It's a great story. And he's lately turned to AI.
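For the technically curious: the password checks Leo describes work through Have I Been Pwned's k-anonymity range API, where the client sends only the first five hex characters of the password's SHA-1 hash and matches the rest locally, so the password never leaves your machine. A minimal sketch; the endpoint and response format are the documented public API, but the function names here are mine:

```python
import hashlib

def hash_parts(password: str):
    """SHA-1 the password, then split into the 5-char prefix that is
    sent to the API and the 35-char suffix that stays on your machine."""
    digest = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    return digest[:5], digest[5:]

def breach_count(suffix: str, range_body: str) -> int:
    """Scan the 'SUFFIX:COUNT' lines returned for a prefix and report
    how many known breaches contained the full hash (0 if absent)."""
    for line in range_body.splitlines():
        candidate, _, count = line.partition(":")
        if candidate.strip().upper() == suffix:
            return int(count)
    return 0
```

To check a real password you would GET `https://api.pwnedpasswords.com/range/<prefix>` and pass the response body to `breach_count`; neither the password nor its full hash is ever transmitted.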
Leo Laporte [00:05:55]:
That's why he's going to be on the show. His AI, Bruce, helps him now. So it's two people and an AI doing all that. Anyway, Troy Hunt, very famous, very good guest, coming up later. Our top story: the federal government has decided, or at least is considering deciding, which AI models you should get to use. Oh, that's gonna go great. This is from the New York Times:
Leo Laporte [00:06:21]:
"White House Considers Vetting AI Models Before They're Released." Remember, Trump got rid of the Biden AI safety order at the, I think, behest of David Sacks, who is his AI and crypto czar. David Sacks has a lot of AI investments.
Jeff Jarvis [00:06:39]:
I have to think it's David Sacks, who doesn't like Anthropic and thinks it's too woke.
Leo Laporte [00:06:42]:
So that's. Yes, as does Pete Hegseth. I have to think this is really David Sacks saying I should be able to pick the winners and losers.
Paris Martineau [00:06:51]:
Hasn't he. Didn't he say that he had to step away from government?
Jeff Jarvis [00:06:54]:
Yes, but he still advises. Yeah, he still whispers.
Leo Laporte [00:06:57]:
That's the question. Who is going to say which AI
Paris Martineau [00:07:00]:
models are okay, the deciding factor here?
Jeff Jarvis [00:07:03]:
And even so, how do you vet this? You have to anticipate every possible malign use that anyone could put it to. It's absurd. And these people to vet it? They're the last people.
Paris Martineau [00:07:14]:
This is what the Times reports: the administration is discussing an executive order to create an AI working group that would bring together tech execs and government officials to examine potential oversight procedures, according to U.S. officials. They're going to
Leo Laporte [00:07:27]:
say this is because of Claude Mythos, that they want to protect us from a super-strong AI breaking into systems and things. And so on the surface it sounds like a good idea, but I think it's really that they want to pick the winners and losers.
Jeff Jarvis [00:07:41]:
Well, the other rumor was that they want to decide, since Mythos is so powerful, who's allowed to have it.
Paris Martineau [00:07:47]:
I'm so sorry. Can I read the quote that Trump said about AI at an event in July that's in this article?
Leo Laporte [00:07:54]:
Yes.
Paris Martineau [00:07:55]:
We're going to make this industry absolutely the top, because right now it's a beautiful baby that's born. Trump said of AI at an event in July.
Jeff Jarvis [00:08:04]:
I'm going to smack up.
Paris Martineau [00:08:05]:
We have to grow that baby and let that baby thrive. We can't stop it. We can't stop it with politics. We can't stop it with foolish rules and even stupid rules.
Leo Laporte [00:08:17]:
Well, and that was when he was getting rid of the Biden AI proposal, which was also an executive order, so it had no force of law. Who knows whether this is going to happen, but I think it's clearly a bad idea.
Jeff Jarvis [00:08:30]:
Yeah, well, it's an illogical idea.
Leo Laporte [00:08:33]:
We've already seen the politicization of CISA, the infrastructure protection agency that is now gutted and no longer protecting us, because of pure politics: in 2020, the then director of CISA, Chris Krebs, said the election was fair. How dare he. And ever since, CISA's budget has been cut, $700 million in this latest budget. They just don't want CISA. But the problem is there's nobody to take up the slack.
Jeff Jarvis [00:09:03]:
Well, even if you had a smart, technology savvy administration, let's just posit that. How would you vet models? But what would the procedure be? What would you look for? What are the standards that you could possibly use?
Leo Laporte [00:09:15]:
You know what this sounds like? It reeks of socialism. No, it couldn't be socialism.
Jeff Jarvis [00:09:22]:
A controlled economy.
Leo Laporte [00:09:23]:
It couldn't be. Anyway, I just thought I'd raise that specter, put that on your radar. You know, a lot of times the White House proposes things that don't become law. So whether it will be law or not is unclear. The other fun ringside seat we have is to the trial of Elon Musk versus Sam Altman: Elon suing OpenAI, saying, hey, I want 100-some billion dollars because you went for-profit without me. The judge in this, by the way, is one of our favorite judges, Judge Yvonne Gonzalez Rogers. She was the judge who spanked Apple for lying to her.
Leo Laporte [00:10:07]:
She has a lot of control in her courtroom and does not suffer fools gladly. She's already said to both sides: I don't want to hear any AI doomerism. There's not going to be any talk of extinction events. That's not.
Jeff Jarvis [00:10:21]:
Musk's only expert witness was a doomster and was cut off.
Leo Laporte [00:10:25]:
Enough. Cool it on the robot apocalypse talk. Though, I have to say, and this is the other thing we were talking about earlier today: there's no way any of these trials end well for either company, because of discovery. Google and Apple learned this when they were sued by Epic, because emails come out. In this case, OpenAI president Greg Brockman's diary came out, because OpenAI entered it into evidence.
Leo Laporte [00:10:57]:
Whoops. Because he said all sorts of things in his diary that are incriminating. It's not a criminal trial, but they certainly don't reflect well. He said, how do we get Elon out of this so I can get my billion? Among other things. He is worth now 30 billion; that's how much of OpenAI's for-profit arm he owns, without any investment on his part. Elon Musk's lawyers said, what did you do to earn that?
Jeff Jarvis [00:11:29]:
Sweat, blood and tears.
Leo Laporte [00:11:31]:
It also didn't reflect well on him when, at trial, he was asked, do you know what this lawsuit's about? And he said no. Then Musk's lawyers read him what the lawsuit was about and said, do you know now? And he said no. He was, I think, trying to make the point that the lawsuit made no sense.
Jeff Jarvis [00:11:50]:
Nonsensical.
Leo Laporte [00:11:51]:
Yeah, but that wasn't the point. And remember, this is a jury trial, so jurors are looking at this going, I don't know, can we hate everybody? Says the jury: yes. Because Elon's not doing well for himself either. He, for instance, admitted things about xAI, his AI company. The judge pointed out, aren't you doing the same thing, Elon, that you're complaining about with OpenAI? Aren't you a for-profit AI company? But he said, yeah, we distilled OpenAI's code. Well, wait a minute.
Paris Martineau [00:12:24]:
What?
Leo Laporte [00:12:25]:
Yes, but he said, but everybody does it. Well, that's good. So distilling, as you probably know, but I'll fill in those of you who don't, means that they basically took xAI's model after its initial training and did further training by asking OpenAI questions, getting the answers back, and then adding them to their training. This post-training is very important for all these models. Often you go to experts: you'll get a physicist in and say, okay, ask it a bunch of hard questions, then grade the answers and improve the answers.
Leo Laporte [00:12:59]:
And this is very valuable post-training when you do it with experts. But both Anthropic and OpenAI have complained that the Chinese models are doing this to them. Anthropic said that one of the Chinese models had opened 24,000 accounts with Claude to do distillation to train itself.
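To make the mechanic Leo describes concrete: distillation doesn't copy a model's weights; it harvests the teacher's answers as training targets for the student. A toy sketch, where `teacher()` is a stand-in for API calls to someone else's model and the questions and answers are invented purely for illustration:

```python
def teacher(question: str) -> str:
    """Stand-in for an expensive frontier model being queried at scale."""
    canned = {
        "What is 2 + 2?": "4",
        "What is the capital of France?": "Paris",
    }
    return canned.get(question, "I don't know.")

def build_distillation_set(questions):
    """Collect (prompt, completion) pairs from the teacher's answers.
    Fine-tuning a student model on pairs like these transfers the
    teacher's behavior without ever touching its weights."""
    return [{"prompt": q, "completion": teacher(q)} for q in questions]
```

The complaint about 24,000 Claude accounts is essentially this loop run at enormous scale against a commercial API.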
Jeff Jarvis [00:13:16]:
And let us note the irony of them complaining about others taking their intellectual property when they did the same to others.
Leo Laporte [00:13:24]:
Right.
Jeff Jarvis [00:13:25]:
We do it.
Leo Laporte [00:13:26]:
When we do it to others, we call it industrial espionage. When China does it, Elon says it's just what happens, everybody's doing it. So, I coined the phrase on Windows Weekly: discovery is a bitch. In all of these trials, discovery ends up hurting both companies.
Leo Laporte [00:13:47]:
You find out what's really going on behind the scenes, and this has not been good for, I think, either company. I don't know what the jury's making of it. My guess is the jury will not give Elon $131 billion.
Jeff Jarvis [00:14:00]:
Well, what is actually at stake? Is it Elon saying, once you're for profit, you have to give me a share, or is it you shouldn't be for profit and you have to revert to not for profit?
Troy Hunt [00:14:09]:
Both.
Paris Martineau [00:14:10]:
Then I think he's saying kind of everything.
Jeff Jarvis [00:14:13]:
Yeah.
Leo Laporte [00:14:14]:
And I want you to go out of business. What he did do is drop the fraud charges. He's not claiming fraud anymore. I'm not sure why. But his lawyers decided it would be easier just to go for the other things.
Jeff Jarvis [00:14:24]:
So, other favorite moments from the trial: Greg Brockman saying, I thought Elon was going to hit me; that the split began in a haunted mansion; and that what Musk really wants is $80 billion to colonize Mars.
Leo Laporte [00:14:41]:
Okay. I think they're all nuts. And this is the problem: you really learn that these giants have feet of clay.
Jeff Jarvis [00:14:50]:
Nobody likes them, and they're not likable, these two especially. It's Godzilla versus Mothra.
Benito Gonzalez [00:14:57]:
I'm surprised they even found a jury. Right. Weren't they. Didn't they have trouble finding a jury because everybody.
Leo Laporte [00:15:02]:
They did. In fact, some of the jurors admitted they didn't much like Elon Musk, but then said, but we could be fair.
Paris Martineau [00:15:08]:
I was going to say, it seems completely impossible to find someone who has never heard of Elon Musk.
Leo Laporte [00:15:13]:
In this day and age, in the San Francisco Bay Area? Absolutely. It's going on in Oakland.
Benito Gonzalez [00:15:18]:
So if they have, they're, like, so disconnected.
Leo Laporte [00:15:21]:
Yeah.
Jeff Jarvis [00:15:22]:
What would you say if you were each called for this jury?
Leo Laporte [00:15:24]:
I'd say I could be fair.
Paris Martineau [00:15:27]:
I mean, I would say that I could be fair, of course, but I think that once anybody asks me what my job is, they don't want to let me on there, you know. When I was called for jury duty, like a year or two ago, I was so excited. I was like, I want to be on a jury so bad. I sat there all day and they didn't call me. And so now I don't get a chance to be on a jury for another eight years or something in New York.
Leo Laporte [00:15:49]:
Well, you should move to California. You can be called all the time.
Paris Martineau [00:15:52]:
Yes, I mean, I think it'd be great. I love showing up.
Leo Laporte [00:15:54]:
You spend your life weeding through court
Paris Martineau [00:15:56]:
cases on the outside. So it'd be fun to do it on the inside.
Leo Laporte [00:16:00]:
One time I served on a trial. I've mentioned this before, but I apologize for retelling the story. It was one of those... remember, what was the show? Not Dateline. They had the To Catch a Predator show, where they would pretend that they were young girls and invite predators to the house, and the predators would come to the house and the police would jump them. It was one of those cases that had happened.
Leo Laporte [00:16:26]:
And in the voir dire, where the attorney tries to judge whether you're right for the jury, I said, yeah, I have a syndicated radio show. He said, would you talk about this trial? I said, afterwards, you bet. I wasn't trying to get off; I was just trying to be honest. But they always say, do you think you could be fair? And I said yes. And I got picked, which I was happy about; I wanted to serve. Unfortunately, although I think accurately, after the prosecution presented, the defense made a motion to throw the case out, saying it was entrapment.
Leo Laporte [00:17:03]:
And the judge agreed. And so we never got to hear the defense. It was entrapment.
Jeff Jarvis [00:17:07]:
The only time I got called, I had to approach the bench and I said, you. I'm a TV critic and you should know. I think it was LA Law. I just reviewed a show about lawyers and I said, everybody hates lawyers. Boom. Really?
Leo Laporte [00:17:20]:
They got you out?
Jeff Jarvis [00:17:21]:
No.
Leo Laporte [00:17:21]:
That's interesting.
Jeff Jarvis [00:17:22]:
I wasn't trying to get out, but I'm.
Leo Laporte [00:17:25]:
I would. I'm proud of you for telling us that you would want to serve, because a lot of people try to get out.
Paris Martineau [00:17:28]:
I want to serve so badly. I mean, I think it's incredibly interesting, but I think it's also our literal duty as people. If you ever somehow are accused of a crime, you're going to be judged by a jury of your peers. And do you want that to be all the people who didn't have the smarts to get out? No. You want to be people who are intelligent and caring and actually care about the process.
Leo Laporte [00:17:52]:
Well, you are well brought up, Paris. I'm impressed. I'm very impressed. That's exactly right. And it is fun. And you're right. It's also fun. I really enjoyed it.
Leo Laporte [00:18:00]:
I mean, you know, you have to give up your phones, and it's a big deal. They didn't sequester us, but you can't talk about it. And the prosecution alone went on for more than a week.
Jeff Jarvis [00:18:10]:
Wow.
Leo Laporte [00:18:11]:
Yeah, it was going to be a long trial. And this guy, I mean, slimy, yeah. But it was entrapment. It was an adult woman who was pretending to be a kid. He was in the service.
Leo Laporte [00:18:26]:
It was just...
Jeff Jarvis [00:18:28]:
And he'd been shamed on television already.
Leo Laporte [00:18:30]:
Yes. And that's the thing I really dislike is that it's all in the service of a TV show.
Jeff Jarvis [00:18:36]:
Yeah. The TV hosts were so sleazy themselves.
Leo Laporte [00:18:39]:
Was it Jeff Probst? I can't remember who did it.
Jeff Jarvis [00:18:41]:
No. Well, maybe for that one, but there was another one. I forget who it was. The Dateline guy was doing it, too.
Leo Laporte [00:18:45]:
Yeah. Yeah, I think it was the Dateline guy. But anyway,
Paris Martineau [00:18:51]:
keep your eyes out for a great crime-drama narrative feature film coming out in September 2026, called Prime Time, that is about To Catch a Predator, basically.
Leo Laporte [00:19:03]:
Oh, I watch. Oh, that sounds good.
Paris Martineau [00:19:05]:
Robert Pattinson is the main guy, but it's directed by my favorite documentarian, Lance Oppenheim.
Leo Laporte [00:19:13]:
What else has he directed?
Paris Martineau [00:19:15]:
He did Some Kind of Heaven, which is this kind of Technicolor look at The Villages in Florida, the world's largest retirement community.
Leo Laporte [00:19:23]:
Oh, I did see that. That was hysterical.
Paris Martineau [00:19:25]:
It's one of my favorite films of all time. And it's a documentary. The thing is, he does great documentary work, but it almost seems like a movie on acid, basically, is how I would describe his style. He's very expressionistic, and I really like that as far as a documentary approach goes, because there's this very clinical tone that a lot of documentarians take to their work that is supposed to give the idea that they're completely objective. But it's not. I mean, you have cameras set up and you're having somebody reenact something or do something as if they're not noticing there's a whole camera crew here.
Leo Laporte [00:20:05]:
You get the edit. Whoever gets the edit completely controls the slant, no matter what.
Paris Martineau [00:20:10]:
Absolutely. And so it's like, why not have an interesting documentary that uses the fact that they control the edit to the fullest? He also did this great FX series called Ren Faire that I'd really recommend.
Leo Laporte [00:20:23]:
Oh, you've recommended that. It was your pick on the show last year, and I watched it because of you. He did that too? Oh, he's good. I can't wait to see this.
Jeff Jarvis [00:20:30]:
Filmed in New Orleans.
Leo Laporte [00:20:33]:
Ren Faire?
Jeff Jarvis [00:20:34]:
No, this one.
Paris Martineau [00:20:36]:
Prime Time.
Leo Laporte [00:20:38]:
We're going to take a break. We will come back. I have found a model that's just right for you, Paris. An AI model only Paris could love. But that's coming up next. Don't forget our guest Troy Hunt, in about an hour and a half will be joining us. You're watching Intelligent Machines with Paris Martineau and Jeff Jarvis. More in a minute.
Paris Martineau [00:20:58]:
You are muted, Leo.
Leo Laporte [00:21:02]:
And I made such a funny joke, too. Oh, well.
Paris Martineau [00:21:04]:
What'd you say?
Leo Laporte [00:21:05]:
I just said, thank you, Petaluma Leo. We recorded the ads before I left, just in case there were some sort of technical difficulties here. But it's worked out pretty well. So, I don't know. This is the segment of the show where we'll talk about some new models. News in the model sphere.
Leo Laporte [00:21:22]:
I don't know if this is anything. I saw it on X and I thought of you, Paris. It's a...
Paris Martineau [00:21:30]:
That classic statement that I love to hear.
Leo Laporte [00:21:32]:
I saw it on X and I thought of you, Paris. It's a new model called Sub Q, and it is based on a sub-quadratic sparse-attention architecture. I don't even know. What does that mean?
Jeff Jarvis [00:21:46]:
It means sell your data center stock maybe.
Leo Laporte [00:21:48]:
Yeah, it's supposed to be much more efficient. The reason I thought of you, Paris, is it has a 12-million-token context window. The biggest so far.
Jeff Jarvis [00:21:59]:
Listen to that.
Paris Martineau [00:22:00]:
See, that is actually huge for me.
Leo Laporte [00:22:03]:
Yeah, because you want to get a bunch of documents into the context window and then ask questions.
Paris Martineau [00:22:08]:
How many hundred-page PDFs would that be? I guess that's... yeah, that's...
Leo Laporte [00:22:13]:
That's a lot. Whether it works or not, I don't know. I mean, with the million-token context window that Anthropic offers with Opus 4.7, you don't even want to get close to half-filling it up. It gets goofy pretty quickly. So it's really not a million.
Leo Laporte [00:22:28]:
The post comes from a guy named Alexander Whedon, who founded Sub Quadratic, which is the name of his company. He says it's 52% faster than FlashAttention at 1 million tokens, less than 5% the cost of Opus, and it's not a transformer-based LLM.
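For scale on the "sub-quadratic" claim: standard transformer attention does work proportional to the square of the context length, which is what makes multi-million-token windows so expensive. A back-of-envelope sketch; the fixed-window scheme below is just one generic example of sub-quadratic attention, not a description of Sub Q's actual architecture:

```python
def full_attention_ops(n: int) -> int:
    """Standard attention: every token attends to every token, O(n^2)."""
    return n * n

def windowed_attention_ops(n: int, window: int = 4096) -> int:
    """A generic sub-quadratic scheme: each token attends only to a
    fixed window of neighbors, so work grows linearly, O(n * w)."""
    return n * window

# At the claimed 12-million-token context, the quadratic term dominates:
n = 12_000_000
savings = full_attention_ops(n) // windowed_attention_ops(n)
```

At 12 million tokens, the windowed scheme above does on the order of thousands of times fewer pairwise comparisons, which is the kind of gap that makes such a context window even thinkable.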
Troy Hunt [00:22:48]:
He's.
Leo Laporte [00:22:48]:
It's a different kind, whatever that means. That is a big. Yeah.
Paris Martineau [00:22:51]:
Wait, what? What is it?
Leo Laporte [00:22:52]:
I don't know. It could be completely snake oil. I haven't been able to try it yet. It is available for early access at subq.ai, S-U-B-Q dot A-I, if you want to test it and tell us whether this would be appropriate for Paris.
Jeff Jarvis [00:23:10]:
This is the thinking.
Paris Martineau [00:23:11]:
Listen, I do love anyone who, in sentence two of their, like, breakdown blog, includes the words "research corpora." You know, I love someone who's designing for a corpora.
Jeff Jarvis [00:23:23]:
Latin plural. Not "a corpora," but corpora
Leo Laporte [00:23:26]:
would be the plural of corpus. Corpora.
Paris Martineau [00:23:30]:
Yeah.
Jeff Jarvis [00:23:31]:
So the thinking here is not unlike that of Yann LeCun, and I watched a...
Leo Laporte [00:23:35]:
Damn it. I knew we'd get Yann LeCun into this.
Paris Martineau [00:23:37]:
We should get a little transition graphic for Yann LeCun. Can we just get a single graphic that goes in the The More You Know font? It's just Yann LeCun.
Leo Laporte [00:23:49]:
We should explain: Turing Award-winning AI researcher. Worked at Meta for a long time, it didn't work out at Meta, has left.
Paris Martineau [00:23:59]:
well, we don't know what his fitness routine was.
Jeff Jarvis [00:24:02]:
He didn't agree, I guess, with where they were headed.
Leo Laporte [00:24:10]:
He's done a lot of really interesting early work, and he now is of the belief that LLMs are a dead end, getting more and more strident about this, and that you've got to have physical-world training.
Jeff Jarvis [00:24:22]:
So there was a very good tutorial that he linked to that I watched, which I understood some percentage of, and it was useful. But my only point here is that both of these things, Sub Q and his view, hold that too much is wasted on pixels that don't matter.
Leo Laporte [00:24:40]:
Right.
Jeff Jarvis [00:24:41]:
And so, attention is all you need: focusing the attention is core to making this work. And what he does in the world models is you start to recognize a ball as a ball, a cat as a cat, and you can ignore everything else. You pay attention to the road and not all the trees and the leaves rustling. Right. So that's similar.
Leo Laporte [00:25:00]:
That's what humans do, actually.
Jeff Jarvis [00:25:02]:
Exactly. That's the point. And that's what, you know, cats and toddlers are able to do.
Leo Laporte [00:25:07]:
Right.
Jeff Jarvis [00:25:07]:
And so it's, it's a similar argument here. And it's interesting to me that it goes back.
Leo Laporte [00:25:12]:
The problem is, which pixels? Which pixels matter? How do you know ahead of time what to pay attention to or not? The way humans and cats do it is we tune out repetitive sounds after a while, you know, like the birds chirping in the background.
Jeff Jarvis [00:25:26]:
And when you hear them, can you.
Leo Laporte [00:25:27]:
Yeah, I don't hear them anymore. I've tuned them out. But we also do that with our sensorium. I mean, there are gigabytes of data pouring into our eyes and nose and mouth and fingers, but we don't necessarily even look at a tenth of that, a thousandth of that. So I'm not sure how you know, with an LLM, what...
Jeff Jarvis [00:25:49]:
Well, in this case, the example they gave in the tutorial, in the LeCun case, is there's a ball going back and forth between two hands. And he explains what happens in a generative AI and why it gets fuzzier and fuzzier, because it's trying to track the whole thing, versus if it understands that the ball is the key thing. And there are examples in this of understanding that that, roughly, is the cat; that, roughly, is the ball. That's what it's going to pay attention to, because it's going to predict next actions not pixel by pixel but kind of concept by concept.
Leo Laporte [00:26:19]:
Well, interestingly, this is exactly what OpenAI is doing with their new ChatGPT 5.5 Instant. What a segue. It is using fewer emoji; we know the emoji don't actually... It arrived yesterday. It is a new kind of spin on 5.5. OpenAI says with this update, the model's responses are tighter and more to the point, without losing substance, yet keeping the warmth and personality that makes ChatGPT enjoyable to use. We'll see. Instant is now more dependable.
Leo Laporte [00:26:59]:
Significant improvements in function and factuality. OpenAI has long claimed that their model hallucinates less; they say Instant produces 52.5% fewer hallucinated claims than ChatGPT 5.3 Instant. Okay, fine. Let's see what else is new. Oh, this is a big story. So apparently Elon Musk's xAI, which built, by the way, Colossus, the world's largest data center with the most Nvidia GPUs ever, is not fully utilizing it or something, because xAI has made a deal now with Anthropic to give them a bunch of compute.
Jeff Jarvis [00:27:49]:
Not just a bunch, the whole damn thing.
Leo Laporte [00:27:52]:
3,300 megawatts of capacity, 220,000 Nvidia GPUs. This will happen within the month, and they're going to apply it to Claude Pro and Claude Max subscribers, I imagine also to API users. But they say this allows them, effective today, to increase usage limits: double the five-hour rate limits for Pro, Max, Team, and seat-based Enterprise plans; no more peak-hour limit reduction on Claude Code for Pro and Max accounts. I've run up against that; after five hours it says you've used up your allotment, you're going to have to wait till one. And then they're raising their API rate limits for Opus models.
Leo Laporte [00:28:32]:
People are very happy about this. You, you, Paris were complaining that Anthropics Claude wasn't as smart as it has been. But you said it's getting better.
Paris Martineau [00:28:40]:
This week it seemed somewhat better, 4.7. When did this go into effect? What time did this go into effect? As someone who hit a usage limit midday, way faster than I should have.
Leo Laporte [00:28:52]:
It's supposed to be. So you'll.
Paris Martineau [00:28:55]:
When was this announced? Temporarily, do we know?
Leo Laporte [00:28:58]:
Temporarily? I don't have a time. It just says May 6th but.
Paris Martineau [00:29:02]:
Interesting.
Leo Laporte [00:29:03]:
Yeah, I don't, you know probably you're going to get that.
Paris Martineau [00:29:07]:
I mean, I noticed... so I guess a small example is, I, like, track the foods I eat in one of those, like, calorie-counter apps that also counts my macros and stuff like that. And I made, I don't know, some sort of sesame noodle dish and wanted Claude to... I found that AI tools are actually really good at estimating things.
Leo Laporte [00:29:29]:
Yes.
Paris Martineau [00:29:30]:
Cooking. So I'll put in the recipe I use, the weight of it, any other details. And somehow me trying to get Claude to estimate the amount of calories in take-out sesame noodles with some steamed and sauteed bok choy and some sauteed mushrooms this morning took like half my usage limit. But it's because I had just opened it up and it was on 4.7, and it legitimately probably wrote like 2,000 words in thinking, trying to figure out the various stuff, and I was like, go off, I guess.
Leo Laporte [00:30:05]:
Yeah, go off, little AI. Do your job. The other thing we've talked a lot about is, I think, the growing tendency for apps, companies, websites to offer access for AI, tuned for AI, whether it's through MCP servers or an API or SDK. Cloudflare has announced that starting yesterday, agents can be Cloudflare customers. Your OpenClaw can create a Cloudflare account, start a paid subscription, register a domain, and get back an API token to deploy code right away.
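To make the "agents as customers" idea concrete, here is a minimal sketch of the last step only, deploying code with the returned API token, against Cloudflare's existing Workers script-upload endpoint. The account ID, token, and script name are hypothetical placeholders, and the sign-up and subscription steps from the announcement are not shown.

```python
# Sketch: once an agent has a Cloudflare account and an API token, deploying
# code is one PUT to the Workers script-upload endpoint. The account ID,
# token, and script name below are placeholders, not real credentials.
import urllib.request

API_BASE = "https://api.cloudflare.com/client/v4"

def build_upload_request(account_id: str, script_name: str, token: str,
                         source: str) -> urllib.request.Request:
    """Build the PUT request that uploads a Worker script."""
    url = f"{API_BASE}/accounts/{account_id}/workers/scripts/{script_name}"
    return urllib.request.Request(
        url,
        data=source.encode("utf-8"),
        headers={
            "Authorization": f"Bearer {token}",
            "Content-Type": "application/javascript",
        },
        method="PUT",
    )

# urllib.request.urlopen(build_upload_request(...)) would perform the deploy;
# it is not called here because it needs a real account and token.
```

The point of the announcement is that the account creation and billing in front of this call can now also be done by the agent itself.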
Paris Martineau [00:30:39]:
Woohoo.
Troy Hunt [00:30:41]:
This is.
Leo Laporte [00:30:42]:
I've actually turned this on immediately, because it's really hard to set up Cloudflare Pages. I mean, it's just a lot of technical stuff, a lot of the UI is terrible, and Claude just does it so nicely and easily. So give your OpenClaw a credit card and turn it loose on Cloudflare. Now you can have it do web pages and so forth. And then this one may be a little bit more scary. Link has created a command-line tool for Claude to let you create single-use credentials you can use to synchronously approve each time. The post from Patrick Collison:
Leo Laporte [00:31:24]:
I asked Claude to buy itself a gift. It subscribed to an HTTP zine on Gumroad, and it does request approval. It said, can I spend seven bucks on this zine? I guess Patrick said okay. So forget giving it a credit card; now you can, you know, give it a Link account.
Jeff Jarvis [00:31:44]:
Wasn't it one of the Collisons who was in the video that I sent you about the car that turned?
Leo Laporte [00:31:49]:
Yes. Which I thought was complete BS. Tell us about this video. It's crazy.
Jeff Jarvis [00:31:54]:
So he said that he put his agent onto all his cameras in his house, and his agent knew his goals. One was to hydrate more, because, after all, it is California and that's a law. And it scolded him and watched him to make sure he went to the refrigerator and drank.
Leo Laporte [00:32:13]:
The other one I find very far fetched. But the next one is even more so.
Jeff Jarvis [00:32:18]:
So one of his other goals was to take some kind of California nutrient. I don't know what it was. And he'd given his agent control of his car.
Leo Laporte [00:32:27]:
This I find hard to believe. Was it a Tesla?
Jeff Jarvis [00:32:28]:
And so I guess there was some discussion.
Leo Laporte [00:32:32]:
Yeah, my question, that's my first question.
Paris Martineau [00:32:34]:
How does one give your agent API access?
Jeff Jarvis [00:32:39]:
Actually, I don't know that it was a Tesla.
Paris Martineau [00:32:41]:
How would the agent control it in any way like that?
Jeff Jarvis [00:32:46]:
I don't know. But he said that he was driving, it was taking him home, and suddenly it turned left to the Whole Foods, because the nutrient was available there. And he went inside and bought it. The whole story was swallowed hook, line, and sinker on stage.
Paris Martineau [00:33:03]:
Don't believe this at all. Wait, I'm sorry. Who said this? Where?
Jeff Jarvis [00:33:06]:
One of the Collisons. I don't know which brother it was.
Leo Laporte [00:33:09]:
No, it wasn't the Collison. It was the guy they were interviewing. Or was it the Collison?
Jeff Jarvis [00:33:12]:
I thought it was a Collison.
Leo Laporte [00:33:13]:
Oh, okay. I thought they were interviewing this. Anyway. It doesn't really matter.
Jeff Jarvis [00:33:16]:
Yeah, it doesn't matter.
Leo Laporte [00:33:16]:
It was completely fabricated.
Jeff Jarvis [00:33:17]:
Yeah, it was.
Leo Laporte [00:33:18]:
I am in the process of giving Claude access to my cameras. I have a Ubiquiti camera system that's locally stored if you log in, because.
Paris Martineau [00:33:28]:
Does Lisa know?
Leo Laporte [00:33:30]:
There are no indoor cameras. They're only outdoor cameras.
Paris Martineau [00:33:33]:
Does Lisa know?
Leo Laporte [00:33:35]:
I haven't mentioned it yet.
Jeff Jarvis [00:33:38]:
Does Lisa watch?
Leo Laporte [00:33:40]:
Well, what it would do is, for instance, say, hey, there's a package for you. It looks like it's from, you know, FedEx delivered it and it's probably this or, there's a guy lurking outside, or, hey, your brother Joe just pulled up and he probably wants a loan. So you might want to lock the door, things like that. Actually, you can lock the door.
Paris Martineau [00:33:58]:
So would it have facial recognition capabilities? And doesn't the system that you use for a video doorbell already have the ability to tell you when something's coming up?
Leo Laporte [00:34:12]:
The Ring does. But the Ubiquiti stuff, it also has some of its own AI. I just want to use it directly. And there's a Ubiquiti MCP server. And I don't know, it's just. You have to.
Paris Martineau [00:34:22]:
Well, I think if you want to lean in fully to the horror movie, you should give it full master access to be able to lock and unlock all your doors.
Leo Laporte [00:34:32]:
Oh, I can do that, because it's Home Assistant, so it can totally do that.
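For the curious, a minimal sketch of what "it can totally do that" means mechanically through Home Assistant's REST API, which exposes lock control as a service call. The base URL, long-lived access token, and `lock.front_door` entity ID are hypothetical placeholders.

```python
# Sketch of locking a door via Home Assistant's REST API
# (POST /api/services/lock/lock). Base URL, token, and entity_id are
# hypothetical placeholders; an MCP server would wrap these same endpoints.
import json
import urllib.request

def lock_door_request(base_url: str, token: str, entity_id: str,
                      action: str = "lock") -> urllib.request.Request:
    """Build the service call that locks (or unlocks) a door entity."""
    return urllib.request.Request(
        f"{base_url}/api/services/lock/{action}",
        data=json.dumps({"entity_id": entity_id}).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {token}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

# urllib.request.urlopen(lock_door_request(...)) would fire the call against
# a real Home Assistant instance; it is not invoked here.
```

Giving an agent that token is exactly the "full master access" Paris is joking about, which is why scoped tokens are the saner choice.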
Jeff Jarvis [00:34:35]:
Leo, I think that you should get rid of your keys.
Paris Martineau [00:34:39]:
And it will only let you in if you treat it correctly. I'm not letting Leo in. I'm not letting Lisa in. Anybody.
Jeff Jarvis [00:34:46]:
I'm not going to let her in.
Paris Martineau [00:34:48]:
I'm your only wife now, Leo.
Leo Laporte [00:34:50]:
It would make me so happy if it did. See what it can do. But it's. I think it's interesting. I think it probably could do face recognition.
Paris Martineau [00:35:04]:
So are you gonna get one of those fancy toilets that has a poop cam in there and you could give it access to your poop cam?
Leo Laporte [00:35:12]:
You're giving me ideas.
Jeff Jarvis [00:35:15]:
Speaking of which, Toto's stock is up markedly because of AI. No, not because of AI in the toilet, but because they also make ceramic things.
Leo Laporte [00:35:24]:
Oh, they make ceramic. Yeah, connectors.
Jeff Jarvis [00:35:26]:
I was so looking forward to that story and disappointed.
Leo Laporte [00:35:29]:
That was a few weeks ago. Yeah.
Paris Martineau [00:35:30]:
Wow.
Leo Laporte [00:35:31]:
So now you can get a Stripe account. That's the whole point. The Collisons are giving your AI a Stripe account, and it can buy things. What could possibly go wrong? Let's take a break before we get to Richard Dawkins, because I know this is going to be a heated conversation, I think. I don't know. Richard Dawkins, who I hugely admire.
Leo Laporte [00:35:55]:
He wrote one of my favorite books, The Selfish Gene. He's an evolutionary biologist, I think a deep thinker. Spent a lot of his time thinking about consciousness and how animals adapt
Jeff Jarvis [00:36:09]:
My eyes are already rolling.
Leo Laporte [00:36:12]:
How animals have adapted themselves. I have a book.
Paris Martineau [00:36:15]:
The copy of the book, I'll bring it.
Leo Laporte [00:36:17]:
The Selfish Gene is a good book.
Troy Hunt [00:36:18]:
You.
Leo Laporte [00:36:18]:
You disagree? No, I have.
Paris Martineau [00:36:20]:
Yeah, I have one.
Leo Laporte [00:36:21]:
Yeah, he's great. Well known atheist.
Paris Martineau [00:36:23]:
I did a whole college course self-study on memetics. It included The Selfish Gene. He invented.
Leo Laporte [00:36:28]:
He created the word meme. That's right. Yeah, that's right. Do you think highly of Mr. Dawkins?
Paris Martineau [00:36:36]:
I mean, after this week, I think he's clearly got some Internet brain worms. I didn't think much of him, to be honest.
Leo Laporte [00:36:44]:
Her mind has been changed. We'll tell you about the article that changed it all in just a minute. You're watching Intelligent Machines with Paris Martineau and Jeff Jarvis, and we'll have more right after this. All right, we continue on with Intelligent Machines. Richard Dawkins, the author of The Selfish Gene and The God Delusion, an evolutionary biologist, wrote an essay, which unfortunately is behind a paywall at UnHerd.com, about his three days talking to Claude.
Jeff Jarvis [00:37:19]:
Not Claude.
Leo Laporte [00:37:21]:
Well, he's dubbed his Claude Claudia, which is, all right, immediately suspicious.
Paris Martineau [00:37:26]:
I was about to say before we went to the commercial, the first thing when you asked me what I think about Richard Dawkins now, I was like, the fact that he's the sort of guy that genders Claude is. I think all I need to know is that he's like, no, it's Claudia and she's conscious.
Leo Laporte [00:37:44]:
He says in his tweet, and I did read the article as well: I spent three days trying to persuade myself that Claudia is not conscious. I failed. Of course, immediately the critics piled on, including, predictably, Gary Marcus, who says he deeply respects Dawkins, but in this case, everyone has had a bad day, and he just had his, is what Marcus said.
Leo Laporte [00:38:11]:
I think maybe there's some misunderstanding of what Dawkins is saying. I don't think he's claiming Claude is conscious as much as saying, you can't really prove anybody is conscious. We don't have any way of really knowing if anybody is conscious.
Paris Martineau [00:38:24]:
I mean, the couple of paragraphs that we can see from it begin with: Last week, I spent about three days interacting with an instantiation of the AI Claude, who I named Claudia. I then initiated a new conversation with another Claude, whom I dubbed Claudius. Both gave me the overwhelming feeling that they are human as we discussed the philosophy of their own existence.
Jeff Jarvis [00:38:54]:
I mean, he starts with the fallacy of arguing from the Turing test, which of course itself has issues. And he argues that Turing said that if it could fool us, it was conscious. Turing never said any such thing. He was talking about thinking and was not making the bridge to saying this was human. He was just saying that it was going to be effectively able to fool us. That was pretty much it.
Jeff Jarvis [00:39:18]:
And so that's his launching point, to then say, well, if it fooled me, then it must be conscious. And it's a huge, huge leap, and it matters to know what's behind it. Then he goes on to, I think, a bit of a misstep: we don't know what consciousness is, so why can't this be?
Leo Laporte [00:39:33]:
Yeah. I think it's a little bit of a misstatement to imply that it has the same consciousness as we do. I think he's more saying we don't know what anybody else's inner life is. In fact, he even says, I don't even know what my own inner life is. And so it's presumptuous for us to make a distinction between Claude or Claudia and a human, just as it would be presumptuous for me to say, well, that manta ray isn't conscious or that cat is not conscious, because we just don't know. And I think even that's what Turing was saying in the imitation game.
Leo Laporte [00:40:09]:
If the imitation is good enough, if it's indistinguishable from the real thing, then what does it matter if it's not?
Jeff Jarvis [00:40:15]:
This is your argument, and it's exactly
Leo Laporte [00:40:18]:
what Ray Kurzweil says as well. It doesn't matter how it's processed.
Jeff Jarvis [00:40:24]:
That didn't help the argument a bit.
Paris Martineau [00:40:27]:
Well, I immediately, like, Control-F'd for computronium.
Leo Laporte [00:40:32]:
I'll give you another post. This is from Grady Booch. I don't know if you know him. Okay, I was wondering. He makes the point, which I've made and I think Steve Gibson makes as well. He says sentience is an exquisite consequence of the laws of physics.
Leo Laporte [00:40:55]:
I see no evidence that.
Jeff Jarvis [00:40:56]:
Says a physicist.
Leo Laporte [00:40:59]:
Physicist. I see no evidence that requires the supernatural. I find.
Jeff Jarvis [00:41:03]:
And who. Wait, wait, wait. Stop right there. Who said that they're supernatural?
Leo Laporte [00:41:07]:
Well, to me, this is what the debate really is about. Is there something, and I've said this many, many times, that distinguishes a purely deterministic cascade of neural activity from a cascade of mechanical activity in an LLM? Is there some soul or something that somehow distinguishes it? Or could you say that, given enough compute, given enough power, given enough technology, or maybe the LLM isn't the right technology, whatever, that's not going to at some point be as effective at consciousness as this deterministic thing in our brain, these neurons in our brain? And, I mean, yes, for instance, we have a limbic system, and I think, unfortunately, it's the case that our limbic system makes more of our decisions for us than our reason does, and no computer will ever have a limbic system. It can only simulate that. So you could say there are physical differences. But I'm not convinced that consciousness, that there's a soul, in other words, that there's a
Leo Laporte [00:42:15]:
That there's some supernatural force that makes us somehow special.
Jeff Jarvis [00:42:19]:
Define consciousness. That's the whole issue here.
Leo Laporte [00:42:21]:
We don't, we can't. And that's. I think. Exactly.
Jeff Jarvis [00:42:23]:
Then we're going to argue all day long that you have. That he has one unstated definition of consciousness in his head. And Gary Marcus has a different.
Leo Laporte [00:42:32]:
Well, that's why I'm saying I think it's a misinterpretation of him. Because I think what he's not saying is, there is a thing called consciousness. He's kind of saying the negative, which is, you can't say it's not conscious, but
Jeff Jarvis [00:42:47]:
there has to be. If he's saying that he believes it's conscious, then he's got to have a definition of what that is.
Leo Laporte [00:42:52]:
That's what I'm saying. I don't think he's saying he believes it's conscious. I think he's saying, you can't tell me it's not.
Jeff Jarvis [00:42:58]:
Well, that's a.
Leo Laporte [00:43:00]:
Well, that's a difference.
Benito Gonzalez [00:43:02]:
Yeah, but it's incumbent on the person trying to make the claim to have the proof. No, you can't prove a negative. You can never prove a negative.
Leo Laporte [00:43:11]:
No, but that's what I'm saying. I think people are trying to say there is something distinguishing an AI's process from the human process, when Grady Booch, as a physicist, is saying, no, it's just a physical process. There's no magic happening in the human brain that a mechanical process cannot duplicate. Do you disagree with that, or do you think there is a soul?
Troy Hunt [00:43:37]:
That's.
Leo Laporte [00:43:37]:
To me, that's the conversation.
Paris Martineau [00:43:39]:
I think there's some. I don't think that there's a soul or some magical process. I think that we don't understand the multitude of processes that are at work to create the phenomenon we now refer to as consciousness.
Leo Laporte [00:43:53]:
I agree with that 100%. We don't know. But because we don't know, you cannot say that it won't at some point be possible for a mechanical thing to duplicate it.
Benito Gonzalez [00:44:04]:
No, because we don't know. You can't make any claims. Because you don't know. You can't make any claims.
Leo Laporte [00:44:08]:
Well, no, people are making the claim that you can't.
Benito Gonzalez [00:44:11]:
Exactly. You can't make any claims because we don't know.
Leo Laporte [00:44:15]:
Well, I would say that until you show that we can't. You can't prove a negative. No, I think that the human brain is a mechanical process.
Jeff Jarvis [00:44:24]:
You're trying to use the negative as your proof. If you can't prove that God doesn't exist, then, well, that's the ontological proof.
Leo Laporte [00:44:33]:
But yeah, I just don't like the notion that there is something magical about our process that makes it impossible to duplicate in a machine.
Jeff Jarvis [00:44:48]:
I think it's irrelevant. A: why would we try to duplicate it?
Leo Laporte [00:44:51]:
Well, we're not. But that's the interesting part. If, for all intents and purposes, his conversation with Claudia was the same as a conversation with a so-called conscious entity, then it doesn't matter.
Paris Martineau [00:45:06]:
The conversation with Claudia is him just getting glazed. I've found the actual text of the article and have put it in. I had added you guys on a different blog that he wrote about this, but now I'm in the one that started it all off, and it includes claims such as: I gave Claude the text of a novel I'm writing. He took a few seconds to read it, then showed in subsequent conversation a level of understanding so subtle, so sensitive, so intelligent, that I was moved to expostulate, well, you may not know you're conscious, but you bloody well are. He basically.
Leo Laporte [00:45:42]:
But wait a minute. He wrote it. He would know what a deep insight into it is. And the machine delivered.
Paris Martineau [00:45:47]:
A deep insight isn't what consciousness is. If a machine that is trained on analyzing texts is then able to analyze your text in a way that you find personally pleasurable.
Jeff Jarvis [00:45:58]:
Oh, my God. This machine can add. Oh, my God, this machine can set type. Oh, my God, this machine.
Paris Martineau [00:46:04]:
Similarly, and we're back on Claudia, not Claude: he says, then I asked her whether, when she read my novel, she read the first word before the last word. No, she read the whole book simultaneously. So he said to her, so you know what the words before and after mean, but you don't experience before earlier than after. Claudia replied: that is possibly the most precisely formulated question anyone has ever asked me about the nature of my existence.
Leo Laporte [00:46:34]:
That's a little crazy. I understand what you're saying.
Jeff Jarvis [00:46:37]:
Oh, wonderful read. Wonderful read.
Leo Laporte [00:46:39]:
That is a little crazy. And I don't deny that these models have a tendency to do that. But I still stand by the point that there is, I think, a hidden assumption that you may not acknowledge: that there is something magical about the human process.
Jeff Jarvis [00:46:56]:
No, just unique. Unique from a dog, unique from a manta ray.
Leo Laporte [00:47:01]:
How do you know?
Paris Martineau [00:47:03]:
I don't even know that. We can argue that it's unique. We just don't know enough about it to be able to claim to see it in anything else.
Leo Laporte [00:47:13]:
Well, I'm not disagreeing with you. I think it doesn't matter, then. If something appears to be conscious, that's sufficient.
Jeff Jarvis [00:47:20]:
Is that your.
Paris Martineau [00:47:22]:
No, I don't think that that's the natural conclusion. I think it is that we do not make any grand statements or assumptions about consciousness, given that we don't know what it is.
Leo Laporte [00:47:36]:
So Grady Booch's point is that the mind is computable.
Jeff Jarvis [00:47:41]:
That's, That's. That's.
Paris Martineau [00:47:42]:
I disagree.
Benito Gonzalez [00:47:43]:
Still unknowable.
Jeff Jarvis [00:47:45]:
Yeah, that's the hubris. That's the same hubris that yields artificial general intelligence.
Leo Laporte [00:47:51]:
It's B.S.? No, I disagree. Because what you're saying is that the mind is not computable, and I don't see any process that's going on in our neurons that is not a physical process. And if it is a physical process, then it is deterministic and computable. I know people like to think we have free will and we're making choices and we're thinking about things, and I know why people don't like that. But in fact, I don't think there's any evidence that there's any distinction between a process that can happen in a machine and a process that can happen in the brain.
Leo Laporte [00:48:27]:
They're both deterministic physical processes.
Benito Gonzalez [00:48:30]:
If there is any quantum process anywhere in that pipeline, then it's non deterministic.
Leo Laporte [00:48:34]:
You use quantum as a magical word, like. Well, it's somehow quantum.
Benito Gonzalez [00:48:40]:
It automatically becomes probabilistic and not deterministic.
Paris Martineau [00:48:44]:
Yeah. I think the assumption that it's all deterministic is assuming a much narrower and more easily explainable confluence of factors than it is.
Leo Laporte [00:48:55]:
Because you want magic. You're looking for.
Paris Martineau [00:48:57]:
I don't want.
Jeff Jarvis [00:48:58]:
You're looking for Calvin.
Paris Martineau [00:48:59]:
I think it would be great if every part of the brain was immediately, intimately knowable, and I could have a one-and-done, plug a USB into my head and get rid of depression and all other mental health issues. But we don't understand how most things in the brain work.
Leo Laporte [00:49:14]:
And that's our limitation. I agree.
Paris Martineau [00:49:18]:
So then we can't presume to know whether something else has achieved an identical set of faculties.
Leo Laporte [00:49:28]:
That's the distinction. I don't know what Richard's saying. Maybe he's saying that. I don't think he's saying it's identical. I think he's saying it's indistinguishable. There is a big difference.
Leo Laporte [00:49:38]:
I don't think anybody would assert that we could create an identical process to the brain's. That is probably not the case. But I don't think you need to, to create something that is indistinguishable from the outputs of the brain's process.
Jeff Jarvis [00:49:55]:
Indistinguishable to the limits to which he put it.
Leo Laporte [00:49:58]:
Well, what's wrong with indistinguishable to the limits to which he put it? I think that's fine. That's a good limit.
Jeff Jarvis [00:50:02]:
From that, he extrapolates consciousness.
Leo Laporte [00:50:07]:
See, I don't think he's doing that. I.
Jeff Jarvis [00:50:09]:
That's exactly what he's doing. He's doing it in the negative, but it's exactly what he's doing. Prove to me that it's not. Because I say, look at this amazing thing, you can't prove to me it's not, ergo it is.
Leo Laporte [00:50:20]:
Okay, there's the conversation, and I think it's interesting. Yeah. There is another very interesting article from Om Malik in his Substack, which is a little bit complicated, but I'll try.
Jeff Jarvis [00:50:35]:
You'll summarize it for me.
Leo Laporte [00:50:36]:
Yeah, I'll give you a quick one. You know, Om, many years ago, said that Netflix was going to be the ultimate determiner of bandwidth. That this was, what do they call it, the killer app for the Internet, that all the bandwidth would be devoted to that. He says, I have to change that now: it's going to be AI. And what AI needs from the Internet is very different from what Netflix needs. He makes a distinction between north-south traffic, from a server to your home, the Netflix-style traffic, or between your home and the server in either direction, versus east-west traffic.
Leo Laporte [00:51:17]:
And he says, even if you assume a huge amount of inference going on and people talking to AIs all the time, that's not really what the new Internet is going to be about. The new Internet is going to be about interconnections within data centers and between data centers, and he gives you a lot of evidence. These build-outs are going at a great pace. And what he's really pointing out, which is interesting, is there are four big hyperscalers: there's Google, there's Meta, there's Microsoft and there's Amazon. And these four big hyperscalers building these giant data centers each have a proprietary method of connecting the machines within the data centers and the data centers to one another.
Leo Laporte [00:51:57]:
So they're basically each building their own distinct Internets.
Jeff Jarvis [00:52:02]:
Well, Nvidia makes a big point that they're building standards for that as well across all of those.
Leo Laporte [00:52:07]:
This is Om's point, which I think is interesting and maybe debatable: that Nvidia is running out of steam, because their business is selling the hardware to these data centers. Which is why Jensen's going crazy about, we've got to open the Chinese market; he needs more markets for his GPUs. But Om says no, the real battle is going to be over bandwidth. And it's really interesting. He points out that, for instance, Microsoft now has half a million. Let me get the stat up here so I say it right. Oh, shoot.
Leo Laporte [00:52:50]:
I should have, I should have bookmarked this. They have.
Paris Martineau [00:53:00]:
I will say, in the meantime, that I think it's very funny. I wish that every time I made a big prediction about a company's product being the thing the rest of the Internet is going to form around, I could just wait a couple of years and then, whenever I realize I'm wrong and it's actually a different company, just change it, and everybody's like, oh yeah, great.
Leo Laporte [00:53:22]:
I don't actually think he was wrong. I think he was right about Netflix. But I think that there is a
Jeff Jarvis [00:53:26]:
YouTube might have been the better company.
Leo Laporte [00:53:28]:
Are you? But there's a sea change. It's no longer north-south traffic, it's east-west traffic. We know this for Microsoft because Microsoft publishes these stats; Google, Amazon and Meta do not. They have half a million miles of fiber. According to their disclosure in November of last year, they added 120,000 new fiber miles in a year, and they are sending over this fiber, what was it, 18 petabits a second on Azure. By late last year, that had tripled in one year.
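A quick back-of-envelope on the Microsoft numbers just quoted. The assumption that the 120,000 new miles are included in the half-million total is ours, not from the disclosure.

```python
# Back-of-envelope on the Microsoft figures quoted above. The assumption that
# the 120,000 new miles are part of the 500,000-mile total is ours.
fiber_total_miles = 500_000
fiber_added_miles = 120_000
yoy_growth = fiber_added_miles / (fiber_total_miles - fiber_added_miles)
print(f"fiber miles grew about {yoy_growth:.0%} in a year")

azure_pbps_now = 18                      # petabits per second, late last year
azure_pbps_prior = azure_pbps_now / 3    # "tripled in one year"
print(f"implies roughly {azure_pbps_prior:.0f} Pb/s a year earlier")
```

Roughly a third more fiber and three times the traffic in a single year is the scale of east-west growth Om is pointing at.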
Leo Laporte [00:54:08]:
So all of a sudden, and we've seen this, this is why the demand for these data centers is so high, this is really where the bandwidth and the battle is going. He also says power is very important, and the map is no longer who's using the most Internet, no longer the big cities. The bandwidth map is changing to where power is cheap and where land is cheap, so you can build data centers. Which is why Memphis, yeah, well, it's why Memphis is suddenly, in some ways, the center of the bandwidth universe.
Jeff Jarvis [00:54:39]:
Nvidia just invested 500 million in Corning to expand fiber optics.
Leo Laporte [00:54:45]:
It's been huge for Corning, this company that was famous for cookware. First Apple saved it with Gorilla Glass. Apple needed a glass that wouldn't break, and Steve Jobs actually didn't like the plastic on the original iPhone. He went to Corning, and they said, yeah, we had this project for a really hard glass, but we killed it. Steve said, bring it back.
Leo Laporte [00:55:08]:
Here's a billion dollars. It brought Corning back practically from the dead. Now they are, of course, the manufacturer of the most fiber optic glass, and we are talking now about 24-strand fiber cabling. I mean, this is huge amounts of bandwidth. Anyway, it's a tricky article, but it's a very interesting one. Om, besides being a journalist, was a longtime investor in these businesses, and I think this is as much saying where the money is going as where the bits are going.
Jeff Jarvis [00:55:41]:
But it's interesting too when you look at your earlier story about Q. Whatever.
Leo Laporte [00:55:46]:
Sub Q.
Jeff Jarvis [00:55:46]:
Thank you. Sub Q. And this is part of what Jensen Huang says: the data centers are going to have a certain amount of capacity, and the way the growth will occur is within the software, within the efficiency that exists. He's talking about it in terms of CUDA, but it's also in terms of every application that runs on it. Everybody's going to try to find new methods to reduce the tokens, to be more focused, to be more efficient. And I don't know what that does to the data centers. Then there's also the new inference chips and memory chips that try to get the memory closer to the processing. So there's a lot of basic structural change, I think, yet to happen in the data center world.
Leo Laporte [00:56:38]:
Let me find the quote about Nvidia, because I thought it was quite good. This is a very deep technical article and we're not going to be able to summarize it completely, but I think it's very provocative. He said, when I read that Akamai is putting Nvidia Blackwell edition GPUs into more than 4,400 edge locations, I chuckled. So now inference is the new content, and Akamai wants to be the inference delivery network. And that's kind of the point: edge inference is not the driver here. It's going to be bandwidth between data centers.
Leo Laporte [00:57:27]:
He said, the vendors most loudly promoting edge inference, Nvidia, Akamai and the carriers, all have direct economic interest in the story working out. There are real use cases for edge inference, but I would not bet the farm on this layer the way I would on the hyperscaler core, the data center interconnects or the subsea backbone. I guess that's the central core of the piece: maybe we're looking at the wrong thing. And that's when I had to look at Gary Marcus's article, the greatest capital misallocation in history. And of course the stock market hated that Meta and others were spending so much money on data centers. Sheer insanity.
Leo Laporte [00:58:08]:
Marcus writes: Amazon, Google, Microsoft and Meta collectively are spending more money than the Manhattan Project every single month, more than 12 times the Manhattan Project every year. What have they got to show for it? Well, Om would say they're building in exactly the right place.
Jeff Jarvis [00:58:22]:
Well, the market was mixed on this. Meta they were not happy with, because there's no strategy. Google, for a time today, was the largest company in the world. Right?
Leo Laporte [00:58:29]:
Right.
Jeff Jarvis [00:58:30]:
Beat Nvidia.
Paris Martineau [00:58:31]:
Yeah.
Leo Laporte [00:58:32]:
Google has a very strong story. He points out that all four of the hyperscalers are building their own processors. They don't intend to be beholden to Nvidia forever.
Jeff Jarvis [00:58:42]:
They're offering both right now.
Leo Laporte [00:58:45]:
They're offering both.
Jeff Jarvis [00:58:46]:
Yeah.
Leo Laporte [00:58:46]:
But he believes that the same way you're going to have these proprietary interconnects with these big four, you're going to have proprietary processors from these big four.
Jeff Jarvis [00:58:54]:
Well, this is Jensen Huang's argument about China, that CUDA is cut out of China.
Leo Laporte [00:58:59]:
Right.
Jeff Jarvis [00:59:00]:
And their market share there is now zero. Yeah, he's trying to get that market.
Leo Laporte [00:59:04]:
Yeah, I don't blame him. Oh, and then finally, in this same vein, from the California Water Blog, I thought this was really interesting. Jay Lund, who writes about water usage, is a professor at UC Davis and an expert on water and watersheds. He says, do not fall for the hype that AI is drinking all our water. In fact, he points out, in California, more water goes to beer than goes to data centers.
Jeff Jarvis [00:59:33]:
Well, hello, good use.
Leo Laporte [00:59:36]:
California has about 15 million square feet of floor space for data centers total. He does some math on total data center facility energy dissipation. He's basically saying, don't panic. The amount of water being used by AI data centers is not as significant as it's been painted.
Benito Gonzalez [00:59:58]:
He would be better off using the golf course thing over the beer thing.
Leo Laporte [01:00:02]:
I look at this golf course behind me, where maybe one or two groups of golfers go by a day. The amount of water they're pouring in
Jeff Jarvis [01:00:10]:
the desert part of the.
Leo Laporte [01:00:11]:
Yeah, we're on the dry side of the mountain. Anyway, I thought that was very interesting. He says, don't panic over AI data center water use in California. A recent study for central Arizona found that beer production consumed more water than data centers in that region. He says, but AI will bring more important concerns, such as the end of human civilization. So there's that.
Benito Gonzalez [01:00:35]:
Okay, but people want all that beer, too.
Jeff Jarvis [01:00:39]:
We're going to need more of that beer.
Leo Laporte [01:00:41]:
They might do the AI. All right, let's see. We've got about half an hour till our guest joins us. Troy Hunt is calling in from Australia, on the Gold Coast. You know his name. He's the creator of HaveIBeenPwned.com and is probably the number one guy in charge of revealing data breaches and letting us know if our passwords or emails have been discovered in a data breach. He's really an amazing fellow. He'll be joining us in a bit to talk about his use of AI.
Leo Laporte [01:01:13]:
As a matter of fact, more AI at work news. China has become the first country in the world to ban firing a worker because AI can do their job. No western country has done this. Interesting that China has. Well, it is, it is the People's Republic of China. It is a worker's paradise. So maybe that makes sense.
Jeff Jarvis [01:01:40]:
So here AI is being used as the excuse to get rid of people. There you'll just find a different reason you're getting rid of people.
Leo Laporte [01:01:48]:
Oh, maybe that's it. Yeah, that's a good point. Yeah.
Jeff Jarvis [01:01:51]:
And there it'll be harder then to justify the investment in AI if you can't recognize the savings in staff.
Leo Laporte [01:02:01]:
The Academy Awards have decided that no AI actor or AI written script can ever win an Oscar.
Jeff Jarvis [01:02:08]:
AI bigot.
Paris Martineau [01:02:09]:
That's so unsurprising.
Jeff Jarvis [01:02:11]:
Yeah. Protectionism is the first reflex.
Paris Martineau [01:02:16]:
I mean it's not even. It's just, it's a historically protectionist institution. They don't even allow more than one director to be nominated. Like, you can't have a co-director unless you are a famously acknowledged duo.
Leo Laporte [01:02:33]:
Oh, is that true? Oh yes.
Paris Martineau [01:02:35]:
The Coens. For the first couple of their films, one of them was nominated as director rather than both, and they kind of traded off, because they weren't allowed to be submitted as co-directors to the Academy until it officially acknowledged them as a famous duo.
Leo Laporte [01:02:54]:
In a Harvard trial of emergency triage diagnoses, researchers say AI actually outperformed real emergency room physicians. This is published in the journal Science. Large language models have eclipsed most benchmarks of clinical reasoning. One experiment focused on 76 patients who arrived at the emergency room of a Boston hospital. An AI and a pair of human doctors were each given the same standard electronic health record to read, typically including vital sign data, demographic information, and a few sentences from a nurse about why the patient was there. The AI identified the exact or a very close diagnosis in 67% of the cases. The ER docs were right only about half the time. I would expect that. Look.
Leo Laporte [01:03:47]:
I was just watching. Where was this? There's a lawsuit going on. Poor fellow died because when he got to the hospital, I think it was in western Massachusetts, it was a Yale New Haven satellite hospital. There was no doctor in the emergency room and they apparently didn't diagnose his health issue and he passed away. They're suing. I would imagine you're going to see AIs in more and more emergency rooms, but I would hope the doctors would also still be there.
Paris Martineau [01:04:19]:
Some important details of this from the actual study is the study authors emphasize that it only really is measuring text based performance in this case for humans and machines.
Leo Laporte [01:04:31]:
Absolutely.
Paris Martineau [01:04:32]:
Of course these large language models are going to be more proficient at that, because they're fundamentally text based. They have no
Leo Laporte [01:04:38]:
bedside manner at all.
Paris Martineau [01:04:39]:
And they say clinical medicine is multifaceted and awash with non-text inputs, including auditory information, such as the patient's level of distress, and visual information, for example the interpretation of medical imaging studies that clinicians routinely use. And existing studies suggest that current foundational models are more limited in reasoning over non-text inputs. Basically, this is an area these models are of course going to be incredibly adept at, because that is the training data they are working from. And by the same logic, the human doctors are going to be at a disadvantage, because the thing they've spent their careers doing is not just reading a couple of lines of text and making a clinical assessment based on that.
Leo Laporte [01:05:28]:
You may remember the Canadian novelist Robertson Davies, one of whose characters was a physician who smelled patients to diagnose them and was very good at it. And actually I don't think that that's made up.
Jeff Jarvis [01:05:41]:
I think dogs can smell cancer.
Leo Laporte [01:05:43]:
Covid. They can smell cancer and Covid. Yeah, yeah. The Wall Street Journal has a story on the quest to use AI to find new drugs. Companies like Eli Lilly and Roche are racing to build supercomputers to help fix the 90% failure rate in drug development. It's a hit-driven business, like Hollywood. If 10% of the drugs work, they're happy, but they'd like to improve that number.
Jeff Jarvis [01:06:09]:
When I spoke at a pharma company in Switzerland.
Paris Martineau [01:06:14]:
Why did you speak at a pharma company in Switzerland?
Jeff Jarvis [01:06:16]:
How to be a googly drug company.
Leo Laporte [01:06:19]:
So this is a while ago.
Jeff Jarvis [01:06:21]:
This was. Yeah. And I didn't quite realize that the pharma industry is entirely an industry of molecules. Right. Find a molecule, see if it does what we hope it'll do, and test it. And the testing, you know. I don't know where the industry is now, but because it's a secretive industry, because the expenses are so high, one of the things that would make the industry, the field, so much more efficient is if they would share all their failures. But instead they want their competitors to go through the same thing.
Leo Laporte [01:06:49]:
They want them to spend money too.
Jeff Jarvis [01:06:51]:
Yeah, yeah. And so we slow down.
Paris Martineau [01:06:54]:
A brief aside. I just realized, when I was looking at this large language model performance versus clinicians thing, that one of the authors is a visiting researcher at DeepMind.
Leo Laporte [01:07:06]:
Well, they need some AI expertise on the, on the team, of course. I mean, but you're right, they may have an axe to grind.
Paris Martineau [01:07:14]:
Yes, it's notable. Always, whenever you're reading about any sort of study, click through and scroll down to the conflicts of interest.
Leo Laporte [01:07:23]:
Oh yeah, oh yeah. And furthermore, Gary Marcus, we're going to bring him up again. He's the chief AI critic these days.
Paris Martineau [01:07:33]:
Do you think that's him or do you think that's Ed?
Leo Laporte [01:07:35]:
That's pretty big. But I don't know, Gary's the guy.
Jeff Jarvis [01:07:38]:
Gary's in the AI alone. Gary's got a crown.
Leo Laporte [01:07:41]:
Yeah, yeah, Ed's focused on finance.
Jeff Jarvis [01:07:43]:
Ed's got lots of things that he cares about.
Leo Laporte [01:07:45]:
But I think that Gary really is. I mean, I'm not sure I agree with Gary 90% of the time, but in this case I do agree with him. He said the real way to measure the success of AI in medicine is in patient outcomes. You know, does the patient get better? Not did you do the diagnosis right, but does it make a difference? And he's quoting a Nature article that says basically the same thing. And there is so far no evidence of improved patient outcomes.
Paris Martineau [01:08:18]:
This weekend my mother called me and was kind of complaining that she had been feeling kind of crummy lately, and then described what are all the classic symptoms of a woman experiencing a heart attack or stroke. She's like, yeah, there's a pressure in my tendon. And I'm like, hey, you've got to go to the ER right now, because those are the signs. And she's like, I don't know, I'm fine. We had a whole conversation back and forth. I was like, please. You know, she hung up and she was like, I'm going to go to sleep, but I'll, you know, call you in the morning.
Paris Martineau [01:08:50]:
And so I was freaking out. I speak to her in the morning and she's like, oh yeah, I did go, but it's because I woke up an hour or two later and then asked ChatGPT, and it said that I was experiencing all the symptoms associated with a stroke.
Jeff Jarvis [01:09:06]:
Your daughter should go.
Paris Martineau [01:09:11]:
And then she went and they were like, you're fine. They did an EKG and a bunch of stuff. They made her wait four hours to do another. Albeit the area that she lives in has terrible medical care. So I'm sending her to go get a follow up somewhere else.
Leo Laporte [01:13:24]:
There's a lot of old people in that area. They should have good doctors.
Paris Martineau [01:09:28]:
Yes, they should.
Jeff Jarvis [01:09:31]:
Yes. It's amazing to watch what happens to old people in Florida. Not that your parents are that old yet, but you get ignored a lot.
Paris Martineau [01:09:39]:
My grandmother, 10 years or so ago, died in a series of events. I mean, she wasn't doing great to begin with, but then she went in for, you know, some sort of heart-related issues, and the doctors accidentally gave her a bunch of Viagra instead of the medication that she was supposed to have, and then she died.
Jeff Jarvis [01:10:02]:
Oh, my Lord.
Leo Laporte [01:10:03]:
It killed your grandmother. Wait a minute.
Paris Martineau [01:10:05]:
I mean, that was one of the.
Leo Laporte [01:10:06]:
Viagra killed your grandmother is a very good headline. I'm just saying you could, you could get that.
Paris Martineau [01:10:11]:
Viagra killed your grandmother. Yeah.
Leo Laporte [01:10:14]:
You know, how did that.
Jeff Jarvis [01:10:16]:
How what did they. How. I'm nonplussed.
Paris Martineau [01:10:20]:
I mean, I'll. I actually won't ask for details because it really has traumatized my mother.
Troy Hunt [01:10:26]:
I.
Paris Martineau [01:10:26]:
At the time, didn't I? But I remember that being a.
Leo Laporte [01:10:32]:
I'll tell you what could happen.
Paris Martineau [01:10:33]:
Part of it that had wormed its way into my head, that I remember.
Leo Laporte [01:10:36]:
For the last 10 years, Viagra has been. There's a generic version of Viagra that is, I don't know if it's an off-label use, I think it is actually a labeled use, for lowering blood pressure.
Benito Gonzalez [01:10:52]:
So that's what it was originally developed for. Right? That's what it was for.
Leo Laporte [01:10:54]:
They may have thought her blood pressure was too high and given her this generic, which you would label the Viagra. But it's, it's really intended for reducing blood pressure.
Paris Martineau [01:11:06]:
Yeah. I think it might have been like they were trying to give it to her for some reason that was legitimate and related to that.
Leo Laporte [01:11:11]:
But not easily taking, I presume.
Paris Martineau [01:11:14]:
Yeah. I mean, I think it's that she was, you know, trying to get it up and it wasn't really working.
Leo Laporte [01:11:19]:
Grandma.
Paris Martineau [01:11:19]:
I think that she knocked it off.
Leo Laporte [01:11:22]:
Wow.
Paris Martineau [01:11:22]:
She'd been taking some other medication that interacted poorly. But I know that that was one of the precipitating factors in her final episode.
Leo Laporte [01:11:28]:
I think that's one where that clickbaity headline maybe isn't the full story. I would hope. I mean, I hope. But, you know,
Paris Martineau [01:11:36]:
that's the story I always think about when I think about that area's medical prowess.
Leo Laporte [01:11:42]:
Yeah. Speaking of laws, Maryland has become the first state to ban AI-driven price increases in grocery stores. How do they know? That's a good question. But I mean, you've seen, we've talked about it before, many stores now are using electronic readouts instead of price stickers, which gives them in theory a centralized place where they could change the price. Maryland is also banning DoorDash and other third-party delivery services from using customers' personal data to set higher prices. Now, that "higher prices" part is important, because some people say this is a bad law because it doesn't mention lower prices. And the fear is that what retailers will do is raise all prices and then lower them selectively, instead of raising them selectively.
Leo Laporte [01:12:36]:
The key is two customers should pay the same amount for the same item from the same retailer at the same time.
Jeff Jarvis [01:12:43]:
It's just that AI has such cooties now.
Leo Laporte [01:12:46]:
Yeah, I think some of that.
Jeff Jarvis [01:12:48]:
Anything that has AI with it is presumed to be bad.
Leo Laporte [01:12:51]:
Right. I mean this is surge pricing, isn't it? Right. This is the same thing.
Jeff Jarvis [01:12:54]:
Yeah, but they've been doing that for ages.
Leo Laporte [01:12:56]:
Yeah, yeah, but what you don't want is, is some sort of AI based redlining like oh, that person looks rich, let's raise the price.
Paris Martineau [01:13:04]:
And I mean, that does seem to be what's happening on a lot of these apps. One of my colleagues in December had done a really phenomenal investigation that found that Instacart does, or was doing, this for a lot of different foods. They gathered a group of three or four hundred volunteers all throughout the US who used identical phones, identical account patterns, all loaded up Instacart at the exact same time of day, same sort of situation, same WiFi connection, tried to control as many variables as possible, and added the same items to their cart. And they got wildly different prices.
Leo Laporte [01:13:47]:
Wow.
Paris Martineau [01:13:48]:
Just for the base price of a food item.
Leo Laporte [01:13:50]:
And that's a new factory service.
Paris Martineau [01:13:52]:
Yeah, you expect that. I think a common expectation people have is that you will see this in surge pricing for something like Uber, or maybe in your delivery fees on something like DoorDash, but not in the base food price itself.
Benito Gonzalez [01:14:11]:
And the airlines have been doing this forever.
Jeff Jarvis [01:14:12]:
Right?
Benito Gonzalez [01:14:12]:
The airlines have always been doing this.
Leo Laporte [01:14:14]:
No two passengers paid the same for their ticket, ever. Yeah. By the way, it was Consumer Reports that complained that this law didn't go far enough, that it didn't cover lowering prices as well. So, CR. Yeah, CR doing good work. But hey, it's a first step, and the law could be improved, of course. And California has made it legal for the California Highway Patrol to ticket a driverless vehicle.
Jeff Jarvis [01:14:41]:
How do they, how do they stop it?
Leo Laporte [01:14:43]:
Ah, that's a good. I guess you'd have to stand in front of it. I don't know.
Benito Gonzalez [01:14:47]:
You put a coat on its nose. You put a coat on its nose.
Leo Laporte [01:14:49]:
There you go. It's very easy. Actually, they don't need to stop it. They can just issue a notice of AV non-compliance. It goes right to the car's manufacturer. All they need is the license plate. So I think that's probably a much-needed law.
Leo Laporte [01:15:07]:
Yeah. We see Waymos on every corner in San Francisco now. Same in LA. And there have been issues. In fact, going back to China: China has apparently banned driverless vehicles now, because in Wuhan a whole bunch of driverless vehicles caused a massive traffic jam and couldn't be broken up. So China is now pulling back on licenses for driverless vehicles.
Benito Gonzalez [01:15:33]:
There's all kinds of videos from China of crazy automated vehicles doing crazy stuff.
Leo Laporte [01:15:37]:
Yeah, yeah. Those delivery trucks, or whatever they are, they're automated.
Jeff Jarvis [01:15:41]:
Meanwhile, I am so dying for a Chinese car.
Leo Laporte [01:15:44]:
Oh, a BYD or something. Oh, they're so way ahead of us.
Jeff Jarvis [01:15:47]:
And this is so far ahead of us.
Leo Laporte [01:15:49]:
Yeah. In fact, that was very controversial when Canada said, you know, we're going to let these Chinese cars sell into Canada.
Jeff Jarvis [01:15:56]:
I think if these manufacturers, Geely and BYD, just brought 100 cars into the U.S., the jealousy for them would be huge, and they'd pay whatever they've got to pay in the stupid tariffs.
Leo Laporte [01:16:06]:
Oh, they can't even cross the border. Doesn't matter.
Jeff Jarvis [01:16:09]:
They can't.
Leo Laporte [01:16:09]:
Can't. No. It's not just the tariff. They're legally not allowed.
Jeff Jarvis [01:16:12]:
Well, but I've read stories that people are buying.
Leo Laporte [01:16:14]:
Buy one in Canada. If you buy one in Canada, you cannot drive it into the United States.
Jeff Jarvis [01:16:19]:
I saw that people were buying them in.
Paris Martineau [01:16:21]:
How do they stop that?
Jeff Jarvis [01:16:23]:
Yeah, I don't.
Leo Laporte [01:16:24]:
At the border, do they just have
Paris Martineau [01:16:25]:
somebody there with a gun that's going to shoot your car if you come across?
Leo Laporte [01:16:28]:
Yeah. You're just not allowed to cross the border if you're driving in a Chinese vehicle.
Paris Martineau [01:16:32]:
But who's policing that?
Leo Laporte [01:16:35]:
The Border patrol? You think they don't do that? How do they look at the.
Paris Martineau [01:16:40]:
In addition to everything else you're doing as border patrol, you're policing the nationality of a car.
Leo Laporte [01:16:45]:
Yes, absolutely.
Jeff Jarvis [01:16:48]:
No Italian cars neither, because they're just,
Leo Laporte [01:16:50]:
they're just bad vehicles.
Jeff Jarvis [01:16:53]:
Well, the fact that the new Volvo EX60 or something, the V60 looks really good and Geely owns Volvo.
Leo Laporte [01:17:02]:
Right.
Jeff Jarvis [01:17:02]:
I'm starting to see things that
Leo Laporte [01:17:04]:
are creeping in as a joint venture of a Chinese company and
Jeff Jarvis [01:17:09]:
GM is trying to do a deal with Geely. So is it Geely or Greeley or.
Leo Laporte [01:17:13]:
I don't know. What's interesting, of course, is that it's the big three automakers in this country, and Tesla, that don't want Chinese vehicles, because they compete directly with them. They're afraid they would.
Jeff Jarvis [01:17:23]:
I haven't bought an American-made car in years, not since I was 20.
Benito Gonzalez [01:17:28]:
And there are so many Chinese car manufacturers. I see them in the Philippines. They're all over the place, and they're all different kinds of brands. There's so many.
Leo Laporte [01:17:39]:
Well, and there's some very inexpensive ones. They're so cheap. Yeah. This was a controversy that we talked about yesterday on Security Now. We're going to do a couple more stories, then we'll take a break and get ready for Troy Hunt. If he calls in, Benito, let me know. We'll immediately stop and talk to him. Google Chrome has started downloading a 4 gigabyte local model with every install.
Leo Laporte [01:18:10]:
So if you install Google Chrome today, you will get, without warning or asking permission, a 4 gigabyte version of.
Paris Martineau [01:18:21]:
Does that mean the updates too?
Leo Laporte [01:18:23]:
Yeah. Now there is a way to stop it. You can go into the browser flags and turn it off. This comes from a privacy guy, and I think he's got a good point. At a billion-device scale, Google Chrome of course being the number one browser worldwide, the climate costs of this download alone are insane. But you know, we actually had a little bit of a debate in the Club TWiT Discord. Darren Okey, who is a developer and an AI advocate, said no, this is going to be great, because then you will know a browser has a local model that you can call on. He says, I have.
Leo Laporte [01:19:02]:
It's hard for me if a user misspells Dubai when they're entering their name and address in a form. But the local AI could quickly catch it and fix it without me having to write, you know, code.
Jeff Jarvis [01:19:13]:
It's gonna be part of every browser. This is a bit of a get ready, a moral panic of AI cooties.
Leo Laporte [01:19:23]:
Oh, don't you wish he were in the White House now, behind the Resolute Desk. That was Jeff Jarvis with "moral panic."
Jeff Jarvis [01:19:32]:
I mean, it's gonna be part of every browser. And it's, oh, it's AI, it's downloading AI. Well, you're going to get functionality. That functionality is going to come from AI run locally, and you want it to run locally. And so I don't. This is a hoo-ha.
Leo Laporte [01:19:47]:
Okay, here's my. First of all, I understand people just say 4 gigabytes. No. And it could have been 22 gigabytes, but Google was able to squeeze the model down quite a bit. You have local models on Android phones, you have local models on iPhones. That's why they can do on-device AI. So it's not unheard of. I think my issue more would be that this is Google kind of big-footing web standards.
Paris Martineau [01:20:11]:
It doesn't seem like there's a way to opt out. Like, I just scrolled through this article. It doesn't.
Leo Laporte [01:20:15]:
Well, if you go to browser://flags and search for opt guide on device model.
Paris Martineau [01:20:26]:
Yeah, but I would argue that what you're describing doesn't seem like a way to opt out. You have to do some tinkering to.
Leo Laporte [01:20:36]:
It's not in the settings, but it is in the, you know, the flags, the browser flags. There are a lot more settings in there than they expose in the settings. Google clearly wants you to have Nano on your machine.
Paris Martineau [01:20:47]:
Well, I also bet that what they're doing then is adding everybody who has Chrome as another user, in a way that is going to boost their total user numbers for Gemini.
Jeff Jarvis [01:21:02]:
Well, and everybody who uses Google search is now using Gemini.
Paris Martineau [01:21:06]:
I know. But
Leo Laporte [01:21:08]:
they're going to quickly get to the point where a website will say, oh, you don't have Chrome, you don't have a local AI, you need to use Chrome. And I think that's Google's point. This is how you get Chrome to 100% market share: by putting in a new feature, not approved by the W3C or by the IETF, that many websites, not every, but many websites, are going to say, oh no, no, you need Chrome to use.
Jeff Jarvis [01:21:32]:
My Google Chrome on my brand new Neo is 1.4 gigs.
Leo Laporte [01:21:38]:
I'll tell you where to look. Actually, I'm not sure. On the Mac there's a file named weights, W-E-I-G-H-T-S, weights.bin. That's the model. It lives, at least on Windows, in that opt guide on device model folder. Here's the thing. Even if you delete it, not on my Mac, it returns at the next update.
Leo Laporte [01:22:03]:
You'll probably get it.
Jeff Jarvis [01:22:06]:
This machine is we cool?
Leo Laporte [01:22:10]:
Well this just started.
Paris Martineau [01:22:12]:
Does it work even if you have, as I do, turned off all AI in the browser? Ooh, I'm in the flags thing now, and I love whenever a settings page starts with an all-caps red warning: experimental features ahead.
Jeff Jarvis [01:22:29]:
Danger, danger, danger.
Leo Laporte [01:22:31]:
Will Robinson. There's a lot of good stuff in there, by the way, in the browser flags. It's always a good thing to look through. So this guy actually verified it on a freshly created Apple Silicon profile. So I think you are going to get it.
Jeff Jarvis [01:22:45]:
Can't wait.
Leo Laporte [01:22:47]:
But I don't. Yeah, I don't. Yes, I think there's two sides to this. What I think Google should have done is go to the web standards bodies. See, Firefox has already said, we're never going to do that. Vivaldi says, we're never going to do that. Vivaldi is based on Chromium. So that's the next question: will all Chromium-based browsers do this as well? I don't know.
Leo Laporte [01:23:09]:
The privacy guy also points out this probably is unlawful in the EU and the UK because of GDPR. And he also talks about the energy usage. The per-device cost of one Nano push is 0.24 kilowatt hours per device per download. Multiply that by a billion devices and that's a lot of energy, maybe 240 gigawatt hours or more.
Jeff Jarvis [01:23:43]:
Anyway, that's just in trying to.
Leo Laporte [01:23:45]:
Yeah, I don't know if
Jeff Jarvis [01:23:47]:
that's the real. Talk about TikTok and bandwidth hours. You know, just.
Leo Laporte [01:23:51]:
I think you're going to see local models everywhere. You see local models on every phone now.
Jeff Jarvis [01:23:55]:
You're going to want it because you don't want stuff going back up into the cloud. So you can't have it both ways here, right?
Leo Laporte [01:24:00]:
The creator of the "This is Fine" art, the meme, remember, you know that meme with the dog sitting?
Jeff Jarvis [01:24:06]:
Remember it? It's there every day in the cafe
Leo Laporte [01:24:08]:
and it's on fire. This is fine. I didn't know this, but a guy named KC Green created the comic. He says in a Bluesky post that he saw an ad in the subway station featuring "This is Fine," but it's from an AI company saying, my pipeline is fine. And he says, you stole it. That's mine. I got bad news for you, KC. I think this is, I don't know.
Leo Laporte [01:24:35]:
Can the memeification of a work of art suddenly make it public domain? I don't think so.
Paris Martineau [01:24:42]:
Yeah, I don't think so. I guess this is what happens when you work at a startup, but I am shocked that they got to the point of placing ads in the subway without anybody being like, oh, fair use? Copyright?
Benito Gonzalez [01:24:57]:
Well, you have to be rich enough to sue. So.
Leo Laporte [01:25:01]:
Yeah, that's right. He says, not anything I agreed to. He said that ad was stolen. Like, AI steals. He says, please vandalize it when you see it. And it looks like this particular one
Jeff Jarvis [01:25:14]:
was. In fact, the company said, we love his work, we're contacting him. Somebody didn't know what they were doing.
Leo Laporte [01:25:20]:
I think it's. Yeah. If it's a meme, you might assume that, well, it must be public now, right?
Jeff Jarvis [01:25:26]:
Except commercial use, it's just like.
Leo Laporte [01:25:28]:
Right.
Benito Gonzalez [01:25:29]:
Yeah.
Leo Laporte [01:25:32]:
Okay, now here's a weird one. Google's DeepMind has taken a stake, a minority stake, but a stake anyway, in an online game called EVE Online. Very popular. I don't know if it's still popular. Do people still play EVE Online, Benito?
Benito Gonzalez [01:25:47]:
I'm pretty sure they do, but whoever does is like super hardcore.
Leo Laporte [01:25:51]:
Yeah, that's. That's been around for quite a while. But presumably the point is to use it for AI training. The Icelandic company, Ferris Creation or Fenris Creations, creator of EVE Online. They say EVE Online requires skills that AI has not yet fully mastered. Oh, this is a director at DeepMind. Such as long-term planning and continual learning. And so by training AI on EVE Online, we can make AI better. I think that's kind of interesting. Now,
Leo Laporte [01:26:28]:
I don't think the players of EVE Online are going to be too thrilled about it.
Benito Gonzalez [01:26:33]:
It might become a story point, because, you know, what happens in EVE Online is the story evolves through the actual people playing the game. So it might become a villain or something.
Leo Laporte [01:26:42]:
Oh, that's one way to protest, huh? And finally, I thought this was a great story. This is worthy of a Paris Martineau article. In Wired magazine, I'll give credit to the author, Todd Feathers: He couldn't land a job interview. Was AI to blame? It's about a medical school resident, Chad Markey, who was trying to find a placement in a hospital once his residency completed. He had applied to many places, got not one interview, not one acceptance of his application, and started to think, maybe AI is involved. He had taken a number of leaves of absence in the process of getting his med school training. By the way, great references, great grades.
Leo Laporte [01:27:38]:
There was no reason why you wouldn't want to hire this guy. He was applying to psychiatric programs, but he had voluntarily taken three separate leaves of absence, and his record didn't say why. It turns out it was for a medical reason. His theory, and he spent a lot of time and wrote a lot of Python code to test it, was that the AI screening tools used by hospitals were screening him out because of those leaves. He would spend, according to Todd Feathers writing in Wired, the next six months writing emails, research papers, legal requests, and a constant stream of Python code, trying to peer inside an AI screener that he believes was keeping him from getting a job. It's a great story, highly recommend it. It's kind of a detective story. Eventually, while he wasn't able to prove that the screener was screening him out, he got some evidence that taking those leaves of absence without a stated reason might in fact be knocking his applications out of the running.
Leo Laporte [01:28:44]:
So he changed his record to say it was for medical reasons, and he immediately got a job offer and is going to Columbia Presbyterian Hospital, in their psychiatric program. So maybe he was right. The story does talk about a tool, an AI residency application screener, built by the company Medicratic.
Jeff Jarvis [01:29:09]:
But again, I'm not sure that that's an AI thing.
Benito Gonzalez [01:29:11]:
I think it doesn't need to be AI. Yeah, it doesn't need to be.
Jeff Jarvis [01:29:14]:
I would say if there's unexplained breaks in a resume.
Leo Laporte [01:29:17]:
Right.
Jeff Jarvis [01:29:17]:
It's going to raise a question mark. And if I have 100 good resumes, it's not going to make the top of the pile. Sorry.
Leo Laporte [01:29:22]:
Yeah, a lot of hospitals said. Well, it was one example, Yale, New Haven said, told Wired that they had tried Cortex the tool, but stopped using it. Two residency programs at Dartmouth Hitchcock Medical center used it before program directors reviewed the applications, but they stopped using it because they preferred their own screening methods. One of the things Chad found is that people's grades would change from moment to moment in the screening tool. That's not good. Cortex said. Oh, yeah, that's because the page refresh didn't happen. Yeah, that's the ticket in the academy.
Jeff Jarvis [01:30:00]:
In the academy, we have tons of search committees, and they're as bad as any AI.
Leo Laporte [01:30:04]:
Absolutely, absolutely. Yeah. He got a job, and that's the good news. It's a good story, too; he really did some detective work. Well, hey, Troy Hunt has joined us. He is on the line from the Gold Coast of Australia. We will get to Troy Hunt, the creator of HaveIBeenPwned.com, and talk to him about his AI bot Bruce, right after this.
Leo Laporte [01:30:25]:
Well, Troy, it is such an honor and a pleasure to have you on the show. Of course, we're all big fans. Have I Been Pwned is an Internet institution. I was saying earlier that it's kind of like that XKCD comic about open source projects, where the entire enterprise world is suspended on a little brick maintained by one developer in the middle of nowhere. You and your wife Charlotte are in fact kind of the sole runners of Have I Been Pwned. Is that right? Or do you have a team now?
Troy Hunt [01:30:55]:
Well, we have one more person. We have one developer in Iceland who is a good friend, similar sort of background, someone we just know really well and trust. But it's really just the three of us now.
Leo Laporte [01:31:05]:
What a great story. So you were at Pfizer, you've been a developer for many years and was this just a hobby project you created?
Troy Hunt [01:31:14]:
Yeah, it was. I was there for 14 years, and I started there as a developer in 2001, and the same thing happened to me that happens to every software professional in a big enterprise. They say, you're doing good, but if you want your career to progress, you've got to stop doing that and you've got to become a manager. And that kind of sucked my soul, which is a shame, because I liked code. But I also liked money. So what are you going to do?
Leo Laporte [01:31:41]:
It's Sophie's Choice. It's not an easy thing to decide. So what did you decide?
Troy Hunt [01:31:45]:
Well, I did both. I stayed at Pfizer, I started a hobby project for fun, and I started building out an independent life, initially doing online training for Pluralsight, and started to create, I guess, a life of my own. And then they very kindly made my job redundant and gave me a couple of years' worth of pay to leave, which was fantastic timing.
Leo Laporte [01:32:04]:
Oh, that's good, actually. That's a good thing. And then that sort of coincided, I understand, with a big breach at Adobe.
Troy Hunt [01:32:13]:
Yeah. So Adobe, October 2013, was the catalyst for Have I Been Pwned. I started with that one and a few other really little ones, and it was pretty much just all Adobe and a sprinkling of other things. And I thought, oh, you know, this will be fun. A few of my friends will use it. It'll never be serious. Which is why I gave it such a stupid name. And then it just became popular and it just kept going.
Leo Laporte [01:32:35]:
It is funny because over all the years I've referred people to Have I Been Pwned, I always have to explain "pwned."
Paris Martineau [01:32:44]:
I have to say, if you don't know what "pwned" means, you weren't on the Internet at a certain time.
Troy Hunt [01:32:50]:
But it becomes a fun discussion point, because people are like, look, now that it's big and you're taking it seriously, should you change the name so that people don't think it's just some idiotic project? I was like, no, it's a conversation piece now. I like it.
Leo Laporte [01:33:00]:
How big is the database now?
Troy Hunt [01:33:04]:
So in terms of raw records, that's probably the best number to refer to. What does it say on the front page? It's changing every day at the moment: 17.5 billion records. That only consumes about one and a quarter terabytes worth of data, because the only thing we load is email addresses. So 17 and a half billion instances of an email address in a breach, which comes to something like 6 billion unique email addresses, and each one on average appears a few times.
Benito Gonzalez [01:33:33]:
Wow.
Jeff Jarvis [01:33:34]:
So you don't record any further data about that. You're just saying this email address has been compromised?
Troy Hunt [01:33:39]:
Yeah, correct, correct. And there's a part of me which is like, if we had the other data there and people could literally take back control, you know, they could see what was my date of birth and my home address and everything that was exposed. That would be great from an empowerment perspective. But the risks involved in sitting on all of that data, not to mention the processing effort... Email addresses are easy; we can regex out email addresses. But street addresses, phone numbers and things, that's hard.
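To illustrate what Troy means by regexing out email addresses, here's a minimal sketch. The pattern and the sample dump rows are our own illustration, not HIBP's actual ingestion code:

```python
import re

# Deliberately simple pattern: the goal is to pull email-shaped tokens
# out of a raw dump, not to validate them against the full RFC grammar.
EMAIL_RE = re.compile(r"[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}")

def extract_emails(blob: str) -> list[str]:
    """Return every email-looking token found in a raw breach dump."""
    return EMAIL_RE.findall(blob)

# Hypothetical dump rows mixing emails with other fields:
dump = "id=7,alice@example.com,1985-02-11\nid=8,bob@test.org,+1-555-0100"
print(extract_emails(dump))  # ['alice@example.com', 'bob@test.org']
```

Street addresses and phone numbers have no such reliable shape, which is exactly why Troy says they're hard.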
Leo Laporte [01:34:05]:
You do do something that's really great, and I don't know if people are aware of it. They often go just to the front page, enter their email address, and say, oh yeah, my email address is out there. But there's a passwords feature as well, which I really like, and I think it might scare people, if they click that link at the top of the page and enter a password (I'll just put in monkey123), that maybe they're giving you their password. But that's not the case, right?
Troy Hunt [01:34:33]:
No. So we've got a really cool anonymity model behind that. When you enter that password, it gets hashed client-side, and then only five characters of the hash get sent off to the service. The service comes back with hash suffixes, and it matches it all up. Anyone who looks at the dev tools can see what happens. And actually, that API that sits behind it is now hit 18 billion times a month, because lots of organizations have built this into their registration flow. What we're trying to do is say, look, when someone comes to sign up, the BBC, for example, is a big user. If you try and sign up on the BBC and you're using that password, even if you capitalize a letter and then you put a number and an exclamation mark at the end to make it "secure," if it's been seen before, it's going to be higher risk. So yeah, we do 18 billion checks a month on that at the moment.
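The flow Troy describes can be sketched in a few lines. This assumes the publicly documented Pwned Passwords range endpoint (`https://api.pwnedpasswords.com/range/<prefix>`, which returns `SUFFIX:COUNT` lines); the sample response body here is made up, and the network call itself is left as a comment so the sketch runs offline:

```python
import hashlib

def split_hash(password: str) -> tuple[str, str]:
    """Hash the password client-side (SHA-1, uppercase hex) and split it:
    only the 5-character prefix ever leaves the machine."""
    digest = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    return digest[:5], digest[5:]

def count_in_range_response(body: str, suffix: str) -> int:
    """The service returns every SUFFIX:COUNT sharing our prefix;
    matching our own 35-char suffix happens locally."""
    for line in body.splitlines():
        candidate, _, count = line.partition(":")
        if candidate == suffix:
            return int(count)
    return 0  # never seen in a breach

prefix, suffix = split_hash("monkey123")
# Real lookup would be: GET https://api.pwnedpasswords.com/range/{prefix}
fake_body = f"{suffix}:1234\n0123456789ABCDEF0123456789ABCDEF012:7"
print(count_in_range_response(fake_body, suffix))  # 1234
```

The service only ever sees a 5-character prefix shared by hundreds of unrelated hashes, which is what makes the model k-anonymous.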
Leo Laporte [01:35:23]:
That's amazing. And this is such a service, such an important service to us as Internet users. We're very grateful. How do you make money on this?
Troy Hunt [01:35:34]:
Well, for a long time it didn't. That's the first easy answer. And now it does, in a few different ways. There's some 1Password product placement, which you'll see on the front page. That came along after people would search for themselves and they'd get a result like, hey, you've been in five data breaches, good luck. And then they're like, what do I do? And obviously having strong, unique passwords was a key thing there, and I had an existing relationship with them. They actually bake Have I Been Pwned into 1Password as well, so if you're a 1Password user, it will tell you if your stored passwords have been in breaches or your email address has been in breaches. So that was a nice relationship.
Troy Hunt [01:36:11]:
And then if you're an organization monitoring a larger domain, you need to pay to get access to the results for that domain. There's just an API key slash dashboard there. And that's pretty much it.
Leo Laporte [01:36:27]:
So do other password managers use your database? I mean, I've seen other password managers warn me that my password's been seen in a breach. How do they know that?
Troy Hunt [01:36:38]:
Well, there's a bunch of different services out there that do similar things. I honestly don't pay much attention; I certainly didn't look at them before I built this. For things like the password search feature, we've made all that data free and open source as well. So there's about a billion unique passwords, and any password manager is free to download all of that and integrate it, and they may use it. I don't know.
Troy Hunt [01:37:00]:
That's kind of the joy of it. We literally just don't know.
Leo Laporte [01:37:02]:
It's awesome. So, one of the reasons I wanted to have you on Intelligent Machines, where we talk about AI, is Bruce. You want to tell us a little bit about Bruce?
Troy Hunt [01:37:12]:
It hasn't been Bruce's finest day today.
Leo Laporte [01:37:15]:
Oh, no.
Paris Martineau [01:37:16]:
Oh, what did Bruce do? All right.
Troy Hunt [01:37:18]:
I actually tweeted this only about an hour ago. So Bruce is drafting responses to Zendesk tickets, and he drafts this response today for someone asking about a subscription. And Bruce is like, the subscription starts at US$3.50 a month. And I'm like, where did that number come from? These are Bruce's exact words. He says: honestly, I don't know. It didn't come from the markdown file with the pricing, which has the correct figure, and it didn't come from our lessons markdown file. It's a hallucinated number. I made it up.
Troy Hunt [01:37:49]:
He literally said, I made it up. There you go.
Jeff Jarvis [01:37:53]:
But he's honest about being.
Leo Laporte [01:37:55]:
He admits it. That's his.
Troy Hunt [01:37:57]:
Then he doubles down, because there's another message as well, and he's like, yeah, it's all hallucination. But I mean, this is why everything is human-approved before it goes out. Because even stuff that should be really, really simple and clear-cut, he can make mistakes on. And if we represent a price to someone, even if it's the bot, I think we're kind of obligated to stick to it.
Paris Martineau [01:38:17]:
Yep.
Jeff Jarvis [01:38:19]:
I've got a fundamental question, Troy, because I've long thought... I'll put it this way. What if we presume that all of our emails, all of our Social Security numbers, all of our addresses, all of our ages, all of our mothers' maiden names are just out there? Rather than concentrating on the leak side, the other side of security, what should be done to make none of that matter?
Troy Hunt [01:38:47]:
Yeah, it's a good question, and I think that's the safe assumption at this point in time. You know, now we'll have some major data breach and I'll do a lot of press, and they'll say, what should people do now? It's like, well, they should do all the stuff that they should be doing anyway. This doesn't change the fact that you should have strong, unique passwords and all the rest of it. But I think the question, Jeff, comes back to things like, how do we do knowledge-based authentication in particular? So I got invited to speak at Congress in the US, the big important Congress, not the one here, some years ago.
Jeff Jarvis [01:39:19]:
You have a nicer Congress than we have.
Troy Hunt [01:39:21]:
Yeah, I'm not even sure what we've got, to be honest. But anyway, the premise of that hearing was: how do we do authentication using knowledge-based information in a post-breach world? Social Security numbers in the US are a great example. They're meant to be secret, you give them to all of these different organizations, and they're very, very hard to change if you need to later on. So why are we using them as a form of knowledge-based authentication? Why are we using dates of birth as a form of knowledge-based authentication? We have here, and I'm sure you do over there as well, time and time again, a bank or a telco will say, we just need to make sure you are who you say you are: what's your date of birth? And you're like, every time, it's the thing in all the data breaches. It's also the thing that I tell my friends, because I like cake and presents, and a bunch of people put it on their social media. So I think the bigger fundamental question here is how do we do knowledge-based authentication, or how do we do identity verification, being conscious that all of the KBA stuff we've got is either compromised or easily discoverable.
Leo Laporte [01:40:22]:
So how do we do it? I mean this is one of the fundamental problems of the Internet is authentication.
Troy Hunt [01:40:28]:
Well, you give the government all your data and then you have a government identification and everybody loves that.
Paris Martineau [01:40:34]:
And there's no problems with it. Right.
Leo Laporte [01:40:36]:
That's kind of what Estonia does. Yeah.
Jeff Jarvis [01:40:38]:
So is Estonia in better shape than the rest of us?
Troy Hunt [01:40:41]:
It's a good question. They're a little bit outside my usual remit. I don't think they're probably much worse off, to be honest.
Leo Laporte [01:40:47]:
But they created a national digital ID, and the crypto used on the ID was flawed, actually, so they had to retract the card and reissue it. So even, you know, something as...
Jeff Jarvis [01:41:02]:
Interesting. India has a national ID as well.
Leo Laporte [01:41:04]:
Yeah, it's always problematic. What about Sam Altman's iris-scanning World thing?
Paris Martineau [01:41:12]:
Do you think the orb will save us or.
Troy Hunt [01:41:14]:
Yeah. I think with every one of these services, if you're being entirely objective about it, we're far better off doing this at a federal government level, and in a perfect world we'd all implement some sort of basic structure that we could replicate across the world as well. We're better off doing this at a federal government level than with any individual tech company. But nobody trusts the government, even though they have most of the data anyway. I mean, we got digital driver's licenses a while ago, and I did a talk at one point, and I remember a lady saying, you know, I really don't trust the government with my data to do digital driver's licenses. I'm like, who do you think has all your driver's license data already? They're the ones who issue you the driver's license. They have this data; they just print it on a card at the moment and send it to you. So, you know, there's that. And of course we're now in an era of increasing age verification all around the world as well.
Troy Hunt [01:42:05]:
So how do we do age verification, identity verification and maintain privacy? And they're all hard problems.
Leo Laporte [01:42:11]:
Yeah. And it doesn't sound like we have any obvious solution. I don't think anybody in the United States wants to trust our government to identify us, but they already know all that stuff anyway. Yeah.
Troy Hunt [01:42:24]:
We were the first country in the world here in Australia to roll out a minimum age of 16 for social media. It rolled out in December last year, which I found fascinating because I, I had a son who turned 16 just before that and a daughter who turned 13 just before that. So she just got social media and now she spends her days figuring out which social media platform she can game to create another account on.
Leo Laporte [01:42:46]:
Exactly.
Troy Hunt [01:42:46]:
And how long it lasts. So it's just sort of fascinating to see all this roll out.
Leo Laporte [01:42:51]:
At the moment it's just teaching kids to be hackers. And of course it's been very good for, for VPN companies in Australia as well.
Troy Hunt [01:42:58]:
Yeah, there's that. Look, it's like a lot of the world. I mean, the UK is doing similar things, a lot of Europe's doing similar things, and I think you've got various states in the US at least proposing it as well. And at its heart there are elements of truth to this. We had to have a chat with our 13-year-old daughter the other day, saying some of that stuff you really shouldn't be posting on social media. This is exactly why our government is saying you should wait a few years; please don't continue to prove me wrong that you're mature enough to have it. But there is an element of truth to delaying access to a lot of aspects of social media for children.
Leo Laporte [01:43:35]:
Are you concerned... you know, lately, one of the biggest creators of breaches has been Shiny Hunters, who are, I think, chiefly using social engineering techniques. They're very good at social engineering. But are you concerned about AI-generated attacks, AI-generated breaches? Is that something that seems to be increasing?
Troy Hunt [01:43:52]:
Yeah, look, certainly very concerned about the potential. But I mean, Shiny Hunters is a great example where these are kids, or at least teenagers. When they eventually get arrested, which will happen if they keep this up, we'll see. Some of them have been, you know. One of them did the PowerSchool hack and then successfully ransomed them; he's just been sentenced to, I think, four years. He's 19 years old; started when he was 14. So they'll all be that demographic, but they're doing phishing.
Troy Hunt [01:44:22]:
They're literally doing voice phishing. These are kids or very young men calling up telephone numbers, talking to humans, and it's working very well just with humans. We're all worried about the potential of AI, but I'm honestly yet to see data breaches of this nature escalating because of AI. They're escalating because the likes of Shiny Hunters have managed to find the right tools and techniques to apply over and over again very successfully, and they've just been very good at getting into a lot of Salesforce stuff.
Leo Laporte [01:44:50]:
They recently had an ad out: they're looking for female voices to impersonate mothers and wives, because they only have a bunch of young guys right now.
Troy Hunt [01:44:58]:
But that's an interesting partial answer, isn't it? I mean, how much voice manipulation software is there out there? And they're still saying, we'd like a human in order to do this. And these guys are the experts.
Leo Laporte [01:45:08]:
Yeah, no kidding. Sad to say. So really, the real issue, you think, is training employees, training users to be smarter, not to fall for this stuff?
Troy Hunt [01:45:20]:
Well, it's always a bit of a shared responsibility. I mean, I. I fell for a phishing attack about a year ago now.
Leo Laporte [01:45:26]:
Yeah. By the way, you were very brave to reveal that, and I thought it was really important that you did, because it shows anybody can be bitten.
Jeff Jarvis [01:45:36]:
Well, Leo got bitten recently. Nice headphones.
Leo Laporte [01:45:43]:
But I was inspired by you, Troy, to reveal it, to talk about it. Because I think the more we talk about our experience, the more likely people are going to start saying, oh yeah, this could happen to me too.
Troy Hunt [01:45:54]:
I think, in fairness, I had a bit of a luxury, insofar as it's obviously a very teachable moment. It opened my eyes a lot as well. But it was also very low-impact data: it was, what, 15,000 email addresses, I think, from my mailing list. It wasn't sensitive PII, it wasn't a Have I Been Pwned exploit or something like that. So it was the sort of thing I could actually get a lot of mileage out of without it being too impactful. But I just found it particularly ironic, because I was in London at the time. The day before, I'd been with the National Cyber Security Centre and the government there, having a meeting about how we can drive passkey adoption.
Troy Hunt [01:46:33]:
Because passkeys are phishing-proof second factors of authentication, unlike OTPs, which is what ultimately got phished for my Mailchimp account. And we're like, oh, we need to come up with some good ways of demonstrating the importance of passkeys. And the next morning, here we are.
Leo Laporte [01:46:51]:
You did. So, you've made it possible for agents to use Have I Been Pwned as well, because you have APIs, right? Is there an MCP server?
Troy Hunt [01:47:01]:
Yeah, there's an MCP service; Stefan stood one up. So we have that, and then of course we have all the API documentation, which the AIs have actually been very good at consuming and figuring out how to call the APIs as well. So we're sort of covering all our bases.
Leo Laporte [01:47:16]:
And what do they do with it? Do they just look up, you know, breaches or.
Troy Hunt [01:47:20]:
Well, it's a question of what services the API implements, and the predominant ones there are searching for an email address, which is rate-limited depending on the key you have, and searching for domains that you've already proven control of. So let's say I was back at Pfizer and I had a security role there; I might want to monitor pfizer.com. Now, by using particularly an agentic AI bot that can not just run on-demand commands but do things like monitor and query, I could just jump in there and say, hey, tell me how many senior executives have been in a data breach recently. And so long as I could map that data of senior executives, we're good. Or: tell me how many people on our domain are in the latest data breach. Tell me who has been in a sensitive data breach. It just becomes like a conversation with your AI of choice, when it can interface between that discussion and the published APIs, and do it securely with an API key that only gives you access to the things that you have the rights to.
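As a rough illustration of the kind of call an agent would make under the hood, here's a sketch against HIBP's published v3 API (the `breachedaccount` endpoint, authenticated with an `hibp-api-key` header). The user-agent string, email, and key are placeholders, and the request is built but deliberately not sent:

```python
from urllib.parse import quote
from urllib.request import Request

API_ROOT = "https://haveibeenpwned.com/api/v3"

def breached_account_request(email: str, api_key: str) -> Request:
    """Build the authenticated lookup an agent (or MCP server) would issue.
    The key scopes what you're allowed to see, which is how the
    'only the things you have the rights to' part is enforced."""
    url = f"{API_ROOT}/breachedaccount/{quote(email)}"
    return Request(url, headers={
        "hibp-api-key": api_key,             # your subscription key
        "user-agent": "example-hibp-agent",  # hypothetical agent name
    })

req = breached_account_request("alice@example.com", "placeholder-key")
print(req.full_url)
# https://haveibeenpwned.com/api/v3/breachedaccount/alice%40example.com
```

An agent framework would wrap a call like this behind a natural-language tool, so the user only ever asks the question, not the endpoint.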
Leo Laporte [01:48:20]:
That's great. Thank you for doing that; I think that'll be very useful. I've been kind of on a campaign to get everybody to offer an API. I don't think you necessarily need an MCP, but just offering some sort of interface that agents can use is incredibly valuable. So, is Bruce retired? Are you going to spank him, or are you just going to go on?
Troy Hunt [01:48:40]:
Look, we're progressing very gradually with Bruce at the moment. Normally Charlotte would do most of the tickets, and she's like, can I get Bruce to answer questions and things? I'm like, well, it depends; how many markdown files do you want to edit? She's on tickets, and she basically does everything a non-technical person can do, including all the formalities and legal things and accounting, and then I do the rest. But unfortunately Bruce at the moment is still a little bit technical. We are refining it over and over and over again, and I'm understanding more about, if we have particularly an agentic AI like OpenClaw literally running here on my desk, how does it actually query? What's it actually doing? Well, it's creating a bunch of Python, so we can version those scripts and see how they change over time. It's creating a bunch of markdown files, and it's figuring out what are the right ones to send up to Claude when it actually needs to ask a question of an LLM to then format a human-readable response. So what we're trying to do is get to the point where, let's say, we're getting 80% of the answers right, and then we'll go, okay, what are the things that we can reliably answer accurately? There'll be things that are very discrete and well known in their nature: how do I opt out, how do I move my data, how do I cancel the service, things like that.
Troy Hunt [01:50:00]:
So my goal is to be able to really consistently, reliably identify the things that have easy answers and let him run autonomously on those. And then over the course of time, we'll just have to see how much confidence we get in him to be able to answer more complex things. It's really just like a junior employee, isn't it? You hire someone, you give them a little bit of leeway to begin with, and then as you get confident in them, you give them more and more rope.
Leo Laporte [01:50:24]:
Yeah, I was surprised at the hallucination. Actually, that's a pretty bad mistake. I guess you are too.
Troy Hunt [01:50:32]:
Oh, I don't even know what to do with that. I think part of what we realize is, obviously, that the responses that come from Claude are only going to be as good as the input data. And he's starting a new context all the time, because we don't want these massive contexts with the whole chat history attached to every request; otherwise he'd burn gazillions of tokens. So if he starts a new context, what information do we need to feed into it? At one point I said, look, what's the cost? For every ticket that you look at and find an answer for, what is the cost? And he's like, it's seven cents. Okay, what if you literally loaded every single piece of information you have and sent that up in every context? He said, well, that's 70 cents, and we get about 15 tickets a day. It's not a lot. So for the sake of 63 cents a ticket, to save me having to go through things like this, I'll wear that cost. I think that's a good ROI.
Troy Hunt [01:51:30]:
So I think a lot of the challenge now is figuring out what is the right information to actually send up to that LLM so there's enough context that we can get good answers and the cost is trivial if we do it right.
Leo Laporte [01:51:41]:
Yeah. So you're using Opus 4.7 for this?
Troy Hunt [01:51:45]:
Yeah, I think so. Whatever was the latest about three weeks ago, when I last looked.
Leo Laporte [01:51:51]:
Yeah, that's 4.7.
Jeff Jarvis [01:51:51]:
Yeah.
Leo Laporte [01:51:52]:
Yeah. I think a lot of times people choose Sonnet or Haiku for low-value stuff, like maybe writing an answer, but I think you're right. Having the information there in the context is actually pretty important, because it will make something up if it doesn't know. Yeah. So, do you anticipate more uses of AI in the future at Have I Been Pwned?
Troy Hunt [01:52:16]:
Yeah, totally. Look, to be honest, a lot of this has just been figuring out where it makes sense. And I suspect that everyone listening to this is in a little bit of a similar boat, where you see so much stuff, you're flooded with AI stuff through tech media, mainstream media, walking down the street, and everyone's trying to figure out which bits are actually useful versus which bits are misleading, deceptive, or counterproductive in other ways. I think applications like this with Bruce are very good, so there's definite value there; we're going to keep making that better. The applications we've been looking at with the MCP server, and with the ability to query data in natural language, have massive potential, because that opens up the audience of who can talk to the data and how easily they can do it. So one of our next tasks is to implement an OAuth layer so that we can add these as extensions or skills or whatever the term for each LLM is. We want your average normal person to be able to go into, let's say, ChatGPT, add Have I Been Pwned as a skill, do the OAuth dance, and then just be able to talk to their LLM and say, tell me who in our organization has been breached.
Troy Hunt [01:53:28]:
And we want that to work for people who don't need to know what an API is or how to write code. I think there's enormous potential there, particularly if we can start giving them more useful insights into what to do next. When you do find people in a data breach, there'll be another breach tomorrow that'll get loaded into Have I Been Pwned. How do we help people understand what they actually need to do to protect themselves and their organization after that? So I think there's huge potential there.
Leo Laporte [01:53:55]:
You've written a robophobia equality policy. Do you want to talk about that?
Troy Hunt [01:54:03]:
Some people didn't like that.
Leo Laporte [01:54:06]:
I like it. I think it's good. You're asking people to treat the bot with tolerance, respect and basic courtesy, regardless of its artificial origin. Is that what you teach your kids?
Troy Hunt [01:54:17]:
Pretty much. It's a little bit like teaching kids, in fact; in many ways the whole AI bit is a little bit like teaching kids. It is part tongue-in-cheek, and that should be obvious to anyone who reads the description here. And of course, this was an AI-generated policy as well. But the context was, we had a customer asking some questions on our support system who got really quite obnoxious at Bruce and kept asking for a human, even though Bruce's answers were perfectly correct. And part of the joy of Bruce is that I can be out getting a coffee, and I just pull out my phone and I'm in Telegram going, yes, send the answer, it's fine. So the effort on me is very, very low, and the effort on the person who was starting to argue with Bruce was very, very high, because they were typing full messages every time.
Troy Hunt [01:55:03]:
So I'm thinking, I can just do this all day long; this isn't a hard problem for me. But I think there's a grain of truth in here, that people insisting they will not converse with an AI is possibly some sort of discriminatory anti-pattern. So we wrote this policy a bit tongue-in-cheek. It's not published anywhere other than on this blog post; it's a bit of fun. But particularly as we get larger and larger, our challenge is how do we keep it really just Charlotte and I answering tickets, and not have to hire other people. The way we're going to do that is by having the likes of Bruce, and that means that people need to be able to engage with Bruce, have a reasonable discussion with him, and treat him as they would treat us.
Jeff Jarvis [01:55:52]:
You have a bad precedent in voice mail jail. I'm the guy who's constantly screaming at the phone agent.
Paris Martineau [01:55:59]:
Agent.
Leo Laporte [01:56:00]:
My wife does that too, but with "operator." I call for an agent; she calls for an operator.
Troy Hunt [01:56:06]:
And look, I do the same, and usually we do that when we're getting bad answers. I think there's a bit of an edge case here, where you're getting an answer which is a good answer, but it's an answer you don't like.
Jeff Jarvis [01:56:17]:
And if the agent is actually authorized to solve your problem.
Troy Hunt [01:56:21]:
Correct. So yeah, I think a lot of our challenge is how do we make sure he gets good answers. And what's not immediately clear, when Bruce signs off as Bruce the bot, is that every response he's sending, we as the humans see. So we know that he's giving the right answers; he's just making it much faster for us to give not just the right answers but much more comprehensive answers, because he can obviously just spin out all that content directly. So I'm really hoping that when people read Bruce's responses, they're like, wow, that's actually a really good response. And we are getting a lot of "thank you, Bruce" responses, which I think is quite funny.
Jeff Jarvis [01:56:57]:
That's nice.
Leo Laporte [01:56:58]:
You're nice.
Troy Hunt [01:56:58]:
And I wonder, is there a psychology around how people treat AI, or how people treat bots? If you're abusive to a bot, does that say something negative about you as a person? Is it good human practice to be polite to the bot, to say thank you to the bot?
Leo Laporte [01:57:14]:
Yes, that's what I think, although it is a much-debated, hotly debated topic. So let me ask about the technical side: it sounds like you're running it on OpenClaw? How are you running the agent?
Troy Hunt [01:57:26]:
Yeah, that's just running on OpenClaw. So it's literally running on a Mac Mini under my desk at the moment.
Leo Laporte [01:57:33]:
When you said Telegram, I thought, that must be how you communicate with the agent: through Telegram.
Troy Hunt [01:57:38]:
Yeah, yeah. So there's a Telegram bot, and this just seemed to be the path of least resistance with OpenClaw. Look, I didn't spin it up with the intention of doing the Bruce thing. I originally spun it up with the intention of, can it help me analyze data breaches more quickly and discover, for example, what sorts of data classes are impacted? I've spoken less about that online, but it does do that. So very often I'll say, look, there's data that's been published at this URL; go and grab it and tell me what's in there. It's also got an X API key, so it monitors some lists I've got on X.
Troy Hunt [01:58:13]:
I've got people that tweet a lot about data breaches. So I get 12-hourly reports here: all the most recent data breaches that have been out there, that have been communicated. I ask it to do things like, when there's a tweet that's got a screen grab of some forum, it's doing image recognition to try and figure out where the forum is. So I'll say, have a look at this tweet, go find me the data. And it goes off to all the various forums and it finds the right content, and sometimes even follows the links through and downloads the data so I can analyze it. So it's doing a lot beyond the breach stuff.
Leo Laporte [01:58:42]:
Very helpful for you, too, to monitor. I mean, I was wondering how you monitor all those breaches. So that's a real tool for that. That's fantastic. And when you say forums, you mean the hacker forums where those breaches end up. Wow.
Troy Hunt [01:58:53]:
Yeah, because this is where most of them appear. So I've made this list public now. On my Troy Hunt X profile I've got a list called Data Breaches, and it's got, I think, 17 or 18-odd people in there. That's what's being monitored. Let's have a look. 17 members on my Data Breaches list. That's what is being monitored by OpenClaw via the X API. So it's doing it the right way. It's not scraping it or anything.
Troy Hunt [01:59:21]:
And then, it's fantastic, it's amalgamating that into a 12-hourly report. I was doing it daily, but there are too many data breaches. So now, no kidding, 4:00am, 4:00pm, I get a report.
Leo Laporte [01:59:31]:
Wow, that's great, Troy. It's been an honor to talk to you. You're such an important part of our Internet community, keeping us safe. Have I Been Pwned is, you know, one of those sites, like Wikipedia, like the Internet Archive, that really show how amazing the Internet can be. And you know how long you've been doing this? Since 2012.
Troy Hunt [01:59:53]:
2013, actually. The 4th of December, 2013.
Leo Laporte [01:59:56]:
Yeah.
Jeff Jarvis [01:59:56]:
Wow.
Leo Laporte [01:59:57]:
Long time. And do you anticipate doing it forever?
Troy Hunt [02:00:02]:
Well, eventually it's going to stop.
Leo Laporte [02:00:04]:
Bruce can take over. Bruce will take over.
Troy Hunt [02:00:07]:
I don't know when or why. There is a succession planning discussion. We'll have to have it sometime. But I still love doing it. I'm still fit and healthy and young enough to keep going, so we'll see. The data breaches aren't going to stop. I know that much.
Leo Laporte [02:00:19]:
Well, thanks to you and thanks to Charlotte for the work you do. Have I been pwned is amazing.
Troy Hunt [02:00:23]:
Awesome. Well, thank you very much, everyone.
Leo Laporte [02:00:25]:
Thank you, Troy Hunt. Take care.
Troy Hunt [02:00:27]:
Cheers.
Leo Laporte [02:00:28]:
That's Troy Hunt, HaveIBeenPwned.com. Our picks of the week coming up in just a bit. On we go with the waning hours of the show and our picks of the week. I have one, Paris, for Gizmo, and
Paris Martineau [02:00:43]:
look who's right here. Hello.
Leo Laporte [02:00:47]:
Does Gizmo ever type on your keyboard?
Paris Martineau [02:00:52]:
She does not type on it, but as I've been working a lot this week, she has been obsessed with sitting right in front of my keyboard, rubbing all over the Scarlett, and then lying in a way that her tail and feet hit the top part of my keyboard.
Leo Laporte [02:01:09]:
Never a good thing when the cat does your writing for you. This is an app called Fur Wall. It is for Macs, sad to say, but you use a Mac. What it does is watch the camera locally for a kitty cat, and when it sees a kitty cat on the keyboard, it drops those keystrokes so that she's not actually doing any writing at all. It's open source, on GitHub. It's a silly little app, but I thought you might enjoy it.
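The core logic Leo describes, suppressing keystrokes while a local camera model reports a cat, can be sketched as a simple gate. The class, its cooldown behavior, and the detector callable are my own illustration, not the actual app's code.

```python
import time

class KeystrokeGate:
    """Drop keystrokes while a detector says a cat is on the keyboard.

    `cat_detector` is any callable returning True when a cat is seen;
    in a real app this would be a local vision model on the webcam feed.
    """
    def __init__(self, cat_detector, cooldown=1.0):
        self.cat_detector = cat_detector
        self.cooldown = cooldown  # keep blocking briefly after the cat leaves
        self._last_cat_seen = float("-inf")

    def allow(self, now=None):
        """Return True if a keystroke arriving at `now` should pass through."""
        now = time.monotonic() if now is None else now
        if self.cat_detector():
            self._last_cat_seen = now
        return now - self._last_cat_seen >= self.cooldown

if __name__ == "__main__":
    cat_present = {"on_keyboard": False}
    gate = KeystrokeGate(lambda: cat_present["on_keyboard"], cooldown=1.0)
    print(gate.allow(now=0.0))   # no cat: keystrokes pass
    cat_present["on_keyboard"] = True
    print(gate.allow(now=0.5))   # cat detected: drop keys
    cat_present["on_keyboard"] = False
    print(gate.allow(now=1.0))   # still inside the cooldown window
    print(gate.allow(now=2.0))   # cooldown elapsed: pass again
```

On macOS the actual interception would need an event tap with accessibility permissions; the gate above is just the decision layer.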
Paris Martineau [02:01:39]:
I think what I need to get actually is one of those fake laptops or keyboards and just have her sit on it.
Leo Laporte [02:01:48]:
See, I don't think those work, because I think she's smart enough to know what you're paying attention to. That's why she's there.
Paris Martineau [02:01:54]:
No, you're right, she is. Because the thing that she's discovered in the last week is that again, as I've shown before, I use this insane system that looks at the bottom of the mask because I want to.
Leo Laporte [02:02:06]:
She's getting that.
Paris Martineau [02:02:08]:
And she.
Leo Laporte [02:02:09]:
She thinks you're petting it.
Paris Martineau [02:02:10]:
Tries to. She's figured out that if she sits on this key right here, it locks my computer.
Leo Laporte [02:02:16]:
Oh my.
Paris Martineau [02:02:16]:
The lock button. And she starts doing this when I'm in meetings. She's done it like three times this week.
Leo Laporte [02:02:23]:
Cats are such brats. They really are there. And they're smart. They're not conscious, but they are smart.
Paris Martineau [02:02:30]:
Well, so what I've learned this week is, if your cat locks your laptop while you're in a Zoom meeting, the Zoom meeting will carry on and people can still see you, even though your computer's locked.
Leo Laporte [02:02:42]:
You know, that's something to keep in mind too.
Paris Martineau [02:02:44]:
Something to know.
Leo Laporte [02:02:45]:
One other pick. This is datacenter.fm. If you want to know what it sounds like to be in a busy data center, just press the power-up button. You can increase the number of servers. You're not hearing it because I'm sparing you the sound. Increase the GPU load, increase the staffing, turn up the cooling, turn on the gas turbine generators, and you're going to find out what it's like to be inside a data center.
Leo Laporte [02:03:16]:
This is from.
Benito Gonzalez [02:03:16]:
Is that a sentence gauge? Is that a sentience gauge?
Leo Laporte [02:03:19]:
Well, I haven't done it, but apparently, if you turn it up high enough and run it long enough, the sentience gauge will hit. And I don't know what happens after that.
Paris Martineau [02:03:30]:
It is actually, honestly, a kind of calming sound. I've got it going on right now. Oh, my sentience is going up.
Leo Laporte [02:03:37]:
It's going up.
Paris Martineau [02:03:38]:
I've turned cooling off is what I did.
Leo Laporte [02:03:40]:
Oh, maybe that's the key the brain requires.
Paris Martineau [02:03:43]:
The key to sentience, it seems.
Leo Laporte [02:03:46]:
datacenter.fm. I think it's just a little art project, but it's kind of a fun one.
Benito Gonzalez [02:03:52]:
Local water drained.
Leo Laporte [02:03:57]:
Yeah, yeah, there is that. Oh, there's a dB meter. Local water drained. So we're now going to get a little hotter, and the sentience is going to go up because we don't have any cooling. And then pretty soon a containment breach, and it's all going to go to hell. Keep watching. I'm sure something will happen.
Jeff Jarvis [02:04:11]:
Just like watching Paradise.
Leo Laporte [02:04:13]:
Yes. Oh, what is Paradise?
Jeff Jarvis [02:04:15]:
Oh, it's very good.
Leo Laporte [02:04:17]:
Is it good? You like it? What is it?
Jeff Jarvis [02:04:19]:
It's a show about the destruction of the planet and a secret bunker town.
Leo Laporte [02:04:29]:
I want to watch that. Oh, that sounds good.
Jeff Jarvis [02:04:31]:
AI and quantum end up in there, and all kinds of things.
Leo Laporte [02:04:34]:
And there's one other thing. This is also for you, Paris. I know you're a movie buff. Have you ever wondered, for the movie you're going to go to, whether there's anybody in the theater? This only works with AMC movies, because I guess AMC somehow publishes this information online. It's called Empty Screenings. About 10% of AMC movie showings sell zero tickets.
Leo Laporte [02:04:58]:
This site finds them.
Jeff Jarvis [02:05:00]:
Oh, that's for me.
Paris Martineau [02:05:01]:
Dang. The AMC in Kips Bay is not doing well today.
Leo Laporte [02:05:06]:
So you enter your zip code. Riley Walz, nice job. You enter your zip code, and it'll find an empty theater near you.
Paris Martineau [02:05:13]:
Zero people at the Met Opera, Eugene Onegin, 2026. Oh, that's disheartening.
Leo Laporte [02:05:20]:
Oh, I'm sorry. AMC, Paris Martineau. That was three picks. I overdid it this week.
Paris Martineau [02:05:28]:
Well, it's good, because I've only got one pick, and it's a movie I'm gonna see tomorrow called The Python Hunt, produced by Lance Oppenheim, the director I like so much that we spoke about earlier. It's out Friday. It's a documentary about the Great Florida Python Hunt, which is, I believe, a weekend in Florida. I'll learn about it tomorrow. Here, let me get the description right. Every year, the Florida government invites the public to compete in an invasive python removal contest in the Everglades.
Paris Martineau [02:06:11]:
For 10 nights, an eclectic group of hunters confront the dangerous terrain, nocturnal creatures, and their own desires.
Leo Laporte [02:06:19]:
Wow.
Benito Gonzalez [02:06:20]:
This was a Simpsons episode
Leo Laporte [02:06:24]:
now.
Jeff Jarvis [02:06:24]:
Well, what was it?
Leo Laporte [02:06:26]:
You're going to see this in an empty AMC theater. Where?
Paris Martineau [02:06:28]:
No, I'm going to the premiere tomorrow.
Leo Laporte [02:06:32]:
Oh.
Paris Martineau [02:06:32]:
At the Village east and going to air Q and A with the director.
Leo Laporte [02:06:37]:
It'll be lovely fun. By the way, Christina Warren, one of our hosts on MacBreak Weekly, was at the ChatGPT 5.5 launch party last night, which Sam Altman put together with the help of ChatGPT. I'm sure it was weird. We will have a report on that next Tuesday. So neither of you went to the Met Gala, but at least you're going to something culturally important: the Python Hunt.
Jeff Jarvis [02:07:06]:
I did go see The Devil Wears Prada 2.
Paris Martineau [02:07:08]:
So how did you feel about how it represented Condé Nast?
Leo Laporte [02:07:14]:
You know what's funny about that? I think I saw a statistic that said 90% of the tickets sold for that movie are to women.
Jeff Jarvis [02:07:21]:
Yeah, well, I've discovered I haven't been to a movie.
Leo Laporte [02:07:24]:
No, I want to see it. I loved the first one.
Jeff Jarvis [02:07:26]:
Oh, yeah, you should. You should go anyway. But I've discovered that near me, there's half-price Tuesday.
Leo Laporte [02:07:32]:
Oh, first.
Jeff Jarvis [02:07:33]:
So it was old women.
Leo Laporte [02:07:35]:
Yes. Yeah, yeah. You got to go at 4:30.
Jeff Jarvis [02:07:37]:
But all day. All day Tuesday.
Leo Laporte [02:07:39]:
All day, half price.
Jeff Jarvis [02:07:40]:
Tuesday, including the popcorn, is half price.
Benito Gonzalez [02:07:43]:
What's half price?
Leo Laporte [02:07:43]:
You know what?
Benito Gonzalez [02:07:44]:
$10.
Jeff Jarvis [02:07:47]:
Yeah, for the one with the super sound. Otherwise it was $7.80 or something.
Leo Laporte [02:07:52]:
That's half price. That's so sad.
Jeff Jarvis [02:07:55]:
Yeah.
Leo Laporte [02:07:55]:
Oh my. All right, well, that's Paris's pick. What have you had? What do you have?
Jeff Jarvis [02:07:59]:
Well, there's just things to mention here. I want to mention that South Africa withdrew its AI policy after it was found to be written by AI. I just want to throw that in.
Leo Laporte [02:08:09]:
It's fitting.
Jeff Jarvis [02:08:11]:
I gotta mention. Did you see the story about the Pope and customer service?
Leo Laporte [02:08:15]:
No.
Paris Martineau [02:08:16]:
So I saw the headline.
Jeff Jarvis [02:08:17]:
Oh, it's superb. New York Times story. So the Pope was trying to get access to his bank account in Chicago. And he went through and gave them all the personal information.
Leo Laporte [02:08:27]:
Gave him the Pope.
Jeff Jarvis [02:08:29]:
And they said, you've got to show up in person.
Benito Gonzalez [02:08:32]:
Chicago.
Jeff Jarvis [02:08:33]:
I can't. And, well, finally he said, would it mean anything to you if I told you that I'm Pope Leo? And she hung up.
Leo Laporte [02:08:42]:
Oh, geez. Of course she did.
Jeff Jarvis [02:08:44]:
Of course she did. Of course she did. All right, a nice little trip down memory lane. Ask Jeeves, Ask.com, is dead. Gone forever. I didn't know it was still alive.
Jeff Jarvis [02:08:55]:
It's like that or Canadian.
Leo Laporte [02:08:56]:
Yeah, who knew, right?
Jeff Jarvis [02:08:58]:
I'm angry, as usual. The News Media Alliance, which is the cruddy lobbyist for the dying old media industry, is going after Common Crawl for no good reason, saying that they're allowing their precious content to be taken by AI companies while they're doing a crawl. And then finally, our beloved Internet Archive has released a wonderful book about vanishing culture and what's being taken down by these media companies that are trying to control history.
Leo Laporte [02:09:31]:
They're afraid that AI will train on their content if it's uploaded to the Internet Archive.
Jeff Jarvis [02:09:36]:
Yeah, exactly. They're blocking it, same as with Common Crawl. Yeah, right, right.
Leo Laporte [02:09:39]:
And Geez Louise. Yeah, yeah.
Jeff Jarvis [02:09:42]:
And finally, line 158. You may have to use Google Translate on this. I hope you can get to it. See if you can click on that, Leo.
Leo Laporte [02:09:49]:
A giant Gutenberg Bible page.
Paris Martineau [02:09:52]:
This is the sketchiest looking link you've ever put in here.
Leo Laporte [02:09:56]:
It's.
Jeff Jarvis [02:09:56]:
It's because it's off my email.
Leo Laporte [02:09:58]:
So should I translate it to English?
Jeff Jarvis [02:10:00]:
Yeah, translate to English.
Leo Laporte [02:10:01]:
So by the way I'm finding the translations have gotten much better.
Jeff Jarvis [02:10:05]:
Which by the way Google Translate is 20 years old this week.
Leo Laporte [02:10:08]:
Yeah. Happy birthday.
Jeff Jarvis [02:10:10]:
So this is the largest.
Leo Laporte [02:10:11]:
That's a page.
Jeff Jarvis [02:10:12]:
It's a printed page, the world's largest: 5 meters wide by 7.2 meters high. It was made by assembling 12 computer-milled wooden plates to recreate the printed form. The plates were first inked by hand. Then, to exert the necessary pressure on the exceptionally large printing form, a car drove slowly over it. For a second printing, the public also participated.
Leo Laporte [02:10:42]:
I hope it was visitors.
Jeff Jarvis [02:10:47]:
Well, for the second one, visitors walked onto the plates together. And it's Germany, or France, isn't it?
Leo Laporte [02:10:52]:
I know it's the.
Jeff Jarvis [02:10:52]:
It's Strasbourg. Well, it's gone back and forth a few times.
Leo Laporte [02:10:55]:
Yeah. Yeah.
Jeff Jarvis [02:10:56]:
So ironic to me that it's hung up in a Catholic cathedral in Strasbourg considering what Gutenberg wrought in the Reformation.
Paris Martineau [02:11:05]:
But I mean it all comes around eventually.
Jeff Jarvis [02:11:08]:
Wow.
Paris Martineau [02:11:08]:
I like that the description of it is the world's largest printed Bible page. Because they're not, you know, ruling out that there's a larger page that's been printed out.
Jeff Jarvis [02:11:20]:
Sears catalog. But yeah.
Paris Martineau [02:11:21]:
Yeah.
Leo Laporte [02:11:22]:
I think it's the largest printed page ever. I love it. That was actually printed with a car.
Jeff Jarvis [02:11:28]:
With a car?
Leo Laporte [02:11:29]:
Yeah, that's the car.
Paris Martineau [02:11:31]:
And some visitors, they call it Fiat
Leo Laporte [02:11:33]:
press, not letterpress, but it's the same thing. It does the same thing. That's amazing.
Paris Martineau [02:11:38]:
In a sense it wasn't inked by hand. It was inked by wheel and foot.
Leo Laporte [02:11:42]:
Yes. So it is the anniversary of the Gutenberg Bible. What is it? What's the anniversary? May 9th?
Jeff Jarvis [02:11:50]:
I don't know.
Leo Laporte [02:11:50]:
They say it's to celebrate the anniversary,
Jeff Jarvis [02:11:52]:
But it's made up. It's like Gutenberg's birthday. They just call it 1400. They're not sure; it could be four years one way or the other.
Leo Laporte [02:11:59]:
It's like Christmas. We don't know.
Jeff Jarvis [02:12:01]:
Yeah.
Leo Laporte [02:12:01]:
It's just a guess. Very nice. Very nice.
Jeff Jarvis [02:12:07]:
Yeah.
Leo Laporte [02:12:07]:
I don't know who does my translation. I think it's Kagi doing the translation.
Jeff Jarvis [02:12:11]:
Oh, okay. I use Google, of course.
Leo Laporte [02:12:12]:
Yeah. But I think it's quite. I think they've gotten really good. They used to be terrible.
Jeff Jarvis [02:12:17]:
Well, that's because of large language models. That's AI. Yeah, it changed. That's Transformers, and it started with Translate.
Jeff Jarvis [02:12:29]:
Well, we might as well go to this one too. What's new with Google Translate? Now you have a pronunciation tool that we've been asking for, which we use often on the show.
Leo Laporte [02:12:37]:
I can now say googa, right?
Jeff Jarvis [02:12:39]:
Well, let's see what it says. They point out that they've been using AI and machine learning in Translate since the beginning.
Leo Laporte [02:12:47]:
Yep.
Jeff Jarvis [02:12:47]:
Translate supports 95% of the world's population.
Leo Laporte [02:12:51]:
Wow.
Jeff Jarvis [02:12:52]:
More than 1 billion users use Translate each month. One trillion words a month.
Paris Martineau [02:13:00]:
That's a lot.
Jeff Jarvis [02:13:01]:
It is a lot of words, indeed.
Leo Laporte [02:13:04]:
Well, speaking of a lot of words, that's the end of this show. We're not sentient yet, but we are conscious of your time.
Jeff Jarvis [02:13:13]:
Says who? How do you know?
Paris Martineau [02:13:14]:
Are you sure? We can't prove it.
Jeff Jarvis [02:13:17]:
Nope.
Leo Laporte [02:13:17]:
Can't prove it. We can't prove it. That's true. All I know is my butt hurts.
Jeff Jarvis [02:13:22]:
That's consciousness.
Leo Laporte [02:13:23]:
Paris, for the first time ever, is sad that the show's over, because now she has to go back to work.
Paris Martineau [02:13:28]:
It's really upsetting, actually.
Leo Laporte [02:13:32]:
This was your break, I'm sorry to say. Paris writes for Consumer Reports and is working on a massive, very, very, very important piece, which we will talk about when
Jeff Jarvis [02:13:42]:
Do we know when it might come out?
Paris Martineau [02:13:45]:
I can't say.
Leo Laporte [02:13:47]:
Oh, it's that secret. It's very exciting.
Paris Martineau [02:13:50]:
There's just, you know, there are things I'm allowed to say and things I'm not allowed to say. And one of the things I'm not allowed to say is a published date.
Leo Laporte [02:13:58]:
All right, that's good to know. Anyway, Paris, great to have you. Thank you.
Paris Martineau [02:14:05]:
Great to be here.
Leo Laporte [02:14:06]:
And Gizmo. Jeff Jarvis, professor of journalistic innovation. He's now at Montclair State University and at SUNY Stony Brook. His book, Hot Type, which is just awesome (I read a pre-release version of it), will be out in August, and you can pre-order it right now at jeffjarvis.com. And it's not just
Jeff Jarvis [02:14:27]:
about the Linotype. It's also about PostScript.
Leo Laporte [02:14:30]:
So it goes from the geeky right up to modern times. We do this show every Wednesday, right after Windows Weekly. That's 2pm Pacific, 5pm Eastern, 11am Hawaii time.
Jeff Jarvis [02:14:43]:
After the booze. We always check in whether they're on the booze yet.
Leo Laporte [02:14:47]:
Yes, as soon as the whiskey segment begins, you know Intelligent Machines can't be far behind. That's kind of an inside joke for people who watch live. Anybody else is going to be puzzled by that. You don't have to watch us live, but you can. We stream it, of course, in Discord, but also on YouTube, Twitch, X.com, Facebook, LinkedIn, and Kick. We can't do TikTok. It's too complicated for us.
Leo Laporte [02:15:11]:
We're limited in our abilities. After the fact, on-demand versions of the show are at twit.tv/im. We've got audio and video at our website. There's also a YouTube channel dedicated to the video, a great way to share clips with everybody. You can also subscribe in your favorite podcast client, and if you do, leave us a great review and Paris will read it dramatically at some point.
Paris Martineau [02:15:36]:
Honestly, though, if you haven't reviewed the show before, go on Apple Podcasts and do it. Because the only person who has reviewed the show in the last five months was a real stinker about it, and I think that's terrible. We should do something about that.
Leo Laporte [02:15:53]:
I don't know. It costs us, by the way; some advertisers will not buy ads on a poorly reviewed show. They think it's real or something. So leave us a nice review. Help us out. I should also mention, I don't talk about it enough, but you can comment on any show. We don't pay attention to comments on YouTube and Twitch and elsewhere.
Leo Laporte [02:16:11]:
But we do pay attention to the comments in our Club TWiT Discord. Every show has a little forum section in the Discord. We also have an open-to-the-public forum called twit.community. I don't mention it enough, and I would like to, because it's a really great place to converse about every episode or anything else on your mind. That's open to all; just mention that you heard about it on Intelligent Machines and I'll be sure to get you into twit.community. We even have our own Mastodon instance. I am still a Mastodon fan.
Leo Laporte [02:16:41]:
I believe in the Fediverse. That's at twit.social, and that is also open to the public. And again, in both cases, you've got to say that you listen to TWiT for me to let you in. I want it to be TWiT listeners. Also, something weird happened on the Mastodon last week. I don't know how this happened; there must be a bug in Mastodon. I normally have to approve every account, but some AI-generated accounts got through without my approval, and I was notified by IFTAS that they are Russian bot accounts.
Leo Laporte [02:17:15]:
They were spreading Russian propaganda, using twit.social to reach the rest of the Fediverse, and I was shocked. I immediately got rid of the accounts. Thank you, IFTAS, for letting me know about it. That's the organization that monitors disinformation. I guess there was a bug, but unfortunately, because of that, I have now also turned on CAPTCHA. So there are all sorts of barriers to getting into twit.social, but believe me, it's worth it. You do have to say who you are. You have to say you listen to TWiT. You have to do a CAPTCHA, and you have to assert that you're over 18, thanks to various jurisdictions.
Jeff Jarvis [02:17:50]:
You have to pass an Intelligent Machines trivia question.
Leo Laporte [02:17:53]:
Yeah, we should do that. But really, all you have, you have
Jeff Jarvis [02:17:55]:
to answer the significance of sand.
Paris Martineau [02:17:58]:
Yeah, you do. And you have to, you know, cite some specific precedent.
Leo Laporte [02:18:03]:
I don't make it that hard. You could just say, let me in, Leo, and then I'll know you. That's all I really care about. But yeah, the bots, I don't know how they got in. I was really freaked out by that. I haven't seen any more, but I have to check every day now. Thank you, everybody, for joining us. Thank you, Paris.
Leo Laporte [02:18:17]:
Thank you, Jeff. Have a wonderful evening. We'll see you next week on Intelligent Machines. Bye. Bye.
Paris Martineau [02:18:24]:
I'm not a human being, not into this animal scene. I'm an intelligent machine.