Intelligent Machines 844 transcript
Please be advised this transcript is AI-generated and may not be word for word. Time codes refer to the approximate times in the ad-supported version of the show.
Leo Laporte [00:00:00]:
It's time for Intelligent Machines. Paris and Jeff are here. Our guest this hour, Jeremy Berman. He is a post-training researcher at Reflection AI, and his AI recently got the highest score on the ARC AGI test. How did he do it? What is post-training, and what's the future of AGI? All coming up next on Intelligent Machines. Podcasts you love, from people you trust. This is TWiT. This is Intelligent Machines with Paris Martineau and Jeff Jarvis. Episode 844, recorded Wednesday, November 5, 2025.
Leo Laporte [00:00:45]:
Poob has it for you. It's time for Intelligent Machines, the show where we cover the latest in artificial intelligence, robotics, and the smart little doohickeys surrounding us all. Look, I've got smart doohickeys everywhere in the house. Jeff Jarvis is here, professor emeritus of journalistic innovation at the Craig Narmack Graduate School of...
Jeff Jarvis [00:01:08]:
Newmark.
Paris Martineau [00:01:09]:
You almost had it, Leo.
Leo Laporte [00:01:10]:
I do. We made a deal that if I got to the end, we wouldn't play the jingle. Jeff is the author of The Gutenberg Parenthesis and Magazine, a history of magazines. And when does Hot Type come out? Because I'm loving it. June. I have to wait till June.
Jeff Jarvis [00:01:25]:
Oh, but tell me if you put anything up. Tell me if I got anything wrong.
Paris Martineau [00:01:28]:
It's got to be warm weather for hot type. Steph's going to have a beachside launch party. Right.
Leo Laporte [00:01:34]:
That wordsmithing comes to you from Paris Martineau, who is an investigative journalist at Consumer Reports. Hello, Paris. Bonjour, Paris. So we have a very interesting guest joining us. As we often do at the beginning of the show, we like to do interviews. You may remember when DeepSeek kind of changed the world with its inexpensive model that was kind of using new technologies to create better LLMs using reinforcement learning. Well, there is a company called Reflection AI that has just raised $2 billion to challenge DeepSeek, founded by two former DeepMind researchers. And we've got one of them with us today.
Leo Laporte [00:02:35]:
It's great to have Jeremy Berman here. He's a post training researcher at Reflection AI. Jeremy, welcome.
Jeremy Berman [00:02:42]:
Thank you. I'm not one of the founders.
Leo Laporte [00:02:43]:
Oh, you're not one of the founders. You're just a. A guy.
Jeff Jarvis [00:02:45]:
That's the next company he does.
Leo Laporte [00:02:47]:
That's the next one. Oh, that's right.
Jeremy Berman [00:02:49]:
Someone they pulled in.
Leo Laporte [00:02:51]:
Okay, good. So tell us a little bit about what post training research is.
Jeremy Berman [00:02:57]:
Sure. Yeah. So I guess to start, we should talk about what pre-training research is. Okay, so pre-training, that makes sense.
Leo Laporte [00:03:05]:
Yeah.
Jeremy Berman [00:03:06]:
Yes. Basically, it's what all language models that were used were doing, maybe pre-2024, mid-2024. And the idea is we had this incredible breakthrough, which is we can stuff the entire Internet into these deep neural networks, with the goal of the neural network mimicking the Internet. So it's like a big Internet document, right? So we train these deep neural networks to predict the next word that it sees on the Internet. And eventually it gets really good at doing that. And you can imagine a system that's really good at predicting the next word in a sentence has to have some sort of imbued intelligence to be able to do that. And so pre-training is the process of stuffing basically the Internet into a deep neural network and imbuing the intelligence of the Internet into that network. Some consider it kind of like compressing the intelligence of the Internet into a model.
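The next-word objective Jeremy describes can be sketched in a few lines. This is a toy illustration only: real pre-training fits a transformer over trillions of tokens, while here a simple bigram counter stands in for the neural network, and the corpus is made up.

```python
# Toy sketch of the pre-training objective: learn to predict the next token.
# A bigram count table stands in for a deep neural network (illustrative only).
from collections import Counter, defaultdict

def train_next_token(corpus_tokens):
    """Count, for each token, which tokens tend to follow it in the corpus."""
    following = defaultdict(Counter)
    for cur, nxt in zip(corpus_tokens, corpus_tokens[1:]):
        following[cur][nxt] += 1
    return following

def predict_next(model, token):
    """Predict the next token seen most often during training."""
    return model[token].most_common(1)[0][0]

# "Stuff the corpus into the model", then complete a document.
corpus = "the cat sat on the mat the cat ran".split()
model = train_next_token(corpus)
print(predict_next(model, "the"))  # "cat": the most frequent follower of "the"
```

The point of the sketch is the objective, not the architecture: the model is rewarded purely for matching what came next in its training data, which is why, as Jeremy says, the raw result is a document completer rather than an assistant.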
Jeremy Berman [00:04:05]:
But the problem is these models are actually not useful. So you have a pre-trained model, and it's just a document completer. It just goes next word, next word, next word, as if it's building a webpage on the Internet. So that's pre-training. And then post-training is the process of making that useful for humans and tasks. As an example, when you talk to ChatGPT, ChatGPT is a post-trained model, in that it knows what a user is, it knows what an assistant is, it's trained to be helpful. These are not things that come out of the box. An out-of-the-box language model just completes documents.
Jeremy Berman [00:04:40]:
And so post-training is then the process of turning that into something useful for certain tasks. And I would say post-training, maybe from 2023 to 2024, 2025, was about making it useful: this is how you should respond, you should respond in this way, we can imbue a personality. And this is all by showing it examples of what we want.
Leo Laporte [00:05:03]:
And in some respects this is what differentiates Anthropic from OpenAI from Google, is this post-training, right? Because essentially the models are all trained on roughly the same stuff.
Jeremy Berman [00:05:17]:
That was actually my assumption before I really got into the weeds, and we're doing pre-training and post-training at Reflection. That is the prevailing wisdom. I think that's actually not true.
Leo Laporte [00:05:27]:
Oh, interesting.
Jeremy Berman [00:05:28]:
The labs actually have quite different ways they pre-train their models. The general structure is the same, but the data they're training on, the mixtures of data, is actually quite different. For example, some models include certain types of code snippets, some models include certain types of data. As an example, I think Gemini has grown into the incredible model it is today because of an incredible pre-training effort at Google, which I believe is probably the strongest in the world right now. But yes, I think your general principle holds, which is post-training is where you give the model the personality, and it helps to have a very strong pre-trained model. But post-training is, yes, where Anthropic makes their models very good at coding, where OpenAI can make their models very good at reasoning. Yeah, these are things mostly done in post-training.
Leo Laporte [00:06:16]:
So as I understand it, and we've talked a lot about this on the show, you have a corpus of information, Common Crawl or a giant dump of pirated books, or the Internet as a whole, and you tokenize it. And then there's also some fine-tuning that goes on after the tokenization, right? Even some reinforcement learning. Is that what you're talking about with post-training, or is it even after that?
Jeff Jarvis [00:06:40]:
That's still pre.
Jeremy Berman [00:06:43]:
So I think actually the line is a bit blurry. It's really... And actually at Reflection we just called it training, because a lot of post-training...
Leo Laporte [00:06:52]:
Post pre, whatever, training. Yeah, right.
Jeremy Berman [00:06:55]:
And I think maybe you can think about it like: what is the objective that you're trying to teach the neural network? So in pre-training, that objective is: predict the next token. Right. And there are certain post-training objectives, like if you want to train it to be a helpful assistant, you generally have a data set of helpful user-assistant messages, and then you can post-train it to predict the next token. But then there's a whole separate area called reinforcement learning, which is fundamentally letting the model learn for itself, training the model on its own traces. And this is what DeepSeek R1 did. It brought this whole field into the next generation of actually having the model learn for itself. And so that is also bucketed in post-training. So I think generally the lines are a bit blurry, but it's helpful to think of what the objective is.
Jeremy Berman [00:07:46]:
So the objective in pre-training and some of post-training is next-token prediction, almost like memorizing the data set, imbuing the data set. And then there's another set, which is reinforcement learning, or on-policy learning, which is: I'm going to have this model generate a bunch of answers, I'm going to take the best ones, and then I'm going to feed them back into the training corpus until the model basically builds its own reasoning and thinking circuits. And you typically don't get that type of generalization and thinking for yourself when the objective is just predicting the next token. You really want the model to be generating thoughts on its own and then reinforcing on those good thoughts.
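The loop Jeremy describes, generate a bunch of answers, keep the verified best ones, and reinforce on them, can be sketched as follows. Everything here is a stand-in: the "policy" is a toy weighted sampler rather than a language model, and the verifier, answers, and update rule are invented for illustration, not Reflection's method.

```python
# Sketch of the reinforcement-learning loop: sample candidate answers,
# keep the ones a verifier accepts, and "train" on those good traces.
import random

random.seed(0)
answers = ["2+2=3", "2+2=4", "2+2=5"]
weights = {a: 1.0 for a in answers}   # toy policy: sampling weights per answer

def verifier(ans):
    """Reward signal: accept only answers ending in the correct result."""
    return ans.endswith("=4")

for step in range(50):
    # 1. Sample a batch of candidate answers from the current policy.
    batch = random.choices(answers, weights=[weights[a] for a in answers], k=8)
    # 2. Keep the best (verified) ones.
    good = [a for a in batch if verifier(a)]
    # 3. Reinforce on the good traces by upweighting them.
    for a in good:
        weights[a] += 0.5

best = max(weights, key=weights.get)
print(best)  # the policy converges on the verified answer, "2+2=4"
```

Only verified samples ever get upweighted, so the policy drifts toward answers it discovered for itself; that is the essential difference from imitating a fixed next-token target.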
Leo Laporte [00:08:23]:
I want to take a step back here, because your history is kind of interesting. You do have a master's in computer science, but you were an entrepreneur. You had two startups, part of Y Combinator's winter batch in 2019, and then said, you know, I kind of like this research side of this. I want to get more into the AI research side. You read Jeff Hawkins' book? We've interviewed Jeff Hawkins. In fact, we've got to get him on the show. He's a fascinating guy.
Leo Laporte [00:08:51]:
He's the guy who came up with Graffiti for Palm and is a neuroscientist. And he's written some really interesting books about how humans think, how our brains work compared to how machine brains work. And according to our research, that's kind of what inspired you to get into this research side. That's a...
Jeremy Berman [00:09:10]:
That.
Leo Laporte [00:09:10]:
So you're relatively new to this?
Jeremy Berman [00:09:12]:
Yes, I would say I've only been in this field for about a year and two months. Wow.
Paris Martineau [00:09:18]:
All of a year and two months.
Leo Laporte [00:09:21]:
I mean, this is a new field, so it's not the end of the world, but that's interesting.
Jeremy Berman [00:09:25]:
Yeah, yeah. So I had a startup. It's doing well. We have, you know, we have an office. Still around. And I read Jeff's book, and I could just get a sense that if we are able to figure out general intelligence, artificial general intelligence, it will be the most important thing that we do.
Leo Laporte [00:09:49]:
So you're working, you're working on AGI more than anything else.
Jeremy Berman [00:09:53]:
Yes.
Leo Laporte [00:09:53]:
This is your inspiration. And in fact, the reason you're here is because you took on the Arc AGI Challenge and achieved a remarkable goal. First place in 2024 and then broke.
Paris Martineau [00:10:09]:
Records again this year.
Leo Laporte [00:10:11]:
Yeah, 76%, 79%, 79.6%, which is remarkable.
Paris Martineau [00:10:17]:
Can you tell us a little bit about what the ARC AGI Challenge is and what you did in these cases?
Jeremy Berman [00:10:23]:
Yeah, so ARC AGI is like an IQ test for machines. And it's so simple that children can get 85 to 100% on the V1, but language models at the time, and this was late 2024, were getting like 4%, 5%. These were the best language models we had. And that was what really sparked my interest. So what it is, is a puzzle. You have grids, input and output grids, and there's a common transformation rule that you can apply to the input grids to get to the output grids. Again, simple children's puzzles.
Jeremy Berman [00:10:59]:
And the key is: given an input, can you generate the output grid? And children can do it. Everyone on this panel will get 100% on it. But the best language models in the world in 2024 were getting 4, 5, 6%. Totally not saturated.
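An ARC-style task has exactly the shape Jeremy describes and is easy to mock up. The transformation below (mirror each row) and the grids are made up for illustration; real ARC rules are richer, but the structure, a hidden rule shared by a few input/output pairs plus a held-out test input, is the same.

```python
# A toy ARC-style puzzle: a hidden rule maps each input grid to its output.
def mirror(grid):
    """The hidden transformation for this toy puzzle: flip each row."""
    return [list(reversed(row)) for row in grid]

# Training pairs: (input grid, output grid), all obeying the hidden rule.
train_pairs = [
    ([[1, 0], [2, 3]], [[0, 1], [3, 2]]),
    ([[4, 5], [6, 7]], [[5, 4], [7, 6]]),
]

# A candidate rule must explain every training pair...
assert all(mirror(inp) == out for inp, out in train_pairs)

# ...and is then applied to the held-out test input to produce the answer.
test_input = [[8, 9], [0, 1]]
print(mirror(test_input))  # [[9, 8], [1, 0]]
```

A human infers the rule from two examples at a glance; the benchmark asks whether a model can do the same on rules it has never seen on the Internet.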
Paris Martineau [00:11:13]:
Why do you think that is?
Jeremy Berman [00:11:15]:
So the reason was they were pre-trained and then post-trained, but not reinforcement-learned. And so you can kind of think of these models like stochastic parrots, with the objective of just predicting the next token: it's soaked up all of the knowledge of the Internet, and then some post-training data sets, but it hasn't learned to fundamentally think for itself. And ARC AGI requires you to be able to think on your feet, because you've never seen these challenges before. These are not on the Internet. You can't hack this test. You have to be able to generate on-the-fly thinking. This is a beautiful test. Francois Chollet created it in 2019, and it has really stood the test of time.
Leo Laporte [00:11:55]:
But then he's one of the founders of Zapier, by the way. And still.
Jeremy Berman [00:11:58]:
And still. Mike Knoop.
Leo Laporte [00:12:00]:
Oh, I'm sorry.
Jeremy Berman [00:12:01]:
That's okay. He's the other, the other guy.
Leo Laporte [00:12:03]:
Okay. And, by the way, they have daily puzzles on the ARC website. I'm taking the test right now and not doing well. Let's see.
Paris Martineau [00:12:14]:
Do you think the LLMs also host a puzzle podcast while they do this as well?
Leo Laporte [00:12:19]:
Yeah, right, exactly.
Jeremy Berman [00:12:21]:
That looks like a V2 puzzle, which is harder. And I'll, I'll get to that.
Jeff Jarvis [00:12:24]:
Oh, you're just giving them an easy out.
Leo Laporte [00:12:28]:
But you're right, these are simple. But this is not verbal, by the way, and one of the things the AIs, the LLMs we're used to, are very much about is verbal.
Jeremy Berman [00:12:38]:
But it's important to note that these are non-verbal but easy. And so this goes to Francois' thesis, which I agreed with at the time, which is: if language models are not trained on a thing, if it's out of their training distribution, they will not do well. And this is the missing generality, right? Like, humans are very good at on-the-fly thinking, learning new skills, learning a new board game, learning a new puzzle. Language models at the time were not, because they were trained to soak up information; they were not trained to think for themselves fundamentally. And so I was able to get the top score in 2024 by basically generating a ton of Python programs. I had a language model generate a ton of Python programs, and then I would have basically a Python executor test those programs.
Jeremy Berman [00:13:22]:
And then the most promising programs would be put in a revision loop, where I would have a language model incrementally evolve the programs to be better and better. But what's important about that is it got first place, and then two weeks after that, o1 came out, OpenAI's o1 model, which is the first model that, using what became the DeepSeek paradigm, was taught via reinforcement learning to fundamentally think for itself. And it smoked my score. I was first for two weeks, and then it totally smoked my score. And this was a light bulb moment. I was like, this is really the new paradigm here. We have cracked the code for how to basically bring the general into distribution.
Jeremy Berman [00:14:06]:
And that is through reinforcement learning: having the model try out different answers, taking the best ones, reinforcing on those, and doing that in a loop.
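Jeremy's 2024 pipeline, generate candidate Python programs, execute them against the training pairs, and revise the most promising, reduces to a scoring-and-selection skeleton. In the sketch below the "LLM-generated" candidates are a hard-coded list of small functions and the task is a made-up list-reversal puzzle; only the score-and-select structure reflects the approach described.

```python
# Skeleton of the generate-test-select loop for ARC-style program search.
def score(program, train_pairs):
    """Fraction of training pairs the candidate program reproduces."""
    hits = 0
    for inp, out in train_pairs:
        try:
            if program(inp) == out:
                hits += 1
        except Exception:
            pass  # crashing candidates simply score zero on that pair
    return hits / len(train_pairs)

# Toy task: the hidden rule reverses the list.
train_pairs = [([1, 2, 3], [3, 2, 1]), ([4, 5], [5, 4])]

# Stand-ins for LLM-generated candidate programs.
candidates = [
    lambda g: sorted(g),   # wrong rule
    lambda g: g[::-1],     # correct rule: reverse
    lambda g: g[1:],       # wrong rule
]

# Rank candidates by training-pair fit; in the real pipeline, the top
# scorers would go back to the language model for another revision round.
best = max(candidates, key=lambda p: score(p, train_pairs))
print(best([7, 8, 9]))  # [9, 8, 7]
```

The executor gives an objective signal (does the program reproduce the examples?), which is what makes the revision loop possible without any human grading.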
Jeff Jarvis [00:14:15]:
So what sets Reflection AI apart?
Jeremy Berman [00:14:18]:
So, given this: the founders of Reflection were at DeepMind when DeepMind was really pushing forward the reinforcement learning paradigm. And the best example is AlphaGo. So first, actually, let's take AlphaGo V1. They took games of the best humans, and then pre-trained the model basically to soak up the information from those games and then play Go. And this worked really well, and they got to human-level performance. But what changed the game completely is that then they said, you know what, screw the human data, let's teach these models to do it from scratch.
Jeremy Berman [00:14:59]:
And then we're going to reinforcement-learn on their good ideas. Have it basically self-play, explore the field, use no human traces, and see what happens. And this is what led to AlphaGo handily surpassing the best humans. And what is often cited is move 37, which was a move that humans at the time thought was crazy, but turned out to be this brilliant move that no human had really thought to play. And this is the power of actually letting the models think for themselves. And so at Reflection, we have language models, and we have this reinforcement learning paradigm, which we know produces superhuman intelligence. So why don't we combine them? And that is kind of the principle, I would say, that Reflection has been founded on: extreme expertise in AlphaGo-style reinforcement learning, plus expertise in language models.
Jeremy Berman [00:15:49]:
And that is kind of our origin story.
Jeff Jarvis [00:15:53]:
Jeremy, I read an interesting paper from a colleague at Stony Brook University two weeks ago. I'll make this as quick as I can. They took given authors, and they had writers summarize their work, and then they had AI summarize their work, and then they judged that against two pools of people, one who were experts and one who were just plain folk. And the AI was fine for the plain folk, but the experts turned up their noses. Then they trained the models on the entire oeuvre of the authors, and then the AI summaries beat the human writers' summaries with everybody, including the experts. And this really struck me. It struck me as a writer that I think there are opportunities in publishing that we can't see, in terms of creating a Jeff bot. But it also struck me about the specialization of that. And so what I want to drill in on a little bit is this notion of general intelligence, and the effort at that, versus what we know is the case, which is that AI can do specific tasks, like AlphaGo, really frigging well. And so I wonder: is seeking the general a distraction from all kinds of new frontiers that could be met at a superhuman level, but not a general level? Did I make any sense?
Jeremy Berman [00:17:30]:
Yes. This is currently, I would say, the question of the day, which is: are we building spiky superintelligences? Right, where we have this data set, it's clean, and we know for a fact, it's just a fact, that if you have a distribution of data, deep neural networks can learn that distribution and do it basically as well as we need them to. Right. But the problem is, you can almost imagine it's overfit to that training data. So if you then give it data that it hasn't seen before that's sufficiently different, it will do less well. And so this is the spiky AI paradigm. And then the other paradigm is: we figure out how to build generality, how to build basically the skill that builds other skills into these machines.
Jeremy Berman [00:18:13]:
And that you would probably see as superhuman at everything. And the latter is what I'm most interested in. But I think people at Reflection are mixed. People generally are mixed; it's not clear which one will win. I have a theory for why I think we are going to be able to inject generality into these models, but it's not certain.
Leo Laporte [00:18:37]:
You may remember I referred you to a video which you never watched about how you can take an overfit model, reduce the parameters and get it to actually generate an algorithm instead of trying to brute force the solution.
Paris Martineau [00:18:52]:
Leo, Is this the seven hour YouTube video you sent in?
Leo Laporte [00:18:55]:
This was a shorter one, but it was a very interesting theory. And if it's true, if you could have an AI generate an algorithm for solving, as opposed to a simple truth table of, well, 2 + 2 equals 4, then 2 + 3 equals 5, that would be much closer to an actual intelligence. That's the kind of thing we're talking about, right?
Jeff Jarvis [00:19:21]:
Tell us about your theory.
Leo Laporte [00:19:22]:
What's fascinating is that there's a lot of room for innovation here, right? There's a lot of places you can.
Jeff Jarvis [00:19:31]:
Improve.
Leo Laporte [00:19:33]:
The training, the post training.
Jeremy Berman [00:19:35]:
Yes, I agree. And I think at the end of the day, reasoning with sufficient creativity is the engine of generality. And I think right now what we're doing is we have these data sets that are very particular to a certain set. So we have a math data set, and we can train it to be very good at math, and we can train it to be very good at reasoning in math. The problem is, it's almost like the reasoning and math circuits that we've trained together are fused, in that when we then ask it a question about something else, it generalizes a little bit, but not as much as we would hope. And this is what I'm most interested in: figuring out why exactly that is happening. It's almost like when you give a language model a task that it hasn't been trained on, it's like an adversarial attack, because you're hitting weights that it memorized in pre-training. This is actually what happens in ARC. You have these PhD-level machines making obvious blunders on, again, questions that are very simple, and it's because it's hitting weights that were pre-trained, and when it hits these pre-trained weights, it has not learned to think for itself with these circuits.
Jeremy Berman [00:20:36]:
And so it ends up guessing. This is where it hallucinates. It just goes down weird paths. And, you know, there's one world where, okay, then let's just build a data set for literally everything. But then we get new data, and the world is full of new data. Right. Our brains are reasoning engines for a reason.
Jeremy Berman [00:20:57]:
Probably evolution knew that we're always going to encounter new things. And so I think the right answer is we need to build the right environments and build the right training paradigm such that the models internalize reasoning for all domains in a general way.
Jeff Jarvis [00:21:13]:
I want to hear about the lunchroom. You said there's disagreement even within reflection. Yes, I want to hear about those lunchroom conversations.
Leo Laporte [00:21:19]:
You want that, by the way. You don't want everybody to say, oh yeah, we know how to do it.
Jeff Jarvis [00:21:25]:
That's a very theoretical level at which to express disagreement. How does that, how does that discussion go?
Jeremy Berman [00:21:31]:
Well, the people at Reflection are very smart. It's very fun. Lunches are very fun; we always have lunch together. How does it go? Well, you know, I'll toss out a few theories, they'll tell me why I'm wrong, I'll tell them why I think I'm right, and then we'll go from there. Yeah, it's fun.
Jeremy Berman [00:21:56]:
Is it contentious? Probably not. I think it's a lot of paper sharing and saying, look, I told you. I think there's a really good paper, it's technical, from Thinking Machines that just came out, called On-Policy Distillation. What they did is basically they took a language model; they have a teacher model and a student model. And usually the way reinforcement learning works is you have the model try to, let's say, answer a math problem 100 times, you hope that one of those 100 is correct, and then you can take the reasoning it used for that correct one, and you can train it on its own thinking process. But that's very inefficient. And also, what if the model doesn't even get to the right answer? You've just wasted 100 traces. Well, on-policy distillation is a way for the teacher to actually, for each next token, for each next word, tell whether: oh, I think this was a good next word.
Jeremy Berman [00:22:51]:
I think this was a bad next word. So you get a very dense signal, so you don't have to do as many rollouts, and the student model learns quicker. This is just something that came up in a lunch discussion recently, which I think is interesting, because there's another debate, which is: do they even need to learn from their own traces, or is it okay that we have expert traces, and they can just soak up the expert traces and all of a sudden kind of imbue the intelligence from the expert? I'm not in that camp. I think fundamentally we need to reinforce its own circuits. But yeah, I would say that's a window into our lunch conversations.
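The dense per-token signal Jeremy contrasts with sparse end-of-answer rewards can be illustrated in miniature. Everything here is a toy: the "teacher" is a hand-written probability table rather than a real model, and the tokens are invented; only the shape of the signal (one score per student token instead of one score per full rollout) reflects on-policy distillation.

```python
# Toy illustration of on-policy distillation's dense signal: the student
# generates its own trace, and the teacher scores every token in it.
import math

# Teacher's probability of each next token given the previous one (toy table).
teacher_probs = {
    ("2+2", "="): 0.9,
    ("=", "4"): 0.95,
    ("=", "5"): 0.01,
}

def per_token_signal(trace):
    """Teacher log-probability of each token the student actually produced."""
    signals = []
    for prev, tok in zip(trace, trace[1:]):
        p = teacher_probs.get((prev, tok), 0.001)  # unseen pairs score low
        signals.append((tok, math.log(p)))
    return signals

student_trace = ["2+2", "=", "5"]  # the student's own (wrong) rollout
for tok, logp in per_token_signal(student_trace):
    print(tok, round(logp, 2))
# "=" gets a high score and "5" a very low one: the student learns exactly
# which step went wrong, rather than needing 100 full rollouts to find out.
```

With a sparse reward, this whole trace would just be "wrong"; the per-token view pinpoints the single bad step, which is why fewer rollouts are needed.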
Jeff Jarvis [00:23:25]:
So then when you want to say, I'm going to show.
Leo Laporte [00:23:27]:
Hold on just a second. We're talking to Jeremy Berman. He is a researcher and a very interesting person at Reflection AI post training research. And you don't win the ARC AGI benchmark, but you did very well. In fact, the new benchmark results come out in a few weeks. Do you know how you did yet?
Jeremy Berman [00:23:50]:
Well, my model is currently at the top: on ARC v1, 79, and on ARC v2, almost 30. I think that's going to stand as the top score, but we will see.
Leo Laporte [00:24:08]:
We'll find out in a few weeks. I just want to know, do you get a trophy?
Jeremy Berman [00:24:13]:
You do get a trophy. And instead of a trophy, you get a lot of cash if you're able to get above a certain threshold; I think it's something like 75 or 85% on ARC v2, under a certain budget. And it has to run on the Kaggle notebook, which is a small amount of compute.
Leo Laporte [00:24:29]:
So you do this on Kaggle. That's interesting. And you have a budget, for how much time or money, or...
Paris Martineau [00:24:36]:
Budget for 120 evaluation tasks.
Leo Laporte [00:24:40]:
By the way, I have failed miserably at V2.
Paris Martineau [00:24:43]:
You are a machine.
Leo Laporte [00:24:46]:
Well, my problem is, I understand. So these are the examples, and this is then what you're supposed to do, and you're supposed to see what the transforms are. And it should be very easy. But this one is harder, because the transform looks like it's a certain number of steps in color, and I just can't quite figure it out. Maybe this wouldn't be good if you were colorblind, let's put it that way. So, yeah, I've tried twice and failed twice.
Jeremy Berman [00:25:13]:
Also, just one distinction: I'm using language models via their API.
Leo Laporte [00:25:20]:
Interesting.
Jeremy Berman [00:25:21]:
And so there are two separate ways to do it. There's a way where you run your experiment on the Kaggle notebook, which is extremely small in compute. And then there's the way that I do it, which is to use an API for the models. And those are different. So I'm on the public leaderboard, technically. And what I'm describing, on Kaggle notebooks, is called the private leaderboard, and those are the ones that are eligible for the prize. What I'm doing is not actually eligible for the prize, but I get $10,000 of compute per run, and my goal is really just to see where the frontier is.
Jeremy Berman [00:25:50]:
That's my biggest goal: just, what are the best language models capable of?
Leo Laporte [00:25:53]:
You do this in Python?
Jeremy Berman [00:25:55]:
Yes, well, the program's written in Python. But for my ARC v2 solution, which I submitted before coming to Reflection, actually, instead of generating Python programs, I generate the programs in natural language, so they're actually just bullet points. And then I have sub-agents, or sub-instances, verify whether the plain-English instructions actually work on the grids.
Leo Laporte [00:26:20]:
That's cool.
Jeremy Berman [00:26:21]:
It's basically taking advantage of the fact that thinking models exist now. They didn't when I was originally doing this. And thinking models are actually much better at reasoning on these types of things.
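The natural-language-program idea can be sketched too. Here the candidate "program" is a list of plain-English bullet points, and a tiny hand-written interpreter stands in for the verifying sub-instances; the instruction names, the interpreter, and the grids are all invented for illustration, since in the real pipeline a language model both writes and checks the bullet points.

```python
# Sketch of natural-language programs for ARC-style grids: a candidate is
# plain-English steps, and a verifier checks it against the training pairs.
def follow(instructions, grid):
    """Hypothetical interpreter applying plain-English steps to a grid."""
    for step in instructions:
        if step == "reverse each row":
            grid = [row[::-1] for row in grid]
        elif step == "swap colors 1 and 2":
            grid = [[{1: 2, 2: 1}.get(c, c) for c in row] for row in grid]
    return grid

# Candidate bullet-point program proposed for the puzzle.
candidate = ["reverse each row", "swap colors 1 and 2"]

train_pairs = [
    ([[1, 2]], [[1, 2]]),   # reversed to [[2, 1]], then swapped to [[1, 2]]
    ([[0, 1]], [[2, 0]]),   # reversed to [[1, 0]], then swapped to [[2, 0]]
]

# A verifier sub-instance checks the candidate on every training pair.
verified = all(follow(candidate, inp) == out for inp, out in train_pairs)
print(verified)  # True: the bullet points reproduce every training pair
```

The design point is that the program lives in natural language, which a thinking model can reason about directly, while verification still grounds it against the concrete grids.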
Leo Laporte [00:26:32]:
So you're vibe coding your solution in effect.
Jeremy Berman [00:26:35]:
Yes. That's a good way to think about it.
Leo Laporte [00:26:38]:
I'm sorry, I didn't mean to interrupt you, Jeff. I just wanted to reintroduce Jeremy for people who joined.
Jeff Jarvis [00:26:42]:
Yeah, that's good.
Paris Martineau [00:26:43]:
So.
Jeff Jarvis [00:26:43]:
So when you have that lunchtime discussion and you say, aha. No, I'm going to show you you need compute to do that. I'm guessing that allocation of compute resources is the key to be able to try these things. What's the currency that you then work in? Or does the company say, this is what we need to try next, and you're trying to convince the company that that's what the company should do? Or are there five different experiments going on? How does innovation go on in a company like this?
Jeremy Berman [00:27:19]:
So I guess I can speak at a high level; I can't talk about the specifics. But actually, taking a step back, the goal of Reflection right now is to build the best, or a frontier, open-weight model. And so our number one goal is actually just to make sure that we are doing the things that we know work really well. Basically, you know, excellence in execution. So that's our number one goal. And it's of course great to do research, and we do do research, and I think the research we're doing is really interesting. But our number one goal is making sure that we are able to produce a model that is really, really good and open-weight.
Jeff Jarvis [00:27:54]:
So talk about the open-weight part. That also separates you from some of the others. There's a mission there. For our audience, why don't you talk about that for just a minute?
Jeremy Berman [00:28:03]:
Yeah. So right now the frontier language models are closed, which means that if I wanted to run them, if I have a business, I can't just download the model and then run it on my own computers. I have to send my request to OpenAI or Anthropic, and from a business use case, this is challenging for private workloads or for regulatory reasons. And so if I'm a government, right, and I want to use my own models on my own hardware, then I can't use frontier language models. And also there are various regulations, or reasons why I wouldn't want to use Chinese models, and the Chinese models are currently overwhelmingly at the top, the best open-weight models in the world. And again, open-weight means you can download them; I can click a button, download them to my computer, and actually run them on my own computers. And so we've found this gap in the market where there are a lot of enterprises that really want to run their own models for various reasons, but if they do want to do this, they find themselves using old models, or models that are not at the frontier.
Jeremy Berman [00:29:01]:
So we're really going after that segment. And then there's a second component, which is just philosophical, which is that it's really great to be able to contribute to the community, right? Like, if we're able to build a great model, we can give it to researchers, and researchers will be able to run experiments; they'll be able to fine-tune it for scientific discovery, for things they are interested in. And so from a business perspective, it's interesting. But from a research perspective, it's inspiring.
Leo Laporte [00:29:28]:
You say we will know we've achieved AGI when we can't create tasks that are easy for humans but hard for AI. Tasks like this ARC AGI puzzle, which turns out to be hard for a human too, but that's just me. More importantly, I'm reading your Substack, where you talk about how you solved this: humans are able to extend reasoning from one discipline to another. We have, you say, consistent reasoning that transfers across domains. So an economist who's good at logic can become a programmer when they learn to code, because it's the same reasoning domain. It's not the same knowledge domain, but it's the same reasoning domain.
Leo Laporte [00:30:17]:
You say that this is something that AIs currently aren't good at. They have dead reasoning zones; they cannot cross disciplines that way. And is the training the solution to this? And also, have I misstated what you were saying?
Jeremy Berman [00:30:34]:
I would say just generally, no, no. It's, for like, OpenAI's models are very generally good. But that's because OpenAI has really great training processes and they fill.
Leo Laporte [00:30:44]:
In the dead zones. They know where the dead zones are and they fill them in. But we want a solution that automatically doesn't create dead zones.
Jeremy Berman [00:30:53]:
Right, I agree. And that is what I'm most interested in doing. And I think the way you can think about it is: we pre-train these models, and knowledge from pre-training is stored like a knowledge web. Right. Where there's not a cohesive or coherent causal relationship between things, in that you haven't tried out your ideas in the real world if you're a language model that's only been pre-trained. Right. You haven't gotten feedback on how your brain works.
Jeremy Berman [00:31:23]:
You've just been soaking up the information. So you have this, what I call a knowledge web, where you have semantic understandings of things. Right. You can connect two things, but you are missing this causal understanding of the world because you just haven't tested your ideas out in the real world. And so I think what reinforcement learning does is it slowly shifts knowledge from this web to more of a cohesive knowledge tree. And I think the dead zones are places where we haven't yet merged or molded these weights to be coherent with the real world. And I think this might come from just a different training paradigm altogether, where we're able to maybe shrink pre-training, massively extend the scale of post-training, or some other ideas that I'm not exactly allowed to speak about right now.
Paris Martineau [00:32:13]:
Right.
Leo Laporte [00:32:14]:
But that's what we're doing. Right. We're trying stuff until we figure it out. But what's good is that it sounds like we now have a goal. We're not sure if this is going to work, but we have a pretty good idea of what we want to accomplish, and now we can try different things to accomplish it.
Jeremy Berman [00:32:31]:
Yes. And I think this is also why it's important that we have a lot of open weight models that are strong because it's very useful to be able to get in there and actually try out your ideas.
Leo Laporte [00:32:40]:
Yeah. If you want to see what his English language solution looks like, Jeremy has published a few examples on his Substack. It's kind of amazing, really, that you can write a prompt like this to solve something like ARC-AGI.
Jeremy Berman [00:33:01]:
Yes. Yeah. Models have come a long way. And they will continue. They will never be worse than they are now.
Leo Laporte [00:33:07]:
Yeah, I think it's an exciting time. And, you know, what we don't know is whether we'll solve this. Right. I mean, it's terra incognita. We've not been here before.
Jeremy Berman [00:33:21]:
Yes.
Leo Laporte [00:33:22]:
Do you feel fairly confident, though, that there is an answer, there's a solution out there and that we can teach machines to reason?
Jeremy Berman [00:33:28]:
I think I'm very confident that we will be able to build spiky superintelligence. We have a data set; we can train the model to be extremely good at that data set. I think it's more likely than not that in the next 10 years we'll have new ideas that will lead to true general reasoning, and that spiky is not the solution.
Leo Laporte [00:33:49]:
Spiky means that there are edge cases where it's just going to hallucinate. It's just not going to give you the good answer. You want it to be more continuous. You want. Yeah.
Jeremy Berman [00:33:58]:
Yes. And a good example is ARC-AGI-3, which is now actually a video game. And there are levels. Yeah, it's like an Atari game. But each level adds a new objective, a new twist. And the best language models, frontier language models, cannot do it at all. And it's fun. You can play the game yourself.
Jeremy Berman [00:34:17]:
Over time, you'll learn and you'll understand how it works. The best language models totally fail at this. And I think this is a good distillation of what I mean when I say we definitely do not have general intelligence, if language models are struggling to even pass the first level. I don't think a language model has even passed the first level of a real ARC game.
Leo Laporte [00:34:35]:
Leo's struggling to pass the first level.
Jeff Jarvis [00:34:38]:
So Yann LeCun says that language models have kind of hit their apotheosis, and he's one of those who looks at world models and other things as the next paradigm. And then we had Karen Hao, on her book Empire of AI, a few weeks ago, and she was talking about the controversy between those who think that scale is the answer and those who think that, no, smarter training and different paradigms are the answer. Just at a high-level philosophical view: are you LLM side still, or do we have to do something new? Are you scale side still, or something else? Where do you see that future building?
Jeremy Berman [00:35:21]:
I am more confident than not that language models will get us there. And I am also confident that as we continue to scale, the systems will be smarter and smarter. But I think it's important that we're not necessarily scaling in pre-training, but that it's post-training where we're scaling. I think generally models that are bigger are better, as long as they're trained appropriately. But I'm actually not exactly sure if that's necessary. I think it's very possible that we will figure out reasoning and the models will be no bigger than they are today. Or maybe we'll figure it out, and then the models will be able to build models that are much more efficient, that are the size that they are today.
Jeremy Berman [00:36:02]:
I don't think there's any physical reason why they can't be.
Leo Laporte [00:36:05]:
This is the problem, though, that I have with Yann LeCun and Karen Hao. And they're in different categories, but in general, with the speculation of, well, that's probably not going to work. We don't know, right? This is what's.
Paris Martineau [00:36:16]:
We don't know that it's going to work either.
Leo Laporte [00:36:18]:
We don't know it's going to work.
Jeff Jarvis [00:36:19]:
We don't know it's an investment risk.
Leo Laporte [00:36:22]:
But we've got to try every avenue and have the smartest people like Jeremy thinking. I love the lunch; I would love to be in that lunchroom, to get these smart people together thinking about it and trying stuff. Until you try it all and see what works and what doesn't work, we just don't know. So speculation from Yann LeCun that, oh, LLMs have come to the end of the. It's just speculative. We don't know.
Jeremy Berman [00:36:49]:
Here's, maybe I can steel man him. And I actually want to be careful because I'm sure I'm not going to do a good job. So maybe let me steel man an argument that I think does make sense for people that have that perspective, which is: fundamentally, neural networks learn the distribution they are trained on, which means you give it data and it will learn that distribution, but it will not be able to generalize outside of that distribution. So Francois generally says this, right? Or maybe a variant of this. And this is true, but I actually don't think it's very hard to fit all of reasoning into that body. And all you have to do is fit reasoning into distribution, and that is it. Because reasoning is the engine that builds all knowledge. From simple axioms, you can deduce almost everything. And so it doesn't need to be in distribution if it's deducible through coherent reasoning. And so if we have coherent reasoning and the ability to have creativity and taste about what to reason about, I don't see why it is the case that language models will not be able to discover new things.
Jeremy Berman [00:37:51]:
And discovering new things is to me the heart of general intelligence.
Leo Laporte [00:37:54]:
Yeah, love it.
Jeff Jarvis [00:37:56]:
So talking to us is probably like the question I'm about to ask: what's it like when you go and have Thanksgiving with the family? They say, what are you doing, Jeremy? How detailed do you try to get?
Leo Laporte [00:38:08]:
You're still doing that little computer thing that you were doing.
Jeff Jarvis [00:38:11]:
Or maybe it's just like talking to us right now. I don't know.
Leo Laporte [00:38:13]:
It's probably very similar. Yeah.
Jeremy Berman [00:38:16]:
No, no. What happens is, you know, my mom is like, you really need to clear your plate. And I'm like, Mom, I'm working on AGI. Maybe, maybe the lawyers can clear my plate.
Leo Laporte [00:38:29]:
So I think I understand completely why you said, I'm gonna do this, because it is probably the most interesting and exciting thing that humans are doing right now. And yes, it may be a dead end. We don't know. But you gotta try. And if you get there. Well, you're not worried that we'll all be dead, are you? You're not a. You're not.
Jeremy Berman [00:38:53]:
No, no, I'm not.
Leo Laporte [00:38:56]:
You're not. You're not with, what's his name. Eliezer Yudkowsky says if we get there, we're all dead. No. Okay, good. All right.
Jeremy Berman [00:39:05]:
It's reasonable to assume that they will be around as dangerous as nuclear weapons. And that's pretty dangerous.
Leo Laporte [00:39:11]:
That's more dangerous than I would have thought.
Jeremy Berman [00:39:13]:
Okay, well, you know. If we are able to do what I'm describing, right, actually build general intelligence that runs at a reasonable efficiency, this means that everyone on the planet has basically an Einstein in their pocket. Right. So obviously there's more to building a bomb than just Einstein's brain. But there's not much more to a lot of dangerous things if you have, you know, an army of Einsteins working for you for not so expensive, you know, for maybe $100,000.
Jeremy Berman [00:39:41]:
So I think it's on the order of risk as nuclear weapons. But I don't think it is more than that.
Leo Laporte [00:39:48]:
Has Reflection put any models up on Hugging Face or anywhere for us to play with? Or are these all internal still?
Jeremy Berman [00:39:55]:
No, they're internal, but we. I don't want to say a time. It's a goal.
Jeff Jarvis [00:40:00]:
It's a goal.
Leo Laporte [00:40:01]:
Well, yes. Reflection AI: building frontier open intelligence. Imagine going to lunch with the smartest people in the world.
Jeff Jarvis [00:40:10]:
Jobs open there, folks.
Leo Laporte [00:40:11]:
Yeah. How is it that we reason? What is it that we do? And you're right in the middle of the most exciting thing to happen, I think, in human life. It's great to have you, Jeremy. Thank you. Thank you for helping us.
Jeff Jarvis [00:40:27]:
Thanks, Jeremy.
Leo Laporte [00:40:29]:
Neanderthals understand what Thanksgiving Day table. Yes.
Jeremy Berman [00:40:33]:
Thank you very much for having me. And I just. Mom, if you're watching, that was a joke.
Paris Martineau [00:40:38]:
He's gonna clear his plate.
Leo Laporte [00:40:41]:
Jeremy, I will clean, and then we'll have you wash the dishes after. Yeah, I don't know why I'm talking like that. Thank you, Jeremy.
Jeremy Berman [00:40:48]:
Spot on.
Paris Martineau [00:40:51]:
Here in the room.
Leo Laporte [00:40:53]:
I know. Because I'm a dad. I know. Thank you, Jeremy. Take care.
Jeremy Berman [00:40:56]:
Thank you. See ya.
Leo Laporte [00:40:58]:
We will continue with Intelligent Machines in just a little bit. Got some AI news. Paris Martineau. Jeff Jarvis. You thinking about a career change now? I am. What's the.
Paris Martineau [00:41:09]:
Becoming a philosophy major.
Leo Laporte [00:41:10]:
Yeah. There's no future in podcasting if you.
Jeff Jarvis [00:41:13]:
Can'T do the thing that he called simple.
Leo Laporte [00:41:15]:
I. I can't even do that little.
Jeff Jarvis [00:41:18]:
I can't.
Jeremy Berman [00:41:18]:
I don't.
Jeff Jarvis [00:41:21]:
Did you try.
Leo Laporte [00:41:22]:
Little kids can do.
Jeff Jarvis [00:41:24]:
Did you try V1?
Leo Laporte [00:41:25]:
No.
Jeff Jarvis [00:41:26]:
Okay.
Paris Martineau [00:41:26]:
Well, I think trying to do the thing while you're interviewing someone on a podcast is probably a little different than actually trying it.
Leo Laporte [00:41:33]:
I could do it with one.
Jeff Jarvis [00:41:34]:
You two were trying to give him an easy out.
Paris Martineau [00:41:36]:
I was.
Leo Laporte [00:41:38]:
Thank you, Paris. Thank you. All right, we'll have more in just a bit. Our show today, brought to you by ThreatLocker. Now, it doesn't take a great brain to understand that we are in a world of hurt, and business ransomware is just killing us. But there is a solution. ThreatLocker can prevent you from becoming the next victim because it's Zero Trust. ThreatLocker makes Zero Trust easy.
Leo Laporte [00:42:02]:
Zero Trust is the way. ThreatLocker's Zero Trust platform takes a proactive, and this is the key, deny-by-default approach to security. It blocks every unauthorized action unless it's explicitly approved. If it's not approved, it can't do it. Which protects you not just from known threats, but from completely unknown threats, zero-days, things nobody's ever heard of, because you didn't give them permission to do anything bad. That's why companies that can't afford to be down for even one minute trust ThreatLocker. Like JetBlue uses ThreatLocker.
Leo Laporte [00:42:39]:
The Port of Vancouver uses ThreatLocker. ThreatLocker shields them, and can shield you, from zero-day exploits and supply chain attacks, while providing complete audit trails for compliance. That's a nice side effect of this. You know exactly who did what, when, and who couldn't do something, who was blocked. Right. That's all built into the Zero Trust platform. As more cyber criminals turn to malvertising, this is something that should scare every business, because it's almost impossible to keep your employees using a browser from clicking a malicious link.
Leo Laporte [00:43:14]:
Malvertising means you need more than traditional security tools. Attackers create convincing fake websites. They impersonate popular brands, you know, AI tools we all know, or software applications. They promote them through social media ads. They hijack accounts and they put it up on X and Bluesky. And then the bad guys, they're so clever, use legitimate ad networks. They buy ads to deliver this malware through legitimate sites, sites your employees visit every day. That means anyone browsing on a work system is going to be exposed to this malvertising.
Leo Laporte [00:43:53]:
And the problem is your traditional security tools often miss these attacks because they use fileless payloads, they run in memory, they exploit trusted services, they bypass typical filters. This is why you need ThreatLocker's innovative Ringfencing technology. It strengthens endpoint defense by controlling what applications and scripts can access or execute. If not explicitly permitted, they can't run. That contains potential threats. Even if the malicious ads get to the user on that laptop, in your office, in the lunchroom, even if they get to the device, they can't run, they can't do any harm. ThreatLocker works in every industry. It supports PCs and Macs. It provides great support from the US, 24/7. And of course, as part of this platform, you get comprehensive visibility and control.
Leo Laporte [00:44:51]:
Ask Jack Senisap, Director of IT Infrastructure and Security at Redner's Markets. He says, quote: When it comes to ThreatLocker, the team stands by their product. ThreatLocker's onboarding phase was a very good experience. They were very hands-on. ThreatLocker was able to help me and guide me to where I am in our environment today. Get unprecedented protection quickly, easily and cost effectively with ThreatLocker. Visit threatlocker.com/twit to get a free 30-day trial and to learn more about how ThreatLocker can help mitigate unknown threats and ensure compliance. That's threatlocker.com/twit. We thank them so much for their support of Intelligent Machines.
Leo Laporte [00:45:33]:
Back to the show. Jeff Jarvis, Paris Martineau. Paris, you're young. Do you ever think that maybe you'd give up this journalism thing and go into AI research?
Paris Martineau [00:45:43]:
Nope. I briefly had a. I thought about the hypothetical today, because I saw that Anthropic is hiring for two, like, AI writer roles, I guess, for their website. I found out about this because some journalist was posting a screenshot of the salary range, which is like 225K to 350K, and I was like, that would be nice. But then I was like, would I want to not be a journalist and instead write a corporate blog for Anthropic? And the answer was no.
Leo Laporte [00:46:19]:
I did this when I was about your age, actually.
Paris Martineau [00:46:22]:
I thought, you know, anthropic briefly.
Leo Laporte [00:46:24]:
No, I thought to myself, sort of. It was the anthropic of the 90s. I thought to myself, I can't be a DJ for the rest of my life. That's a.
Paris Martineau [00:46:34]:
Okay, I'm so sorry. Anthropic of the 90s was working as a DJ?
Leo Laporte [00:46:39]:
No, no, no. I was working as a DJ, and a guy came to me who was doing a startup, a computing startup, and said, look, why don't you. You clearly know this stuff. Why don't you come work for me? You could work for a startup. I'll make you an investor in the company and we can do it together. You'd be like the third or fourth employee. The idea was, it was called Paracomp. It was parallel computing for science.
Leo Laporte [00:47:06]:
And it was probably 90 or 91, somewhere around there. So this was, you know, like when everything was starting to get going. And I thought, this is great. Here I am, 32, I'm going to get out of this radio business, which has no future, and I'm going to do a startup.
Jeff Jarvis [00:47:23]:
You were right about that.
Leo Laporte [00:47:24]:
And I. So about three months in. You know, this was like a real startup. I remember driving in a truck with all the office furniture hanging off the side of it. I'm holding on for dear life.
Paris Martineau [00:47:36]:
Oh, my God.
Leo Laporte [00:47:37]:
We, you know, it was like sawhorses with doors across them for tables, and painting the office and everything. It was a real startup. But three months in, it was like, I'm doing shtick. I'm standing up in my cubicle, you know, telling stories and stuff. And I thought, I'm a performer. I am not. I cannot do this. I'm a bard.
Leo Laporte [00:48:00]:
And I quit and went back to radio. Sad to say, the company eventually sold to Macromedia, then became part of Adobe, and is doing quite well. And I probably would have been a millionaire by now. But that's okay, because I was about.
Paris Martineau [00:48:12]:
To say it only took four years for it to sell to Macro Find.
Leo Laporte [00:48:18]:
Oh, you looked it up, did you just.
Paris Martineau [00:48:20]:
I just looked it up right now. Five years.
Leo Laporte [00:48:23]:
You're so funny. Anyway, it was a great experience, but it also taught me that you got to follow your heart, not your wallet.
Jeff Jarvis [00:48:30]:
You want to hear my path not taken?
Leo Laporte [00:48:32]:
Yeah.
Paris Martineau [00:48:32]:
Yeah.
Jeff Jarvis [00:48:33]:
So I'm finishing my freshman year at Claremont Men's College. It still was.
Paris Martineau [00:48:38]:
I wasn't even allowed to look at a woman for 40.
Jeff Jarvis [00:48:40]:
Not. No, it wasn't.
Leo Laporte [00:48:41]:
That's a great school.
Jeff Jarvis [00:48:42]:
Part of the reason I left.
Leo Laporte [00:48:43]:
I didn't know you went to Claremont. That's a great school.
Jeff Jarvis [00:48:45]:
First year. Then I transferred to Northwestern. Foolishly.
Paris Martineau [00:48:48]:
Were you allowed to look at women there?
Jeff Jarvis [00:48:50]:
Yes.
Leo Laporte [00:48:51]:
That's why Abby got into the women's college. Scripps. Scripps?
Jeff Jarvis [00:48:55]:
Pitzer.
Leo Laporte [00:48:56]:
No, not Pitzer. Scripps, I think.
Jeff Jarvis [00:48:58]:
Yeah. Yeah.
Leo Laporte [00:48:59]:
And so it's the whole. It's the whole college complex. And I was so excited. I said, oh, God, that's going to be great. Go. And she didn't like it. She went to Bard instead.
Jeff Jarvis [00:49:07]:
Yeah. Yeah. So summer before I moved, I wanted a job. And there was this internship, political internship. And you go and. And somebody sent me and said, you should go and interview for this. And I go. And you're going to go to all the Democratic press conferences and events, and you're going to record things.
Jeff Jarvis [00:49:27]:
You're going to record them for CREEP. The Committee to Re-elect the President.
Leo Laporte [00:49:31]:
No.
Paris Martineau [00:49:32]:
For who? Sorry.
Leo Laporte [00:49:34]:
Nixon's campaign arm in 19. This would be the '72 campaign.
Jeff Jarvis [00:49:40]:
Right. Yeah. I see.
Leo Laporte [00:49:41]:
And it was kind of. I don't know why they thought this was a good name. The Committee to Re-elect the President. CREEP.
Paris Martineau [00:49:50]:
You could have worked on CREEP for Dirty Dick.
Leo Laporte [00:49:54]:
Dirty Dick. Dirty Dick's creep. Did you not take the job?
Jeff Jarvis [00:49:58]:
No. No, no, no, no.
Leo Laporte [00:49:59]:
Because you're a good. A good Democrat.
Jeff Jarvis [00:50:01]:
Yes.
Leo Laporte [00:50:02]:
Wow.
Paris Martineau [00:50:03]:
Temporarily. Was this the time where it was Committee to Reelect Nixon? So, yes, it was.
Leo Laporte [00:50:09]:
I was a kid at the time. I was.
Jeremy Berman [00:50:11]:
Oh, yeah.
Jeff Jarvis [00:50:11]:
Rub that in. Yeah.
Leo Laporte [00:50:12]:
I was campaigning for McGovern at the time. I was going door to door in Daly City, the ticky-tacky houses, because I wanted McGovern. So, Paris, congratulations on your new mayor.
Paris Martineau [00:50:25]:
Speaking of the one time a week I'm allowed to not wear a burqa. And I think that's beautiful.
Leo Laporte [00:50:33]:
You're not allowed to be political, so we're not going to.
Paris Martineau [00:50:36]:
Yeah. No, that was a joke. Purely based on the strange reaction from a lot of local news organizations.
Leo Laporte [00:50:44]:
People are acting like, yeah, I will say you're going to be a sharia.
Paris Martineau [00:50:47]:
Hottest. The hottest ticket in New York City. The thing you can't find is a copy of today's New York Post cover. Have you guys seen this?
Leo Laporte [00:50:55]:
No.
Jeff Jarvis [00:50:55]:
Oh, yeah. If you go to my feed, it's in there.
Paris Martineau [00:50:58]:
I would say. I know Jeff has seen is.
Jeff Jarvis [00:51:02]:
Hold on.
Paris Martineau [00:51:02]:
It's a Post cover that says "The Red Apple," and it's a photo of Zohran holding up the hammer and sickle. And it says, "On your Marx, get set, go! Socialist Mamdani wins the race for mayor." And Zohran supporters are buying it up en masse, because it's kind of a banging cover.
Jeff Jarvis [00:51:22]:
Just put it in the discord. You can show it.
Leo Laporte [00:51:24]:
Oh my God. It is a banging cover. Holy cow.
Paris Martineau [00:51:28]:
Yeah.
Leo Laporte [00:51:30]:
This is Rupert Murdoch's New York Post. Of course.
Jeff Jarvis [00:51:32]:
Eat it, Rupert.
Leo Laporte [00:51:33]:
On your Marx, get set, Zo. Well, he is. He says, I'm a socialist.
Jeff Jarvis [00:51:41]:
Democratic socialist.
Paris Martineau [00:51:42]:
He's a democratic socialist. It's different.
Leo Laporte [00:51:45]:
Okay. He doesn't believe that the means of production should be in the hands of the proletariat. Oh, good. Okay.
Paris Martineau [00:51:51]:
Correct.
Leo Laporte [00:51:54]:
I, I think, you know, it's a great experiment with the largest city in.
Jeff Jarvis [00:51:57]:
The United States, just period. I watched him the day before the election on, on Ari Melburn and on Morning Joe and he parries anybody's questions. He's just, he's just amazing.
Leo Laporte [00:52:09]:
It's nice to see occasionally somebody who's eloquent and intelligent, because most politicians seem to be somehow unable to make a simple statement.
Paris Martineau [00:52:23]:
So it's worth noting that this is, I believe, I mean, it's got to be, the first mayor of New York City that met his wife through Hinge.
Leo Laporte [00:52:34]:
Oh, well, there's something win for all.
Paris Martineau [00:52:37]:
Of us out here.
Leo Laporte [00:52:38]:
Something. We don't know who Michael Bloomberg was meeting on Hinge. But anyway, that's good.
Paris Martineau [00:52:43]:
No Michael Bloomberg Field.
Leo Laporte [00:52:45]:
If we're being Bloomberg. Anyway, enough politics. Let us.
Jeff Jarvis [00:52:51]:
You forgot New Jersey. I got a new governor.
Leo Laporte [00:52:53]:
Congratulations.
Jeff Jarvis [00:52:54]:
The first woman.
Leo Laporte [00:52:55]:
First woman governor in both New Jersey and Virginia.
Jeff Jarvis [00:52:57]:
We have. And it showed how absolutely flawed polling is and how wrong it is. How did they say that she was going to lose? I was nervous. We all thought it was going to be close as hell.
Leo Laporte [00:53:07]:
It wasn't.
Jeff Jarvis [00:53:08]:
13 points.
Leo Laporte [00:53:09]:
Yeah.
Jeff Jarvis [00:53:09]:
Ridiculous.
Leo Laporte [00:53:10]:
Yeah. Here it is. We were talking about Eliezer Yudkowsky. If Anybody Builds It, Everyone Dies.
Jeff Jarvis [00:53:21]:
Drives me crazy.
Leo Laporte [00:53:22]:
Here is a reply, a rebuttal, from Cosmotron, Ben Goertzel's Substack. He says he's known. Yeah. What does it mean? You have no idea.
Paris Martineau [00:53:36]:
I just think it sounds great to me. I'm sure it means something.
Leo Laporte [00:53:38]:
I'm sure it means something.
Jeff Jarvis [00:53:39]:
What's. What's the stuff that you get when we hit the Singularity?
Leo Laporte [00:53:44]:
Computronium computer?
Jeff Jarvis [00:53:46]:
Related to Computronium?
Leo Laporte [00:53:47]:
I don't know, but that would be a good name for a blog as well. He says he's known Yudkowsky since the 1900s. Goertzel started an AI company 25 years ago, Webmind. They were trying to build AGI in 2000, and even then Yudkowsky was trying to convince them to stop, or to think about AGI safety.
Leo Laporte [00:54:13]:
What's funny is, and a number of people pointed this out, at the same time as he says we've got to stop, he also says AGI is the most important thing on the planet.
Jeff Jarvis [00:54:22]:
Yes. So.
Leo Laporte [00:54:24]:
Which is.
Benito Gonzalez [00:54:26]:
That's.
Jeff Jarvis [00:54:27]:
That's his whole shtick, right? He so believes in it that if it's not done the way he says, then he endorses thermonuclear war to destroy those who are trying to do it the wrong way.
Leo Laporte [00:54:44]:
I mean, you could say, as our guest did, that AGI would be the equivalent of the atomic bomb. But actually using the atomic bomb to destroy research is not perhaps the best recommendation for your thesis, Eliezer. So Goertzel says: Eliezer is right about one thing. We cannot know with certainty that AGI will not lead to human extinction. But the leap from uncertainty to everybody dies is a tremendous failure of imagination about both the nature of intelligence and our capacity to shape its development. I don't know.
Leo Laporte [00:55:29]:
Are you worried? Paris, you seem to be the most worried about this of the three of.
Paris Martineau [00:55:34]:
Us, about AGI and the impending doom.
Leo Laporte [00:55:37]:
I mean, you're the one who's going to have to put up with it. We're going to be. Jeff and I will be long gone, probably before there's AGI.
Paris Martineau [00:55:45]:
I think there are a million other things to worry about before we.
Leo Laporte [00:55:48]:
I agree.
Paris Martineau [00:55:49]:
I think that's AI's ability to accelerate humanity's already natural tendency to careen towards like global thermonuclear disaster or general, like.
Leo Laporte [00:56:01]:
Exacerbating bigotry or face recognition or.
Paris Martineau [00:56:07]:
Bad thing here.
Leo Laporte [00:56:08]:
Yeah.
Paris Martineau [00:56:09]:
This technology seems uniquely predisposed to make.
Leo Laporte [00:56:14]:
Things worse, did you see the Atlantics piece about Common Crawl? Alex Reisner writing the. We interviewed, of course, Common Crawl, the company quietly funneling paywalled articles to AI developers. It's a, it's a, it's a hit piece on Common Crawl.
Jeff Jarvis [00:56:31]:
It's very much a hit piece. And so I emailed Rich Skrenta, the head of Common Crawl, in the morning, and he woke up to my.
Leo Laporte [00:56:40]:
Message saying he was our guest a few months ago. Yeah.
Jeff Jarvis [00:56:42]:
Who put it. Oh, sorry, I just put it in our chat. I'll put it in the next one. So, "Who put a stick up the Atlantic's rear end?" was my question. And he had multiple back and forths, but he said that the article was already written and the view was already set. Journalists do this more than they admit. And he accuses them of lying and other things.
Jeff Jarvis [00:57:03]:
So Rich wrote. Rich wanted to kind of ignore it, but he wrote a response, in which he says.
Leo Laporte [00:57:08]:
He spoke with Rich twice during the reporting on this story.
Jeff Jarvis [00:57:11]:
Yeah. And the guy kind of ignored it.
Leo Laporte [00:57:13]:
Yeah. So having spoken to Rich Skrenta, I kind of think Common Crawl is a good thing.
Jeff Jarvis [00:57:22]:
Common Crawl was started because only Google had a crawl.
Leo Laporte [00:57:25]:
Right.
Jeff Jarvis [00:57:25]:
This was a crawl that enabled others. 10,000 academic papers cite Common Crawl. It's been useful for that for almost 20 years. And then it's open. It's open. Anybody can use it. It's like Wikipedia. Anybody can use it.
Jeff Jarvis [00:57:36]:
Right. Grok can make Grokipedia. Do you condemn Wikipedia because of that? Because it's open? No. You just recognize what happened. So now the Atlantic view is, oh, evil AI came along and they didn't stop and they did this, and this is terrible. And they're consorting with the devil, so they must be the devil.
Leo Laporte [00:57:52]:
The devil.
Jeff Jarvis [00:57:53]:
And then the other complaint here is, and Common Crawl's put this list out, you have a lot of media companies, starting with the New York Effing Times, who were insistent on being taken down. Now, Common Crawl does not go behind paywalls. It does not sign in as users. It takes things that are free and open on the web. And as Rich says, if you don't want the world to read it, don't put it on the Internet.
Leo Laporte [00:58:15]:
Right.
Jeff Jarvis [00:58:15]:
And so that's what they capture.
Paris Martineau [00:58:18]:
They put it on the Internet behind a paywall.
Jeff Jarvis [00:58:20]:
No, no. Paris, they do not go behind the paywall. This is not stuff behind the paywall. It's the stuff that is open and visible; that's all they crawl. They do not go behind paywalls. And so the problem then becomes that these companies want to take things down post facto. They want to go back and say, take us out of prior crawls, which is difficult.
Jeff Jarvis [00:58:43]:
And they're trying to do it, but it's not easy, because those crawls are data sets. But no, they do not go behind paywalls, Paris.
Paris Martineau [00:58:52]:
So you're saying that the headline from the Atlantic is factually inaccurate.
Jeff Jarvis [00:58:56]:
Yes.
Leo Laporte [00:58:57]:
Interesting.
Jeff Jarvis [00:58:58]:
What the Atlantic alleges is that in certain cases, if you open an article before you get the paywall challenge, the computer is fast enough that it can see some stuff. But this is common crawling behavior.
Paris Martineau [00:59:14]:
If you click the stop loading button before.
Jeff Jarvis [00:59:16]:
But they don't do those tricks. This is just how crawlers work. This is how Google gets stuff too. It's how crawlers work. And they're just doing the same thing all crawlers are doing. But the Atlantic is coming along because it has a dog in this hunt and it wants to demonize AI and demonize training of AI. This is a hit piece, pure and simple.
Leo Laporte [00:59:38]:
It's a hit piece, and it won't be the last, from many outlets. There's definitely a schism now between haters and believers, and there are not many people in between. I think the three of us represent a kind of reasoned middle.
Jeff Jarvis [00:59:54]:
We represent that debate because Paris and I disagree about this, but it's a healthy debate that we have. Yeah, the larger question to me that's really interesting, and I talked about a paper here a few weeks ago, is that 10 times as many reputable information sites as disinformation sites block all of the crawlers. Now, that's what's happening. So no longer does Common Crawl include the New York Times at all. Well, what does that then do? If people are going to use AI, and AI gets filled with disinformation like the web is filled with disinformation, what are we doing to the information ecosystem of society? Do we as journalists have some obligation?
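The blocking Jeff describes is done with a robots.txt file that names each crawler's published user agent (CCBot is Common Crawl's, GPTBot is OpenAI's). A minimal sketch with Python's standard-library parser, using a hypothetical news-site robots.txt:

```python
from urllib.robotparser import RobotFileParser

# Hypothetical robots.txt for a news site that blocks AI crawlers
# but still allows everyone else, including search engines.
# CCBot and GPTBot are the real published crawler names.
ROBOTS_TXT = """\
User-agent: CCBot
Disallow: /

User-agent: GPTBot
Disallow: /

User-agent: *
Allow: /
"""

parser = RobotFileParser()
parser.parse(ROBOTS_TXT.splitlines())

# A well-behaved crawler checks this before fetching anything.
print(parser.can_fetch("CCBot", "https://example.com/article"))      # False
print(parser.can_fetch("Googlebot", "https://example.com/article"))  # True
```

A site that opts out this way drops out of future Common Crawl snapshots while staying visible to search, which is exactly the trade-off being debated here.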
Paris Martineau [01:00:34]:
Are you arguing that all companies then have a moral obligation to make all of their work available for free?
Jeff Jarvis [01:00:39]:
No, what I'm arguing is we need a discussion between the AI companies and the content companies, which includes mutual benefit, an immediate negotiation here, a sane negotiation. And this is why I'm arguing for an API for news, where we can go to the AI companies and say, okay, let's talk. Let's talk about payment, let's talk about brand, let's talk about links, let's talk about placement and use, and have that discussion so that we can try to be included. The other thing I'm arguing, Paris, is that I'm seeing again and again and again commercial brands that are dying to be part of AI, because they want AI to link to them, they want AI to call upon them. And we're seeing responsible news brands cut off AI. Well, that's cutting off their nose to spite their face. And just like the early days of the web, and just like the Leistungsschutzrecht in Germany, when news companies tried to say, you're stealing our stuff. And Google said, fine, then we won't scrape you.
Jeff Jarvis [01:01:36]:
And they said, well, no, please scrape us again because we need you, we need the eggs. Old joke. And so I think that what we're going to find at some point is that authors and publishers and news publishers are going to have second thoughts about their demonization of AI and say, oh crap, everybody's using it and we're not there. And so what do we do? And the strategy they've had in the US is we're going to sue you and we're going to get legislation against you. The strategy in Norway, as I always point out, is no, let's make our own LLMs and let's deal with this at a straightforward level and let's get access to the technology. Let's have a good discussion.
Leo Laporte [01:02:16]:
You know, it's interesting.
Jeff Jarvis [01:02:17]:
How does society end up better off rather than worse off in the end?
Leo Laporte [01:02:20]:
Darren Okey's positing something I think is pretty important in our Discord, which is that news probably isn't important to training AI, because news, like fish, is only a welcome guest the first day or so. There's a lot more other information out there. And I think the authors of books, for instance, maybe have more of an argument than the New York Times.
Jeff Jarvis [01:02:47]:
But they also have more of a need to be called upon. One thing that was found with the Google Books project is that when all these books were scanned, it increased sales of backlist books, because people didn't know about them.
Leo Laporte [01:03:00]:
Right.
Jeff Jarvis [01:03:01]:
It wasn't there. So.
Leo Laporte [01:03:04]:
Well, one of the things that I think people both love and hate about AI is AI-generated video. It's really interesting to see the reaction of people, in fact sometimes the same person, and I'll include myself: on the one hand, thinking, hey, Sora and playing with it is really cool and interesting, and at the same time, God, I don't want to see yet another AI video. We're gonna get a chance to really test this during the holiday advertising season. Google has created a new AI ad. Coca-Cola's created a new one.
Leo Laporte [01:03:38]:
Here's the Google ad. Planning a quick getaway? Just ask Google. I won't put the audio on it because I don't want to get taken down. But it doesn't look like it's an AI ad. It looks like it's a stuffed animal.
Paris Martineau [01:03:50]:
Kind of does look like an AI ad, though. The motion isn't right for stop motion and not right for smooth, either.
Leo Laporte [01:03:58]:
It's too smooth. Good. Yeah. But I think, you know, probably people will go, oh, well, it's just an animated turkey.
Leo Laporte [01:04:15]:
The Google ad doesn't bother me, maybe because it's an animated turkey. The Coca-Cola ad I hated. Have you seen this? It is creepy.
Paris Martineau [01:04:25]:
I was watching the World Series, the other night, the final game, and of course there were ads, and I was like, wow, I never see ads anymore. Oh, wait, no. The one time I see ads is on my podcast, where my two podcast dads show me ads. There's another one.
Leo Laporte [01:04:41]:
Paris, look at this ad.
Jeff Jarvis [01:04:42]:
Is it advertising? Fascinating.
Leo Laporte [01:04:44]:
So Santa arrives as he did last year. Coca Cola did this last year.
Paris Martineau [01:04:48]:
So.
Leo Laporte [01:04:49]:
It looks terrible. It's more AI than last year.
Paris Martineau [01:04:52]:
And a team of a hundred people worked on this as well, in addition to the AI, if I recall correctly.
Leo Laporte [01:04:56]:
Well, one of the stories is that Coca-Cola said, hey, isn't this amazing? And we used fewer people.
Jeff Jarvis [01:05:03]:
The way to the end.
Paris Martineau [01:05:04]:
100 is fewer.
Leo Laporte [01:05:06]:
Yeah.
Jeff Jarvis [01:05:06]:
Yeah. Oh, yeah. Coca Cola, of course, is associated with our sense of Santa Claus.
Leo Laporte [01:05:12]:
Yeah.
Jeff Jarvis [01:05:13]:
Here's the AI Coca Cola Santa Claus, which I find interesting because it carries it through.
Leo Laporte [01:05:18]:
Yeah. The Santa that we know is. Yeah, there's AI Santa, and he looks just like the one in the ad.
Paris Martineau [01:05:27]:
I think it's notable that despite all of this progress we've seen in AI-generated video, these multibillion-dollar companies can't use it to make an ad that we don't all kind of agree sucks.
Leo Laporte [01:05:43]:
Well, Paris, ads, you just don't see enough of them. During the World Series, you were probably seeing the best ads, by the way.
Paris Martineau [01:05:51]:
I did not pay attention to the ads.
Leo Laporte [01:05:53]:
Yeah. Those are the best ads. The super bowl, same thing. Those are the best ads of the year because that's the most expensive ad real estate of the year.
Jeff Jarvis [01:06:00]:
But I was talking earlier today with Jason. There's an ad I saw the other day about a mutual fund and there's a young woman who's at work and then she's talking to an agent and then she's walking down the street.
Jeff Jarvis [01:06:12]:
And I swear it's AI but how.
Leo Laporte [01:06:15]:
Do you know, right?
Jeff Jarvis [01:06:17]:
You don't know. Now, I'll bet Paris, you're seeing a lot more AI ads than.
Paris Martineau [01:06:22]:
Of course. I don't see any ads. The idea of me knowing enough about an ad I've seen to describe it is crazy.
Jeff Jarvis [01:06:30]:
Because we see them a hundred times.
Leo Laporte [01:06:32]:
Hey, lady, you're in an ad supported medium now. This is your life.
Paris Martineau [01:06:37]:
No, my primary. My salary comes from Consumer Reports, which accepts no ads.
Leo Laporte [01:06:41]:
Well, I was thinking of this show.
Paris Martineau [01:06:43]:
This show is great as well. But we are also partially member-supported.
Jeff Jarvis [01:06:47]:
They're not AI ads.
Leo Laporte [01:06:48]:
No, no, that's true.
Paris Martineau [01:06:49]:
And ads about AI.
Jeff Jarvis [01:06:52]:
This week, he wanted companies to put their mascots, their advertising mascots, up in Sora. Imagine what's going to happen. So, my joke.
Leo Laporte [01:07:00]:
You can, by the way. And I did this. You can now put your animals, your pets, even your stuffed animals into Sora. You can add character cameos, which is great. It's so much fun.
Jeff Jarvis [01:07:11]:
So you will not understand this gag.
Leo Laporte [01:07:13]:
Yes, it's great.
Paris Martineau [01:07:15]:
Well, I don't want to see the weird stuff people would do to Gizmo.
Leo Laporte [01:07:18]:
Well, you don't have to do that. Here I am with my kitty cat.
Jeremy Berman [01:07:23]:
Patience, partner. The night is young.
Leo Laporte [01:07:25]:
You're stealthy. I actually scanned Rosie, our kitty cat, for this video of me and Rosie hunting mice. It looks just like her.
Paris Martineau [01:07:36]:
Okay, that's a little compelling.
Jeremy Berman [01:07:37]:
I see him.
Leo Laporte [01:07:38]:
You like it?
Jeff Jarvis [01:07:39]:
Gizmo, you're gonna be a star.
Leo Laporte [01:07:41]:
You're gonna be a star, Gizmo.
Paris Martineau [01:07:44]:
I doubt, I doubt that they will be able to get her spots right. They never do.
Leo Laporte [01:07:49]:
Well, you know, the way this worked, I took literally a four-second video of her running away from me, because I was trying to take a video of her, and it's pretty good. Her handle, and I don't know if other people can use it, I think you can, is chieftwit.whiskersno. I don't know why. Whisker, W-H-I-S-K-E-R-S-N-O. So I don't see any reason not to put my cat's cameo out there and make that public, no permission needed. Wow.
Jeff Jarvis [01:08:19]:
The cat is sure to say, rosie.
Leo Laporte [01:08:21]:
Can I make you public? Lisa's been doing some stuff with her too. She does some cat training videos now. But that looks just like her. I think it's pretty amazing what they can do.
Jeff Jarvis [01:08:33]:
Don't jump, Rosie. I hate heights.
Leo Laporte [01:08:36]:
That's her look, by the way, when she's talking. I don't like her voice. Here, I'll play her voice. I don't like her voice at all.
Jeff Jarvis [01:08:43]:
Her Sora voice?
Leo Laporte [01:08:44]:
Yeah. Where is it? Oh, I guess it's not coming out. Yeah, it's a fakey voice. But I didn't know how to make her voice anything but that.
Paris Martineau [01:08:58]:
I don't know.
Jeff Jarvis [01:08:58]:
So, Paris, have you ever heard of the LiMu Emu?
Leo Laporte [01:09:02]:
Oh, she doesn't know advertising.
Jeff Jarvis [01:09:04]:
You do.
Leo Laporte [01:09:04]:
You do, right, Paris? When I say Liberty, Liberty, Liberty, what do you think of?
Paris Martineau [01:09:09]:
Yes, I know. Liberty.
Leo Laporte [01:09:12]:
Liberty Biberty. Liberty Biberty. The LiMu Emu is also Liberty Mutual.
Jeff Jarvis [01:09:18]:
So. So my response when Samuel asked me.
Leo Laporte [01:09:21]:
The other day, she said, what would network television do without insurance companies? Half the ads are for insurance companies.
Jeff Jarvis [01:09:27]:
So what I want is a snuff film of the LiMu Emu. That's what I want to see.
Leo Laporte [01:09:33]:
Sicko.
Jeff Jarvis [01:09:34]:
I'm sick of the.
Leo Laporte [01:09:35]:
But that's what. See, this is why the parent company is not going to put their mascot up, because that's what's going to happen.
Paris Martineau [01:09:43]:
Kill the LiMu Emu.
Leo Laporte [01:09:45]:
Yeah, yeah. Lisa tried to make.
Paris Martineau [01:09:48]:
Who do you want to do the killing, Jeff?
Jeff Jarvis [01:09:49]:
Well, that's a good question.
Leo Laporte [01:09:50]:
Sylvester Stallone? Oh, no. John Wick, because he's a really good shot.
Paris Martineau [01:09:54]:
I was thinking, you know, the Geico Gecko could get in there.
Leo Laporte [01:09:56]:
Oh, that'd be fun. You're pretending you don't know advertising. You know more than you admit to, young lady.
Paris Martineau [01:10:05]:
From when I was a child.
Leo Laporte [01:10:10]:
Advertising is one of our premier creative endeavors in this country. Some of the best stuff is in ads.
Jeff Jarvis [01:10:18]:
Imagine the early days of radio where the ad agencies were the ones that made the programs.
Leo Laporte [01:10:22]:
Yeah, what a world. Even the early days of TV. You know, Milton Berle was brought to you by, who was it? I don't remember. Grandpa, who sponsored Milton?
Jeff Jarvis [01:10:30]:
I don't know. Perry Como had the Kraft Music Hall.
Paris Martineau [01:10:33]:
I feel like back in those days, the companies that were advertising were like, corn, made by American farmers. This program is brought to you by.
Leo Laporte [01:10:44]:
The Texaco Star Theater. Milton Berle was brought to you by Texaco. But there was also, I mean, Philip Morris.
Jeff Jarvis [01:10:52]:
Oh, yeah.
Leo Laporte [01:10:53]:
Oh, yeah, cigarettes. Well, I think Sam is just trying to make more money. Sam has an issue. He apparently shouted down, no. This must be your story that you put in here, Jeff. When somebody asked him how OpenAI can commit to spending $1.4 trillion on training AIs while merely earning billions a year, what did Sam Altman reply to that? Because I can't get the article up.
Jeff Jarvis [01:11:30]:
Enough. Stop asking me that.
Leo Laporte [01:11:34]:
He literally said, enough.
Jeff Jarvis [01:11:36]:
Enough.
Leo Laporte [01:11:38]:
Like, enough of you and. Or enough. I'm gonna. We're gonna make enough.
Jeff Jarvis [01:11:43]:
Yeah. So here's Brad Gerstner on the BG2 pod. He asked the single biggest question that's hanging over the market: how can a company with $13 billion in revenues make $1.4 trillion in spend commitments? You've heard the criticism. If you want to sell your shares, I'll find you a buyer, Altman said in response. Enough.
Leo Laporte [01:12:12]:
Oh, boy. A lot of people won't go on a show like that. See, this is why we don't have Sam Altman on our show.
Jeff Jarvis [01:12:18]:
Yeah. Because he doesn't handle, he doesn't want.
Leo Laporte [01:12:20]:
Those kinds of questions.
Jeff Jarvis [01:12:21]:
Yeah. "I think there's a lot of people who talk with a lot of breathless concern about our compute stuff or whatever that would be thrilled to buy shares," Altman said. "We could sell your shares or anybody else's to some of the people who are making the most noise on Twitter about this very quickly."
Leo Laporte [01:12:38]:
Wait a minute. Does Gerstner have shares? Because this is, by the way, not a publicly traded company, so.
Jeff Jarvis [01:12:43]:
But you can buy through the private market. "We do plan for revenue to grow steeply," Altman said on the podcast. "We are taking a forward bet that it's going to continue to grow." And of course, at the same time, the word is that they're preparing for an IPO at a valuation of $1 trillion.
Paris Martineau [01:13:02]:
$1 trillion.
Jeff Jarvis [01:13:03]:
$1 trillion trillions.
Leo Laporte [01:13:05]:
Is this the story from the New York Times, "How OpenAI Uses Complex and Circular Deals to Fuel Its Multibillion-Dollar Rise"? And we've been saying this.
Paris Martineau [01:13:18]:
We're not in an AI bubble.
Leo Laporte [01:13:20]:
Oh, yeah, we're in a bubble. There it is. Oh, my God. My God, her hands. Are they made of adamantium? You look good. You make a good evil villain.
Paris Martineau [01:13:40]:
For anybody listening. I can't describe what just happened. You gotta, you've gotta watch it.
Leo Laporte [01:13:45]:
I'm sorry.
Jeff Jarvis [01:13:45]:
Can we do it again, with commentary? Do it again. Do it again.
Paris Martineau [01:13:50]:
I look down. I hold up a bubble. The bubble's filling up the screen. The words "AI" are in it, in a bubble. I'm backlit. Suddenly, Freddy Krueger claws come and pop the bubble. I look satisfied at the camera and reveal my Freddy Krueger claws.
Leo Laporte [01:14:07]:
That is amazing.
Paris Martineau [01:14:09]:
That was great.
Leo Laporte [01:14:10]:
You could get a job in the movies with that. I mean, use that.
Benito Gonzalez [01:14:14]:
Anthony Nielsen.
Jeff Jarvis [01:14:15]:
Anthony. Brilliant.
Leo Laporte [01:14:16]:
Brilliant.
Paris Martineau [01:14:17]:
Bravo. That was incredible.
Leo Laporte [01:14:19]:
But notice how much better these have gotten, even in the six months we've been doing them.
Jeff Jarvis [01:14:24]:
That's pretty. Damn.
Leo Laporte [01:14:25]:
That was amazing.
Paris Martineau [01:14:27]:
That was quite good.
Jeff Jarvis [01:14:28]:
So. And let's give points to Benito. How long have you had that? How long have you been waiting for her to say that?
Benito Gonzalez [01:14:36]:
That was actually. That was new today. I was just waiting for.
Paris Martineau [01:14:39]:
I was gonna say Anthony just added me in the chat, maybe like five minutes ago, saying I should say the.
Leo Laporte [01:14:44]:
Word AI the whole thing was a setup.
Paris Martineau [01:14:47]:
And so I did. I did. I'm sorry. I'm sorry to pop your bubble.
Leo Laporte [01:14:53]:
All right, let's take a little break. We'll come back with more. You're watching Intelligent Machines with Paris Martineau, the evil Bond villain with adamantium hands, and Jeff Jarvis, who is a professor emeritus, a villain of his own sort, and the author of some marvelous books, including one coming out this June that is the history of the Linotype.
Leo Laporte [01:15:16]:
Linotype, which is more fascinating.
Jeff Jarvis [01:15:20]:
More fascinating.
Leo Laporte [01:15:21]:
Actually, it sounds pretty fascinating. If you've seen a Linotype, it's like the most amazing Rube Goldberg invention. It's amazing that it works. And in fact, many of its six predecessors did not work, including the one in Mark Twain's basement.
Jeff Jarvis [01:15:34]:
Yep.
Leo Laporte [01:15:35]:
So the fact that it did work was quite an achievement. High tech of another time. Our show is brought to you today by AGNTCY, the agency building the future of multi-agent software. AGNTCY, A-G-N-T-C-Y. It's now an open source Linux Foundation project. Right on. AGNTCY is building the Internet of Agents, a collaboration layer where AI agents can discover, connect and work across any framework. All the pieces engineers need to deploy multi-agent systems now belong to everyone who builds on AGNTCY, including robust identity and access management, very important, that ensures every agent is authenticated and trusted before interacting.
Leo Laporte [01:16:23]:
AGNTCY also provides open standardized tools for agent discovery, seamless protocols for agent-to-agent communication, and modular components for scalable workflows. Collaborate with developers from Cisco, Dell Technologies, Google Cloud, Oracle, Red Hat and more than 75 other supporting companies to build next-gen AI infrastructure together. AGNTCY is dropping code, specs and services, no strings attached. Visit agntcy.org to contribute. That's A-G-N-T-C-Y dot org, an open source collective building the Internet of Agents. Agntcy.org. We thank them so much for not only supporting Intelligent Machines but supporting a very important effort in AI. We're going to be covering agents and MCP in our AI user group this Friday, if you're a member of the club. Let me check the time. Anthony Nielsen and I do this every.
Leo Laporte [01:17:25]:
Every month. It's the first Friday of every month. It has become one of the things I really look forward to. It's 2 p.m. Pacific, 5 p.m. Eastern, 2200 UTC, November 7. You can watch live even if you're not in the club, but then it will be on the Twit Plus feed for club members. Join us. I think the plan is to get Darren Okey. I'm hoping Darren will do this. He just got a new job, which worries me, but I'm hoping Darren will do this. He is.
Jeff Jarvis [01:17:53]:
I'll be happy for Darren.
Leo Laporte [01:17:55]:
No, I'm very happy for Darren. But he has been writing his own MCPs.
Jeff Jarvis [01:17:59]:
Darren, you know the podcast comes first.
Leo Laporte [01:18:00]:
Yeah, no, the podcast, the unpaid appearance on the Club Twit shows, comes first. Darren has been writing his own MCPs. We've been talking about MCPs. Apparently it's not that hard to create one. We'll talk about what an MCP is, how to create it, how to use it, and that kind of thing. He has an Obsidian MCP that I'm dying to use, actually.
Jeremy Berman [01:18:20]:
Really smart guy.
Paris Martineau [01:18:21]:
Speaking of Club Twit podcasts, mark your calendars: part two of our D&D adventure is Monday, November 17th.
Leo Laporte [01:18:32]:
I'm very excited.
Paris Martineau [01:18:33]:
Also, 5 to 8 p.m. Eastern. Whatever.
Leo Laporte [01:18:36]:
Two to five Pacific. There I am, the bard, Sagbottom the Cheerful, playing... my bad.
Paris Martineau [01:18:42]:
I'm gonna get my own AI-generated image and actually work on a costume this time.
Leo Laporte [01:18:48]:
It looks like they left you out of the promo copy.
Paris Martineau [01:18:54]:
Wow.
Leo Laporte [01:18:54]:
I might have to add your name. Oh no, there it is. Paul Thurrott, Paris Martineau, Jonathan Bennett and Jacob Ward. And I will. We're in a corn maze. Micah Sargent made it by hand.
Paris Martineau [01:19:04]:
Propaganda.
Leo Laporte [01:19:06]:
What is the name of yours? Kathira... not Sagbottom. Kathira Longswallow or something like that? Yeah. Cathera. How is that pronounced? Kathira?
Paris Martineau [01:19:19]:
I was saying Catherine, but it could be Kathira.
Jeff Jarvis [01:19:23]:
Did you make it up or did the. The.
Paris Martineau [01:19:25]:
I just did a random generator.
Jeff Jarvis [01:19:27]:
Okay.
Paris Martineau [01:19:27]:
On D&D Beyond. Kathira.
Leo Laporte [01:19:30]:
Longswallow, right?
Paris Martineau [01:19:31]:
Yeah, yeah. It's no Sagbottom the Cheerful, that's for sure.
Leo Laporte [01:19:37]:
I made that one up all by myself, as you probably could tell. Paul Thurrott...
Paris Martineau [01:19:40]:
His helm hammer, all of your great bard quips that you made up all by yourself.
Jeremy Berman [01:19:46]:
Shh.
Leo Laporte [01:19:47]:
People didn't like that. So I'm gonna have to. I'm just not quick enough on my feet. So one of.
Paris Martineau [01:19:53]:
Just, like, over the next two weeks, write, like, two to five single-sentence quips.
Leo Laporte [01:20:01]:
So one of my spells in this Dungeons and Dragons adventure is something called Vicious Mockery.
Paris Martineau [01:20:09]:
Yeah.
Leo Laporte [01:20:09]:
And the problem was I had to apply it, Vicious Mockery, to a plow.
Paris Martineau [01:20:17]:
I mean, and you could.
Jeff Jarvis [01:20:19]:
So this sounds like so much fun.
Paris Martineau [01:20:21]:
The spell is that you mock it so viciously, it loses health. And so Micah was like, Leo, do you have anything you want to say to the plow? And you could just have said, like, oh, you no-good, good-for-nothing, terrible plow.
Leo Laporte [01:20:34]:
I wanted to be clever.
Paris Martineau [01:20:35]:
Instead, he recited a full ChatGPT poem.
Leo Laporte [01:20:39]:
It was really good. I thought. I thought, you know, good.
Paris Martineau [01:20:43]:
But it was. It was too long, so that it was obvious that it was written by something else.
Leo Laporte [01:20:47]:
The work that I did was preparing that and typing in the prompt quickly enough so that I could use it.
Paris Martineau [01:20:54]:
So that is true.
Leo Laporte [01:20:56]:
Yes. See if I can find it. I guess I must have used Perplexity. I don't see it in my ChatGPT. Anyway, it will be a lot of fun, part two of the Horror in the Cornfield with Micah Sargent, who has a great scarecrow costume. Once again. Yeah, yeah.
Leo Laporte [01:21:17]:
He's gonna have to put the makeup on and everything. He had a vampire costume for Halloween. Oh, how did your Log Lady go?
Paris Martineau [01:21:23]:
Oh, yeah, it went great. So good. Yeah. Let me find a photo. There's one on my Twitter, but there's a better photo on my Instagram, to be honest.
Leo Laporte [01:21:35]:
All right. Everybody's mocking the fact that I let ChatGPT write my invective, so I will write some of my own.
Paris Martineau [01:21:48]:
You can just be mocked. That's okay.
Leo Laporte [01:21:50]:
Yeah. I thought it was pretty clever of me to come up with it. I'm not as great at improv as I wish I were. This is the source of my dissatisfaction with my own performance on the show. Hey, you really look like the Log Lady. Oh, yeah, that.
Paris Martineau [01:22:03]:
Someone else found it for me.
Leo Laporte [01:22:04]:
Where'd you get my log?
Paris Martineau [01:22:06]:
Hold on.
Leo Laporte [01:22:07]:
That log is fantastic. You made that? Paris was complaining because she'd been searching all over New York City for a log. You'd think you could find a log in a city that big.
Paris Martineau [01:22:17]:
You can't find a log. I mean, now it's kind of. It's been dropped on the ground a lot, so you can kind of see.
Leo Laporte [01:22:21]:
Is it Styrofoam?
Paris Martineau [01:22:22]:
It is Styrofoam. I had two different pieces of Styrofoam that I glued together and clamped together. Then I carved it from that, from a square into.
Jeff Jarvis [01:22:35]:
Aren't you crafty, Martha Stewart?
Paris Martineau [01:22:38]:
And then I added this.
Jeff Jarvis [01:22:40]:
I think you need a crafts with Paris show.
Leo Laporte [01:22:43]:
Are you gonna bring that to Florida for Thanksgiving?
Paris Martineau [01:22:47]:
Yeah, I'm actually leaving Gizmo behind, but I'm gonna just bring this log.
Leo Laporte [01:22:50]:
Let's bring the log instead.
Jeff Jarvis [01:22:53]:
Sure. Did all of your friends get it? The costume?
Leo Laporte [01:23:00]:
None of them.
Paris Martineau [01:23:01]:
Yes, actually, almost all of them did. But what I'm trying to say is, I live in a specific neighborhood in Brooklyn that is really, really popular on Halloween. Like, there are deep throngs of children everywhere, difficult to navigate. And I was going one neighborhood over, to a neighborhood called Fort Greene, and as I was walking through our neighborhood, some people were kind of looking at my costume. As soon as I crossed over the Fort Greene border, where they're older, every single person is like, Log Lady!
Jeff Jarvis [01:23:30]:
Wow.
Paris Martineau [01:23:30]:
Great Log Lady. And I was like, wow, I'm among my people here.
Leo Laporte [01:23:37]:
So if you want to be a Log Lady, go to Fort Greene. And if you're David Lynch, you should be living in Fort Greene, obviously. Perplexity keeps doing interesting things. They've now got a tool designed to help you find patents. Now, you can go to the USPTO website and search for trademarks and patents, but it isn't the best keyword search, right? Yeah, it's not great. So they've launched an AI tool, a patent research agent, that lets you use natural language: are there any patents on AI learning, something like that? Or key quantum computing patents?
Jeff Jarvis [01:24:14]:
How about the Linotype technology? You search for the Linotype.
Leo Laporte [01:24:16]:
Oh, should I? I have a Perplexity Pro account. Let me see. Do I have to go to a special page?
Paris Martineau [01:24:27]:
Will this be useful for patent trolls? Probably, right?
Jeremy Berman [01:24:31]:
Yeah.
Leo Laporte [01:24:31]:
But it'll also be useful for you as a. As a researcher.
Paris Martineau [01:24:35]:
When am I ever looking up patents.
Jeff Jarvis [01:24:37]:
When you invent the log maker.
Paris Martineau [01:24:40]:
Yeah, they don't make them anymore. You know, they just can't find a blade sharp enough to handle it in your business.
Leo Laporte [01:24:48]:
I think that it's going to come up.
Jeff Jarvis [01:24:50]:
Yeah.
Leo Laporte [01:24:51]:
Yeah.
Paris Martineau [01:24:52]:
I mean, I feel like the patent. I've searched for patents before.
Jeff Jarvis [01:24:55]:
I mean, way back when, I was searching on Mergenthaler, the inventor of the Linotype. He didn't appear in newspapers at all except in listings of patents in regular newspapers. Patents were news back then. There weren't that many of them.
Leo Laporte [01:25:07]:
What year should. What year range? 18.
Jeff Jarvis [01:25:13]:
1880. No, to 1900.
Leo Laporte [01:25:18]:
1880 to 1900. So I'm gonna say show me patents for the Linotype or other printing tools.
Jeff Jarvis [01:25:24]:
Other. Other line casters. How's that? That'll be more specific. See how it does.
Leo Laporte [01:25:29]:
Yeah. Oh, look at this. There's quite a few. Yeah. Here are notable U.S. patents covering the Linotype and other printing tools, focusing on hot metal line casting and contemporaneous printing press tool innovation. See, wouldn't it be useful for.
Jeremy Berman [01:25:45]:
Yeah.
Leo Laporte [01:25:46]:
And presumably. Now, I know, Paris, you're gonna say, well, how many of these are hallucinations? But it does have links to the actual USPTO entry for each. So I didn't have to do anything special. I just used my regular Perplexity and it automatically.
Jeff Jarvis [01:26:04]:
They do cool things. Yes.
Leo Laporte [01:26:06]:
Yeah, I think that's really. I think this is a great example of how tools like this are great for research.
Jeff Jarvis [01:26:11]:
More of this, please.
Leo Laporte [01:26:12]:
More of this? Yes. Well, less, Less Coca Cola ads. Fewer Coca Cola ads.
Paris Martineau [01:26:17]:
Perplexity though. This week Amazon sued to stop Perplexity.
Jeff Jarvis [01:26:21]:
Did they sue or did they send a cease and desist?
Paris Martineau [01:26:25]:
Amazon filed a cease and desist. This is from Bloomberg. I don't know if it's in the rundown. I'll put it in.
Leo Laporte [01:26:34]:
Yeah, at first it says cease and desist letter, but I guess they've raised a lawsuit. And this is awful. So this is.
Jeff Jarvis [01:26:42]:
Which is awful, Amazon or.
Leo Laporte [01:26:44]:
Yes, Amazon. It's totally anti-competitive. Perplexity has an agentic browser, Comet. I don't use it. In fact, if you're interested, you should listen to Security Now from yesterday, where Steve kind of explains how dangerous these agentic browsers can be: they're very vulnerable to prompt injection and other hacks. Nevertheless, people can use Perplexity's Comet browser to shop on Amazon. You could say, buy me something on Amazon, and it will do that. Amazon, which has its own Rufus, an AI agent that does buying, and of course the Echo, which does buying, basically says, we don't think Perplexity should be allowed to do this.
Paris Martineau [01:27:27]:
They're specifically accusing Perplexity of committing computer fraud by failing to disclose when Comet is shopping on a real person's behalf, which is in violation of Amazon's terms of service.
Leo Laporte [01:27:38]:
Well, I mean, maybe they're going to win on those grounds. Just like Apple says, you know, you can't put another messenger app on the iPhone or whatever.
Jeff Jarvis [01:27:47]:
Here's the question. You've dealt with this your entire career, Leo, on terms of service. Is it always found that I am bound by the terms of service of the site that I go to, the company I go to?
Leo Laporte [01:28:00]:
No, no, this is the so-called shrink-wrap license, which is, you know, just by virtue of using something, you have agreed to the terms of service.
Paris Martineau [01:28:10]:
I would also say, I mean, notably, Amazon in November last year asked Perplexity to stop deploying AI agents capable of purchasing products on the site until the two companies came to an agreement on the practice, says Bloomberg. The startup originally complied, but then this August it started using its new Comet browser, which logged into users' Amazon accounts to do this, which was in violation of their previous agreement.
Leo Laporte [01:28:39]:
But I have to say, it's not like somebody stealing from Amazon. They're using a browser to go buy something from them.
Paris Martineau [01:28:47]:
This is kind of interesting context. So like I just said, in August Perplexity kind of launched this Comet browser. This time Perplexity identified the agents as Google Chrome browser users to.
Leo Laporte [01:29:01]:
Yes, it doesn't say it's then when.
Paris Martineau [01:29:04]:
But Amazon asked them to stop. When Perplexity refused to stop the bots, Amazon tried to block them. Then Perplexity released a new version of Comet to get around the security measures that Amazon introduced.
Leo Laporte [01:29:16]:
So let me just say this. No browser is completely honest in its user agent. If you check what your browser's user agent is, it often says, hold on a second, my mother's calling. I'm not going to answer.
Paris Martineau [01:29:34]:
Does she want to join the show?
Leo Laporte [01:29:37]:
That might be risky.
Paris Martineau [01:29:39]:
Yeah, that's fair.
Leo Laporte [01:29:41]:
She has moments of lucidness, but then she has moments where she's less lucid. And I don't know which of those moments. Do you need to pick her up?
Paris Martineau [01:29:50]:
Yeah, we can vamp.
Leo Laporte [01:29:51]:
No, I don't need to pick her up. No, no, no. She doesn't. We don't need to go into it. I will call her right after the show. So if you go to whatismybrowser.com, you can see what user agent your browser passes. So let's parse my own. Now, I am on something called Zen Browser, but it identifies as Mozilla/5.0, Gecko, Firefox.
Leo Laporte [01:30:18]:
Yeah, it's based on Firefox, but it's not exact. I'm not using Firefox, I'm using a different browser. You can do this with most browsers. The user agent. There's no law that says the user agent has to say exactly what you are. In fact, most of the time it doesn't. If you use Safari, it's going to say it's Mozilla.
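Leo's point can be sketched in a few lines of Python. The user agent strings below are representative examples of the format, not values captured on the show: every major browser still opens with the legacy Mozilla/5.0 token, a holdover from 1990s browser sniffing, and each one claims several other products' names.

```python
import re

# Representative user agent strings (illustrative examples, not from the show).
SAFARI_UA = ("Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) "
             "AppleWebKit/605.1.15 (KHTML, like Gecko) "
             "Version/17.4 Safari/605.1.15")
CHROME_UA = ("Mozilla/5.0 (Windows NT 10.0; Win64; x64) "
             "AppleWebKit/537.36 (KHTML, like Gecko) "
             "Chrome/124.0.0.0 Safari/537.36")

def tokens(ua: str) -> list[str]:
    """Return the product names from each name/version pair in a user agent.
    The parenthesized platform comments don't match the name/version shape,
    so they are ignored."""
    return re.findall(r"([A-Za-z]+)/[\d.]+", ua)

print(tokens(SAFARI_UA))  # ['Mozilla', 'AppleWebKit', 'Version', 'Safari']
print(tokens(CHROME_UA))  # ['Mozilla', 'AppleWebKit', 'Chrome', 'Safari']
```

Chrome claiming to be Safari, and everything claiming to be Mozilla, is why a browser shipping a "Chrome" user agent is, by itself, ordinary behavior rather than deception.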
Jeff Jarvis [01:30:35]:
Why should Amazon care?
Leo Laporte [01:30:37]:
It's a. It. It's a.
Jeff Jarvis [01:30:39]:
Or is it just a way to.
Leo Laporte [01:30:43]:
It's a straw man. It's not. It's not a red.
Paris Martineau [01:30:46]:
One of the things that Bloomberg points out is that shopping agents like Perplexity's Comet could one day pose a significant threat to Amazon's really lucrative advertising business. Yes, that's how it makes a lot of money, by selling prominent placements on its web store in response to shoppers' product search queries. If you have a bot shopping for customers, then the advertising placement is less valuable.
Jeff Jarvis [01:31:12]:
Amazon's ad business is larger than the ad business of the entire worldwide magazine industry.
Leo Laporte [01:31:17]:
So they make more money showing you ads when you go to buy something on Amazon highly targeted, focused than they do by selling you the product probably. Plus it's more likely that you'll see.
Benito Gonzalez [01:31:27]:
Right, like on the store. They still don't make money on the store.
Jeff Jarvis [01:31:30]:
Right?
Benito Gonzalez [01:31:30]:
The store is still not profitable, right? Like, Amazon makes all their money on AWS.
Paris Martineau [01:31:36]:
Well, AWS, but Amazon advertising has been a big boon for the company, especially since the pandemic. Like, they make a considerable amount of money doing that. It's not AWS-level money, but it's not nothing.
Leo Laporte [01:31:49]:
This is the same thing, though. I mean, if I use Perplexity to do research, I'm not seeing the ads on the websites it's pulling from. This is the whole issue with these AI agents in general: they are disintermediating the advertising on your website. I guess you're not paying for the content because you're not seeing the ad. But people run ad blockers.
Leo Laporte [01:32:14]:
More than half. More than half of America.
Jeff Jarvis [01:32:16]:
Now, can you recognize the ads on Amazon? The ads on Amazon don't look like normal ads.
Paris Martineau [01:32:24]:
No, they don't. I believe The Markup one time had a really good investigation, or study, of this, about just the insane percentage of basically everything you're seeing on Amazon's front page that is advertising in some way, and in search requires the sellers to have paid the company in some way.
Leo Laporte [01:32:47]:
Yeah, well, so for instance, I just searched for spoons on Amazon and there's going to be the overall pick. Well, how does this particular spoon become the overall pick? Amazon.
Jeff Jarvis [01:32:59]:
Beautiful, Leo. It is the essence of a spoon.
Leo Laporte [01:33:01]:
It's a perfect spoon.
Jeff Jarvis [01:33:02]:
You cannot improve on that spoon.
Leo Laporte [01:33:04]:
And is the Amazon Basics disposable clear plastic spoon the best spoon? I don't know, but these are essentially, as you say, and as The Markup said, paid ad deals, even though it doesn't say advertising.
Jeff Jarvis [01:33:19]:
If you're wondering what to get Paris for Christmas, I think she needs the spoons because she has these. These sad, tiny silverware that she still uses.
Leo Laporte [01:33:27]:
What is your. What is your flatware situation, Paris?
Paris Martineau [01:33:30]:
It's a bunch of flatware that has come from other roommates.
Jeff Jarvis [01:33:35]:
Yeah, she showed us the.
Leo Laporte [01:33:36]:
When you're young, it's only appropriate that you should have mismatched flatware.
Paris Martineau [01:33:41]:
Well, I was planning on springing for some really fancy Sabre flatware sets, but now I can't buy them.
Leo Laporte [01:33:49]:
Oh, they're not in the U.S. yes.
Paris Martineau [01:33:53]:
Paris isn't sending them to the U.S. anymore. And if I do buy them, they've gone from expensive to, like, extraordinarily expensive. I can't reasonably spend a thousand-plus dollars on flatware.
Leo Laporte [01:34:06]:
You need to have things you need so that people can get you wedding gifts.
Paris Martineau [01:34:12]:
Yeah, because I'm gonna be getting married.
Leo Laporte [01:34:15]:
You don't know. You don't know. And I think you don't know.
Paris Martineau [01:34:18]:
You could walk out the door.
Leo Laporte [01:34:20]:
Don't know. Mr. Right could be standing on your landing right now. No, I'm just saying. Do you have a hope chest? What is that? Never mind. Kids turn podcast comments into secret chat rooms. This is from Mike Masnick on Techdirt. I don't know.
Leo Laporte [01:34:44]:
Have you read in any of our reviews, have you seen kids exchanging messages?
Paris Martineau [01:34:50]:
I don't think there are any children watching this show.
Jeff Jarvis [01:34:53]:
Yeah, Paris is as young as it gets.
Paris Martineau [01:34:55]:
I'm probably the youngest person that listens to this show, and that's just because I am on it.
Leo Laporte [01:35:01]:
Well, don't be so sure. Here's an example. This is an episode of a show called What Leadership Looks Like, which I don't think sounds like a kid show, but clearly, in the comments, there are kids talking to one another. You're gorgeous, Carmen. You're really pretty. OMG, she pretty. What's she talking about?
Paris Martineau [01:35:22]:
I mean, this is the YouTube comments. Yeah, sort of thing.
Jeff Jarvis [01:35:25]:
Years ago, when I ran local sports news stories and forums at news sites, nj.com, places like that, people would take them over. In that case, for racist, horrible things, before 4chan existed.
Leo Laporte [01:35:37]:
But here's the point. Why do you think kids are doing this? Because they are being blocked from using social media.
Jeremy Berman [01:35:43]:
Yep.
Leo Laporte [01:35:44]:
And this is the real.
Jeff Jarvis [01:35:45]:
They're smarter than the grownups.
Leo Laporte [01:35:46]:
In just a few weeks, kids 16 and under will not be allowed to use social media of any kind in Australia. They will find a way. Oh, yeah.
Jeff Jarvis [01:35:56]:
Oh yeah. It's a lot.
Leo Laporte [01:35:58]:
They will find a way. Kids will. And maybe they'll do it in a.
Jeff Jarvis [01:36:04]:
Hidden way you can't find because they're smarter than their parents.
Leo Laporte [01:36:07]:
Here's the list of social media platforms that will be banned for kids under the age of 16 starting December 10th: TikTok, Instagram, Snapchat, YouTube, Facebook, X, Reddit, Kick, Threads. This is not that they have to age check or age verify. They're not allowed to use it.
Benito Gonzalez [01:36:34]:
YouTube is a choice.
Jeff Jarvis [01:36:35]:
Get ready, get ready.
Leo Laporte [01:36:38]:
Yeah. Isn't that interesting, that kids under 16 can't use YouTube?
Jeff Jarvis [01:36:44]:
The other thing is, you know where this comes from. It's not a bubble, it's a moral panic.
Paris Martineau [01:36:55]:
You're sleeping Bonito.
Jeff Jarvis [01:36:57]:
I warned you.
Benito Gonzalez [01:36:58]:
My buttons aren't working.
Leo Laporte [01:37:00]:
Oh, no.
Jeff Jarvis [01:37:02]:
So title my working.
Leo Laporte [01:37:04]:
But here's the weird thing. Here's what they will be allowed to use: Facebook Messenger, WhatsApp, YouTube Kids, Discord, GitHub, Lego Play, Roblox, which is problematic to say the least, Steam and Steam Chat, and Google Classroom. Now, of course, when I say it's not age verification, what it means is that every single person in Australia will have to verify their age and that they are 16 or older before they can use YouTube, which is a separate problem.
Leo Laporte [01:37:41]:
A big problem of age verification. Yeah. December 10th. Unbelievable. They're already putting up billboards and informational commercials for parents on how to help your teenager get through social media withdrawal. Can you imagine being 15 and being told, you know, sorry, you can't.
Leo Laporte [01:38:04]:
You can't talk.
Jeff Jarvis [01:38:05]:
What happens if you're a parent? You say, Paris, you can use social media.
Leo Laporte [01:38:10]:
Well, that's an interesting loophole because your parent could say, well, I'm going to verify it for you.
Jeremy Berman [01:38:14]:
Yeah.
Leo Laporte [01:38:15]:
We'll just say you're a 32-year-old guy from Sydney. Yeah, I don't know. We're gonna have more of Intelligent Machines in just a little bit. Paris Martineau of Consumer Reports, what is your new food poisoning story about? Anything?
Paris Martineau [01:38:31]:
I'm working on it. It's not going to be about poison and food but it will be about food safety though. Food safety issues and regulatory stuff.
Jeff Jarvis [01:38:40]:
A friend of mine, who thinks I'm ridiculous for eating cacio e pepe, afterwards sent me a story that there's listeria poisoning. Like, 18 people died.
Jeremy Berman [01:38:49]:
Dude.
Paris Martineau [01:38:49]:
There's so much listeria going on. I'm subscribed to the FDA alerts, and I get multiple a day. Stay away from pre-made fettuccine Alfredo products, because that's where a lot of them seem to be.
Leo Laporte [01:39:02]:
So listeria is a bacteria. Yeah. And it is, like salmonella or E. coli, a contaminant in food. Where does it come from, though? Does it just come out of the air? Do we know?
Paris Martineau [01:39:16]:
That's a great question. I don't know off the top of my head. Is it something you want me to look up?
Jeff Jarvis [01:39:24]:
Vegetables.
Leo Laporte [01:39:26]:
Yeah, it's often in vegetables.
Jeff Jarvis [01:39:30]:
Where does this listeria come.
Leo Laporte [01:39:32]:
And probably the reason for vegetables is they're not processed, they're not cooked, they're not pasteurized, they're not in any way, you know, cleaned. I clean all my vegetables in baking soda and water. Can you check? Animal waste?
Paris Martineau [01:39:48]:
Listeria is easily killed by heating foods to high enough temperatures.
Leo Laporte [01:39:52]:
Pasteurization. Yeah. Yeah. Okay. Well be careful. It's out there. Be careful out there. As they used to say on Hill Street Blues.
Leo Laporte [01:40:02]:
Now that we.
Jeff Jarvis [01:40:05]:
Is your next commercial a food commercial?
Leo Laporte [01:40:07]:
No, I feel like advertisers should have a separation.
Jeff Jarvis [01:40:17]:
Yes.
Leo Laporte [01:40:17]:
From things that we talk about in the show. Yeah. It needs to be like we need to talk about something nice for a little bit and then we'll do the ad.
Jeff Jarvis [01:40:27]:
All right, let's find you something nice. Nice and quick. We got, we got Google Maps will soon tell you when to switch lanes. Good.
Paris Martineau [01:40:38]:
What?
Jeff Jarvis [01:40:39]:
Yes.
Paris Martineau [01:40:39]:
That was the Google Change Log.
Leo Laporte [01:40:45]:
How about this?
Paris Martineau [01:40:45]:
If you're a new listener to Intelligent Machines, you're not going to know what that means. That's for the fans, the sandheads out there.
Leo Laporte [01:40:53]:
Once you're done with your log, if you've been wondering how you should recycle it, maybe you'd be interested in this story from Ars Technica. A neural network has discovered an enzyme that can break down polyurethane. Given a dozen hours, the enzyme could turn a foam pad into reusable chemicals. It's a new way to recycle plastics.
Leo Laporte [01:41:16]:
But what's interesting, from our point of view, is this was discovered by AI. The tool the team started working with is called Pythia-Pocket. It's a neural network that specializes in a very narrow area: whether a given amino acid in a protein is likely to contact whatever chemicals that structure binds, along with any other functional features. I don't understand what that means. So Pythia, which is a plain old neural network, combined with Pocket, predicts whether any given protein is likely to form a stable structure. In any event, they used it. The researchers reasoned that a good candidate for breaking down polyurethane would have a number of features.
Leo Laporte [01:42:01]:
It would look structurally like an enzyme they'd already been working with, but an enzyme that didn't get the job done. It would also face a trade-off between having a structure that was ordered enough to form a similar binding pocket that could have enzymatic activity, but not so rigid that it couldn't fit around the different types of polyurethanes. Anyway, it's pretty technical, but the results were spectacular. The AI came up with 24 highly rated proteins. 21 of them showed some catalytic activity. Eight did better than the best enzyme we've known about up to now. And the best of the designs had 30 times the activity of that enzyme.
Leo Laporte [01:42:45]:
This is done by AI, so there's some value. I mean, that's not AGI. This is the point, I guess.
Jeff Jarvis [01:42:50]:
Well, that's, that's why I think these things that are specific are more promising.
Leo Laporte [01:42:54]:
Yeah, very promising, very useful, very interesting.
Jeff Jarvis [01:42:56]:
And you can train them. It's like, you can train them on an author. You can train them on a task. Yeah, I think that makes more sense to me.
Leo Laporte [01:43:05]:
Let's take a little break. We'll come back with more. You're watching Intelligent Machines with Paris "Spoons" Martineau.
Paris Martineau [01:43:11]:
I've got a very important announcement when we come back.
Leo Laporte [01:43:13]:
Oh, this is just in breaking news. And Jeff hot type Gutenberg Jarvis.
Paris Martineau [01:43:20]:
Jeff holds up stubby hands that are just little tiny keyboard clacks.
Leo Laporte [01:43:25]:
Yeah, yeah, yeah. Little lead type hands. Our show today, brought to you by Spaceship. Actually, this is a cool product. If you've been listening for a while, we've been talking about Spaceship, and there's a good reason for that. If you need a domain, this is the place to go for domains below market price. But also, it's just the best darn website. It is a very modern website, and maybe that's why they're doing so well. Spaceship, which is relatively new, just passed a major milestone.
Leo Laporte [01:43:54]:
Over 5 million domains under management. You don't get that kind of growth by chance. You get it by being the best. Spaceship delivers real quality and features that make sense. And by the way, it's more than just registering domains, but for everything that helps you build and run your online presence. That means hosting, it means business email, it even means tools for creating and managing web apps all in one straightforward platform. Another reason people are switching, of course, the pricing. I always mention the pricing.
Leo Laporte [01:44:25]:
There are essentially Black Friday and Cyber Monday level values all year round, so you don't have to wait for a sale to get a great deal. But maybe right now you'll be glad to know Twit listeners get exclusive offers that make it even better.
Jeff Jarvis [01:44:40]:
Uh huh.
Leo Laporte [01:44:40]:
Spaceship.com/twit. So whether you're planning a new online project or moving an existing one, Spaceship has what you need to get it launched, connected, and running smoothly. A lot more affordably, too. I love all of the little tools that Spaceship adds. I love how their DNS works. I love Alf, their AI that helps you do those little DNS chores that everybody hates. Alf loves them. Check it out: spaceship.com/twit to see our exclusive offers and find out why millions have already made the move. Spaceship.com/twit. We love these guys. It's my new registrar for everything.
Leo Laporte [01:45:19]:
Spaceship.com/twit. Tech companies. Paris.
Jeff Jarvis [01:45:27]:
Paris.
Leo Laporte [01:45:27]:
Wait, wait, wait a minute. Breaking: this just in.
Paris Martineau [01:45:30]:
I'm wondering if you guys have got me anything, because it was my two-year anniversary of joining the show.
Leo Laporte [01:45:36]:
Oh, happy anniversary, Paris. Happy anniversary. I wanted to get you something. Now go out and look on your landing and see who's standing there. It's Zohran. He's waiting to take you on a date. That's right.
Leo Laporte [01:45:50]:
Well.
Paris Martineau [01:45:51]:
And now the co-chair of his transition team is Lina Khan.
Leo Laporte [01:45:56]:
What?
Jeff Jarvis [01:45:56]:
Yes.
Jeremy Berman [01:45:57]:
Yeah.
Jeff Jarvis [01:45:57]:
Yes.
Leo Laporte [01:45:58]:
Oh, that makes me very happy. Lina Khan, who was Biden's chair of the FTC and did so many great things, which have mostly been undermined by the succeeding administration, including the famous click-to-cancel rule, which I thought was what every consumer loves: the idea that companies should make it as easy to cancel a subscription as it was to create the subscription. You know, you go to a website, you create a subscription, and then they say, oh, but you have to call us if you want to cancel it. Or they hide the cancel box somewhere. The FTC made a rule saying, no, no, you can't do that. And of course, that's now history, because the new FTC has said, no, no, you don't have to do that. Just like the FCC is abandoning the broadband nutrition labels that told you exactly what you were getting from a broadband provider. They don't want you to compare.
Leo Laporte [01:46:49]:
Anyway, Lina Khan, good news. I hope she can do something for New York. You know, you're living in the greatest city in the world. If you can make it there, you can make it anywhere.
Paris Martineau [01:47:03]:
I'll be accepting all cotton-based gifts, as it is the second anniversary.
Leo Laporte [01:47:09]:
Cotton. You know, really, the anniversary gifts are not all that hot for many, many years, until you get to 50. You know, like wood. There's a clock.
Jeff Jarvis [01:47:20]:
She would appreciate wood.
Paris Martineau [01:47:22]:
That is so specific because the other ones are just materials. And then one year is just.
Leo Laporte [01:47:26]:
Yeah, one is just clocks.
Jeff Jarvis [01:47:28]:
They ran out of things. Corn. Otherwise it would have been corn.
Paris Martineau [01:47:34]:
Could have been corn. Can be a lot of things. I guess there could be a lot of food products.
Leo Laporte [01:47:39]:
Here are the. So there's two lists. There's a traditional and a modern list. This is traditional, which is paper, cotton, leather, fruit and flowers. Wood, candy, copper and wool, bronze for the eighth, pottery for the ninth, tin for the tenth. If you give somebody tin for your tenth anniversary. No. Steel, silk, lace, ivory, crystal.
Leo Laporte [01:48:01]:
And then it ends at 15 because apparently in the good old days nobody made it past 15.
Paris Martineau [01:48:06]:
Oh, wait, no. 16, Ivy, 17 flowers or fern or I guess this is.
Leo Laporte [01:48:11]:
What are you looking at? There's modern, there's alternate. There's flowers, there's stones. The modern anniversary gifts are plastic, cotton and cotton.
Paris Martineau [01:48:20]:
For some reason, years 26 to 29, no traditional gift. But it picks back up at 30.
Leo Laporte [01:48:26]:
In this they figure, you know, if you're going to make it, you got to get all the way to.
Jeff Jarvis [01:48:30]:
Those are the seven-year itch years.
Leo Laporte [01:48:33]:
The 24th anniversary. Musical instruments. That's a big one. It's not clocks. I apologize. For 31st anniversary, it should be timepieces. You give the gift of transportation on the 32nd anniversary.
Paris Martineau [01:48:49]:
My website says it's a ruby.
Leo Laporte [01:48:52]:
I gotta, I gotta. I got a one way ticket to Muncie for you.
Paris Martineau [01:48:57]:
Okay, buddy, I'm also just trying to the.
Jeff Jarvis [01:49:00]:
Okay.
Paris Martineau [01:49:01]:
It's interesting that the UK's first year anniversary is cotton. The US paper. But second, it's reversed. But also if you're giving. If it's your first wedding anniversary with your new spouse and you give them like a cotton blanket, I think you're done. So.
Leo Laporte [01:49:16]:
I agree. I agree. Flowers and candy, always a good idea.
Paris Martineau [01:49:20]:
Yeah.
Leo Laporte [01:49:20]:
And jewelry.
Jeff Jarvis [01:49:22]:
Basically tighty-whities. Yeah.
Leo Laporte [01:49:24]:
It's just standing instructions are jewelry, jewelry, jewelry.
Jeff Jarvis [01:49:29]:
Then you reach a point in a marriage where enough jewelry, you run out of ideas. Stop with the jewelry.
Paris Martineau [01:49:34]:
Okay. The 80th traditional.
Leo Laporte [01:49:36]:
How.
Paris Martineau [01:49:37]:
How many people are making it to 80th anniversary?
Leo Laporte [01:49:39]:
You gotta be 100 years old, or get married when you're 20. Yeah, yeah. How do we get back to AI here? Why don't you guys help me? Dictionary.com has named its word of the year. Are you ready for this? 6 7.
Paris Martineau [01:49:58]:
I saw so many people wearing 67 costumes on Halloween. It was all parents.
Leo Laporte [01:50:04]:
Yeah, because the kids are already.
Paris Martineau [01:50:06]:
Because we were. I was. One of my two parties was a bunch of adults drinking on a stoop, handing out candy. And every time we'd see parents wearing 6 7, we'd all go "six seven," with the hand gesture you're supposed to do. And the parents lost it every time. They were so excited. And the kids looked mortified. Which I guess was part of it.
Paris Martineau [01:50:24]:
It was part of a parental movement to reclaim 6 7, so it was no longer funny. And I'm curious to see if it's worked or not.
Jeremy Berman [01:50:32]:
Wow.
Benito Gonzalez [01:50:32]:
Well, that's the secret weapon. That's how you get rid of that stuff, is for adults to get into it.
Leo Laporte [01:50:36]:
Yeah, yeah, yeah. You start saying skibidi all the time, and kids just stop saying it.
Benito Gonzalez [01:50:41]:
I mean, Dictionary.com having it as the word that.
Leo Laporte [01:50:44]:
Basically, that makes it. It's over, right? It comes from a rap song, right, Paris?
Paris Martineau [01:50:52]:
No.
Jeff Jarvis [01:50:55]:
She's a little older than middle school.
Leo Laporte [01:50:58]:
It comes from a Skrilla song called "Doot Doot (6 7)," where he repeatedly says "six seven" in his lyrics. Searches for six seven began to spike in June and have been on the rise ever since, increasing six- or sevenfold. In being named the word of the year, it beat out some tough contenders. This is from Dictionary.com: kiss cam, tariff.
Paris Martineau [01:51:25]:
Kiss cam. Is that a Coldplay reference?
Leo Laporte [01:51:28]:
Kiss Cam. Maybe. Yeah.
Paris Martineau [01:51:31]:
I mean, absolutely right.
Leo Laporte [01:51:32]:
Yeah, that's.
Paris Martineau [01:51:33]:
That's. That's so specific.
Leo Laporte [01:51:35]:
Tariff. Trad wife.
Paris Martineau [01:51:37]:
Trad wife would have been a good one.
Leo Laporte [01:51:39]:
Yeah. And the dynamite emoji, which, apparently, I did not know this, has come to signify Taylor Swift and Travis Kelce, because it's T and T. Why don't we give them a name? Like, I don't know, Traylor. Travis.
Jeff Jarvis [01:51:57]:
There was something.
Leo Laporte [01:51:58]:
Swelsie, I guess.
Paris Martineau [01:52:00]:
Swelsie.
Leo Laporte [01:52:01]:
Swelsie.
Paris Martineau [01:52:01]:
Swelsea. Could be it.
Leo Laporte [01:52:02]:
Their names don't really mesh well. Swelsie. We're gonna call him Swelsie from now on.
Jeff Jarvis [01:52:07]:
So I just finished reading Stefan Fatsis's Unabridged, in which he worked at. No, I don't want to start it again. He worked at Merriam-Webster.
Leo Laporte [01:52:21]:
Fun.
Jeff Jarvis [01:52:21]:
And tried to get words into the dictionary.
Leo Laporte [01:52:25]:
That sounds like fun.
Jeff Jarvis [01:52:26]:
And the history of Webster and all that. So it's an okay book.
Leo Laporte [01:52:29]:
One of my favorite books was the book about the OED. What was it called?
Jeff Jarvis [01:52:32]:
Oh yeah, yeah.
Leo Laporte [01:52:33]:
I really enjoyed it. Tech companies don't care that students use their AI agents to cheat.
Paris Martineau [01:52:41]:
Why would they?
Leo Laporte [01:52:43]:
No, it just means you've got skills.
Paris Martineau [01:52:45]:
I mean, there's always targeted advertising, OpenAI and whatnot.
Leo Laporte [01:52:50]:
OpenAI, during a giveaway of ChatGPT Plus to college students, said, quote, here to help you through finals.
Paris Martineau [01:52:57]:
Yeah.
Leo Laporte [01:52:59]:
Students get free year-long access to Google and Perplexity's AI products. In fact, Perplexity even pays referrers $20 for each US student that they get to download the browser Comet, the one that Amazon's upset about.
Jeff Jarvis [01:53:14]:
Corrupt your fellow students.
Leo Laporte [01:53:16]:
Yeah. Yeah.
Paris Martineau [01:53:18]:
Gotta be weird to be a student now.
Jeff Jarvis [01:53:21]:
Oh yeah.
Leo Laporte [01:53:22]:
In a Facebook ad in early October, a Perplexity ad showed a student, the Verge has put this in quotes, discussing how his peers use Comet's AI agent to do their multiple-choice homework. And in another ad, posted the same day to Perplexity's Instagram page, an actor tells students that the browser can take quizzes on their behalf. But I'm not the one telling you this, she says.
Paris Martineau [01:53:49]:
Rough. I mean what are your teachers supposed to do?
Leo Laporte [01:53:52]:
Yeah, I, you know, I think just let you do it. So, as you know, every year I look forward to a coding competition, Advent of Code, and one of the things that's kind of marred it lately is AI, because there's a leaderboard. It now has an FAQ, and the FAQ says: should I use AI to solve Advent of Code puzzles? No. Eric Wastl says, if you send a friend to the gym on your behalf, would you expect to get stronger? I think that's an excellent analogy.
Leo Laporte [01:54:27]:
You're not going to get smarter. Yeah, if you use AI, you can easily solve these problems, but you shouldn't, because you're not going to get any better.
Jeff Jarvis [01:54:38]:
But you're going to win. Yeah, that's all that matters. Leo, aren't you an American? Don't you understand that?
Leo Laporte [01:54:43]:
Hey, here's some good news. I know many of us have been bemoaning the lack of flying cars, like we were supposed to get flying cars. Joe Rogan's interviewing Elon Musk and Elon says, yeah, we're gonna have. We're gonna unveil a flying Tesla by the end of the year because he's.
Jeff Jarvis [01:55:01]:
So reliable with all of.
Paris Martineau [01:55:02]:
He said that today in November.
Leo Laporte [01:55:09]:
By the way, Tesla announced a new Roadster in 2017. They said it'll be ready by 2020. They still have not delivered it. In fact, Sam Altman and Elon got in a little tussle because Sam says, I want my. My deposit back for my Roadster. To which Elon said, you already got it back. Don't give me that.
Paris Martineau [01:55:30]:
So the boys are fighting.
Leo Laporte [01:55:32]:
The boys are fighting. According to Engadget, Musk has been talking about flying cars since 2014. But this is the year, everybody.
Jeff Jarvis [01:55:41]:
It'll fly all the way to Mars, children.
Leo Laporte [01:55:43]:
All the way to Mars. I did have to laugh at this piece from The Baffler, written by Brace Belden: the hatred of podcasting.
Paris Martineau [01:55:57]:
I loved this piece. There were so many good lines. Hold on, I've got some of my favorites in here.
Leo Laporte [01:56:05]:
So he says, I think I know.
Paris Martineau [01:56:07]:
Which one you're talking about, too.
Leo Laporte [01:56:08]:
The editors at The Baffler want me to talk about my job. They want me to humiliate myself in the pages of this magazine. Very well. I am a podcaster, he says. What is a podcaster? Someone who makes money from talking, often by means of selling dick pills. I don't do that part, but I still obscure what I do whenever possible. Now, I have to point out, this guy thinks podcasting began with Serial, and he thinks it's all over now, because I guess he doesn't.
Paris Martineau [01:56:45]:
I mean, he describes how one of the first podcasts he listened to is in, like, 2008.
Leo Laporte [01:56:50]:
Okay, okay. But podcasting during Obama had a wondrous feeling, he writes. The shows that were popular around then, like Serial, or Invisibilia, which debuted the following year, pledged to explore the intangible forces that change and shape human behavior, all with a very Eagle Scout approach. "I heard it on a podcast," people would say. In 2015, if you said "I heard it on a podcast," you were trying to sound smart. In 2025, it's better to lie.
Paris Martineau [01:57:23]:
That's true. One of my favorite lines is: then Covid hit and podcasts went through the roof, and now I have the salary of a dermatologist and live in Brooklyn, which is Israel for podcasters.
Leo Laporte [01:57:40]:
It's our home. Our special home. It's true, we do have probably a larger proportion of people from Brooklyn on our shows than any other city. That is probably true. It's very frequent that we will have somebody from Brooklyn on. You're from Brooklyn.
Jeff Jarvis [01:57:53]:
I used to live in Brooklyn.
Leo Laporte [01:57:55]:
Jeff's from New Jersey. But that's, you know, Brooklyn. Brooklyn West.
Jeff Jarvis [01:57:58]:
Roundup.
Leo Laporte [01:57:59]:
Yeah, Roundup. That's right.
Jeremy Berman [01:58:01]:
I don't know.
Leo Laporte [01:58:02]:
I just thought it was. Thought it was funny. And I also.
Paris Martineau [01:58:05]:
Honestly, I'd really recommend this piece. It's fantastic. It's written by one of the hosts of TrueAnon, which is a popular podcast. But he kind of weaves in kind of memoir-ish writing about his really interesting backstory as a member of the US Army who kind of shitposted his way to popularity, for lack of a better word, then became a podcaster instead of, I guess, going on a different sort of tour of duty, and has ended up in this strange life. But it is also just kind of an interesting reflection on the last, like, 10, 15 years of podcasting, especially in, like, the post-pandemic era.
Leo Laporte [01:58:49]:
I think my kind of attitude towards podcasting is colored by the fact that I came from radio and it's just radio over the Internet as far as I'm concerned.
Jeff Jarvis [01:58:59]:
Well, Howard Stern always argued that podcasting isn't radio, and until this, you really work on radio, you don't understand. And he made fun of podcasts for years until now.
Paris Martineau [01:59:07]:
Well, I think that's the thing. Part of what this is chronicling is the shift from what used to be considered these highbrow, capital-P podcasts, produced shows that seemed like they were really there for you to learn something, to, like, this.
Jeff Jarvis [01:59:24]:
This.
Paris Martineau [01:59:24]:
Yeah, this. He literally says, already isolated individuals now worked from home, in a room with only a roommate or dog for company. The sharp rise in loneliness facilitated the shift from highbrow liberal shows to cartoonish hangout sessions with the worst every gender had to offer.
Leo Laporte [01:59:40]:
Boy, that describes this show in a nutshell. This is a cartoonish hangout session. Yeah, but the thing is, this show has always been that. It's.
Jeff Jarvis [01:59:49]:
Since.
Leo Laporte [01:59:52]:
We've been doing that since 2008. So it's just, you know, trends. There's been real podcasting all this time, and then there's been the hyped podcasting, the celebrity podcasts. They're still around, but, you know. Anyway. TikTok has announced its first award show in the U.S. By the way, did.
Leo Laporte [02:00:10]:
Did Trump not meet with President Xi in South Korea last week?
Jeff Jarvis [02:00:13]:
It didn't come up.
Leo Laporte [02:00:14]:
TikTok did not come up.
Jeff Jarvis [02:00:16]:
No. Neither did Taiwan. Those are the two things that people thought were going to come up. TikTok and Taiwan.
Leo Laporte [02:00:23]:
Yeah. Because he said, we have a deal. What? It's still in limbo. In other words, there's no. Well, they just. They decided to go ahead. They're going to do a TikTok award show. It'll have creator of the year, video of the year, muse of the year, breakthrough artist of the year.
Leo Laporte [02:00:42]:
This is just weeks after Instagram announced its awards program. There will be an actual award show, though, for TikTok, including a red carpet, live performances, and a live audience: December 18 at the Hollywood Palladium in Los Angeles. It'll be streamed on TikTok. Oh, and on Tubi. Tubi. I want to call it Tubby, but it's.
Paris Martineau [02:01:09]:
I want. I want to call it Poob. Have you guys heard of Poob?
Jeff Jarvis [02:01:13]:
I don't think that's right for our audience.
Leo Laporte [02:01:16]:
Was that. Wasn't that what Clarence Thomas found in his Coke can? What is Poob?
Paris Martineau [02:01:22]:
This is from. I guess maybe it's a tweet or maybe it's a Tumblr post. It's become like a meme where the tweet just says, have you seen the new show? It's on Tubu. It's literally on Heebie. It's on Pootie with ads. It's literally on Dippy. You can probably find it on Ouino. Dude, it's on Gumpy.
Paris Martineau [02:01:39]:
It's a Fino original. It's on Poob. You can watch it on Poob right now. Log on to Poob right now. Go to Poob. Dive into Poob. You can Poob it. It's on Poob.
Paris Martineau [02:01:48]:
Poob has it for you. And I love that.
Jeff Jarvis [02:01:52]:
That's beautiful.
Leo Laporte [02:01:54]:
This is on Tumblr, actually.
Paris Martineau [02:01:56]:
It's one of those phrases that lives in my house that is.
Leo Laporte [02:02:00]:
It is. It's on YouTube, and it really is what's going on. You know, it is. We had a bit of a crisis in the Laporte household on Monday night when we discovered that we were not going to be able to see Monday Night Football, because Disney, which has ESPN, ABC, Hulu, and Fubo, pulled its channels off of YouTube TV, and they're not going to.
Leo Laporte [02:02:29]:
I don't. So we've seen these carriage arguments before, right? This one's different because Disney has Fubo, ESPN, Disney+, Hulu. It has four massive streamers. They would far prefer you watch football on their streams than on YouTube TV. And I wonder if a deal is going to be made. This is going to be a big issue for YouTube TV.
Jeff Jarvis [02:02:53]:
They begged YouTube, Google to put up ABC for election night, and Google said no.
Leo Laporte [02:03:01]:
So who do you think is in the driver's seat on this one?
Jeff Jarvis [02:03:05]:
I think it's two monstrous companies. It's just like the shutdown.
Leo Laporte [02:03:10]:
And as always, when the. When the giants battle, it's us little people that get stepped on.
Jeff Jarvis [02:03:17]:
So you couldn't watch any football, huh?
Leo Laporte [02:03:20]:
Well, fortunately, ESPN does not have all football. Most of the football's on Fox and CBS and other networks. But Monday Night Football is an ESPN production, and that's the big one.
Paris Martineau [02:03:34]:
Sports game. For the first time potentially in my.
Leo Laporte [02:03:37]:
Life this weekend, the world Serious.
Paris Martineau [02:03:39]:
The World Series, yes. It was riveting. I had to have someone explain to me how baseball worked throughout it. But once I understood, wow.
Leo Laporte [02:03:48]:
By the way, that person had the best night of their life.
Paris Martineau [02:03:51]:
I was. But he did actually.
Leo Laporte [02:03:54]:
He was like, well, let me tell you what's going on here. The runner on third is threatening to run to home.
Paris Martineau [02:04:02]:
He was like, paris, they're not called points, they're called rums.
Leo Laporte [02:04:05]:
And I'm like, best night of his life. Finally, somebody I can tell about baseball.
Paris Martineau [02:04:11]:
I realized that my friend is an inherently uninquisitive person, because I was like, why is it called the count? And he's like, I don't know what you mean. He's like, I've never thought about that before. It's the count, and it's always been the count.
Leo Laporte [02:04:23]:
No balls, one ball, two balls, a strike. It's the count. It's three and two. That is the count.
Jeff Jarvis [02:04:29]:
Fairly obvious.
Leo Laporte [02:04:30]:
You're counting, counting how many balls and strikes.
Paris Martineau [02:04:33]:
Yeah, I know, but the count is just a silly name for it.
Leo Laporte [02:04:36]:
Oh, baseball's full of them. The bunt, the suicide squeeze.
Paris Martineau [02:04:41]:
I like that they had to introduce a clock to stop. Stop people from just wasting everyone's time.
Leo Laporte [02:04:46]:
It was so.
Paris Martineau [02:04:47]:
I think that's pretty.
Jeff Jarvis [02:04:48]:
But then, but then, so boring.
Leo Laporte [02:04:49]:
It is really, I will say, the.
Paris Martineau [02:04:51]:
One game of baseball I've ever truly watched. Riveting.
Leo Laporte [02:04:54]:
Well, you watched literally the most exciting game of the year. Did you watch game seven?
Paris Martineau [02:04:59]:
Yes.
Leo Laporte [02:05:00]:
Yeah.
Paris Martineau [02:05:01]:
Literally a bottom of the ninth, bases are loaded situation.
Benito Gonzalez [02:05:04]:
Possibly of all time. The most.
Leo Laporte [02:05:07]:
Possibly.
Paris Martineau [02:05:07]:
It was great.
Leo Laporte [02:05:08]:
I had a lovely experience, I will say. I was rooting for Canada because I am an honorary Canadian.
Paris Martineau [02:05:14]:
As was I. It also took me a minute to be like, why are there Canadian teams in American baseball?
Leo Laporte [02:05:22]:
But I guess. Why are half the pitchers on the Los Angeles Dodgers from Japan?
Jeff Jarvis [02:05:26]:
Yeah.
Leo Laporte [02:05:27]:
When they come to the mound, they have to bring an interpreter.
Benito Gonzalez [02:05:29]:
Because the Japanese are the best baseball players.
Leo Laporte [02:05:31]:
They're the best.
Jeremy Berman [02:05:32]:
Yeah.
Leo Laporte [02:05:33]:
Look at Ohtani. Oh my God. He's a great pitcher and a home run hitter. But that doesn't happen in the U.S. You're one or the other. You've got to pick a lane.
Jeff Jarvis [02:05:43]:
All right, so I think Google's space data centers. Did you see that?
Leo Laporte [02:05:50]:
Yeah. This is not. They're not the only ones. This is, this is. The next big thing is data centers and data centers in space.
Jeff Jarvis [02:06:00]:
I found that interesting.
Paris Martineau [02:06:02]:
I'm just trying to imagine a situation where we lose a data center because it just goes, like, off into space.
Leo Laporte [02:06:10]:
Well, there. But see. Oh no.
Paris Martineau [02:06:12]:
My data center collided with all of the space junk. Oh no, my data center collided with the Tesla Roadster.
Leo Laporte [02:06:20]:
Oh, it's way out there now.
Paris Martineau [02:06:22]:
But I mean, wait, well, is that thing still tracking where the Tesla is?
Leo Laporte [02:06:26]:
Oh, yeah. Oh yeah. Let's see. Tesla. It's kind of. I don't know if it's within radio range. I think it's in orbit now. Not of us.
Leo Laporte [02:06:36]:
It's of the sun. Yeah.
Jeff Jarvis [02:06:37]:
Where is Starman? Whereisroadster.com?
Leo Laporte [02:06:43]:
Here we go. Yeah, it's in orbit. It's nearest Venus right now. Oh, Venus. It's 179,777,823 miles from Earth, moving away at a speed of 9,388 miles an hour.
Paris Martineau [02:07:10]:
It's beautiful.
Leo Laporte [02:07:11]:
It's moving towards Mars. Yeah. It has achieved a fuel economy of 30,811.9 miles per gallon.
Paris Martineau [02:07:22]:
Great.
Leo Laporte [02:07:23]:
If the battery were still working. Remember Starman is listening to David Bowie's Space Oddity over and over and over again. It would have been 768,000 times by now.
Paris Martineau [02:07:35]:
Space Oddity in one ear and Is There Life on Mars in the other ear, while the stereo plays The Hitchhiker's Guide to the Galaxy.
Leo Laporte [02:07:44]:
Wow.
Benito Gonzalez [02:07:44]:
But honestly though, all the radiation probably killed all the electronics already by now.
Paris Martineau [02:07:48]:
Yeah, yeah.
Jeff Jarvis [02:07:49]:
Plus there's no sound in space, so.
Leo Laporte [02:07:52]:
Yeah.
Paris Martineau [02:07:53]:
Your telescope would have to be 42,000ft in diameter to resolve the upper stage from Earth.
Leo Laporte [02:08:03]:
Seven years, eight months out there. Wow. But back to Google.
Jeff Jarvis [02:08:09]:
Yes.
Leo Laporte [02:08:09]:
You know, I mean, this is the thing. I didn't know this, but apparently Starlink loses one or two satellites a day reentering the Earth's atmosphere, because SpaceX has made it so cheap to launch these things. Relatively cheap to launch these things. They know they're going to decay, and they just keep putting up more, faster than they are losing them.
Jeff Jarvis [02:08:32]:
But where does this. Does this orbit Earth or does this orbit the sun?
Leo Laporte [02:08:35]:
No, no, it orbits the Earth. It has to be close, otherwise, you know, you don't. It's probably low Earth orbit, I would guess. I haven't looked at this yet. Because you don't want a lot of latency. It's going to be sending data down.
Jeff Jarvis [02:08:49]:
Right.
Leo Laporte [02:08:50]:
Five or six years is the lifespan they expect up there, because the radiation will.
Jeff Jarvis [02:08:55]:
Yeah, they're not sure what the radiation will do to the chips.
Leo Laporte [02:08:58]:
Yeah, but Google is not the only one. I think either Japan or China has been planning to do this also.
Jeff Jarvis [02:09:03]:
It takes away problems of cooling it.
Benito Gonzalez [02:09:06]:
Well, you'd think so, but it creates different problems.
Leo Laporte [02:09:09]:
So you would think so, because they're in space where it's very, very, very cold, right? But because it's a vacuum, heat is not conducted off; it has to be radiated off. So actually it isn't as easy to cool it as it would be on Earth. You have to have, you know, special technologies to cool it. But of course there's a lot of sun. Google calls the project Project Suncatcher. They're going to launch two test satellites in 2027, each carrying four TPU units.
Leo Laporte [02:09:44]:
And then as space transport comes down in price, they think this will become economically viable in 10 years, 2035.
Jeff Jarvis [02:09:52]:
They took chips to UC Davis and used a particle accelerator to irradiate the processors and simulate the years of solar exposure in space.
Leo Laporte [02:10:02]:
Right.
Jeff Jarvis [02:10:03]:
They held up quite well.
Leo Laporte [02:10:05]:
Well, there you go. Well, watch out, because AI, according to Anthropic, is becoming introspective. This is such BS.
Jeff Jarvis [02:10:17]:
Wait, you're the Sandman you think that all this is true?
Leo Laporte [02:10:21]:
No, I think this is more anthropomorphization. Last Wednesday, not this Wednesday, Anthropic published a paper titled Emergent Introspective Awareness in Large Language Models. In some experimental conditions, Claude appears to be capable of reflecting upon its own internal states. See? Or is it just repeating words that are next according to probability? You're ascribing kind of an intent to random words. It just doesn't make sense. The paper comes from a computational neuroscientist and the leader of Anthropic's model psychiatry team, Jack Lindsay, who wrote: Our results demonstrate that modern language models possess at least a limited functional form of introspective awareness. That is, we show that models are in some circumstances capable of accurately answering questions about their own internal states.
Leo Laporte [02:11:28]:
And probably inaccurately as well, would be my guess. I just feel like that's more anthropomorphism. And it's also, of course, in Anthropic's interest to say: they're alive, they're thinking.
Benito Gonzalez [02:11:41]:
And wait, what is the they there? Is it like the individual chatbot you're talking to? Or is it like all the chatbots is one person? Or like, what. How does that work?
Leo Laporte [02:11:52]:
It's a machine.
Jeff Jarvis [02:11:53]:
Yeah.
Leo Laporte [02:11:53]:
So it's one machine you're typing at.
Benito Gonzalez [02:11:56]:
The one machine that I'm on is the. Is the.
Leo Laporte [02:11:58]:
Is feeding back words.
Jeff Jarvis [02:12:00]:
Yes.
Leo Laporte [02:12:00]:
That sound like it's thinking about itself.
Jeff Jarvis [02:12:03]:
We caught it thinking, we thought. We caught it contemplating. But for all of that, Anthropic might have a better business than.
Leo Laporte [02:12:14]:
They're making money.
Jeff Jarvis [02:12:15]:
OpenAI. They project $70 billion in revenue and $17 billion in free cash flow by 2028.
Paris Martineau [02:12:22]:
They project. Based on what? I project that I'll have $70 billion in revenue.
Leo Laporte [02:12:30]:
This is their most optimistic growth forecast.
Paris Martineau [02:12:33]:
That's also my most optimistic growth forecast.
Jeff Jarvis [02:12:38]:
They, you know, therefore focus on corporate customers rather than on public adoption.
Leo Laporte [02:12:46]:
Yeah, there's an interesting conflict here, because on the one hand, there's no question that companies want to use AI. They're paying for AI. Individuals are paying for AI. I have four or five $20-a-month accounts because. Well, I love this stuff. I'm using it all the time.
Paris Martineau [02:13:04]:
Which one's your favorite right now?
Leo Laporte [02:13:06]:
Depends what I'm doing. I really like Instinctive.
Paris Martineau [02:13:10]:
Favorite it. What?
Leo Laporte [02:13:11]:
Well, for coding, Claude Code is easily my favorite. I used to really like Perplexity, and I still think Perplexity might be the best, but I've soured a little bit on the company itself. So I use ChatGPT 5 a lot. I have that Common Lisp GPT I created many moons ago, probably a whole year ago when they first had custom GPTs, maybe two years ago. And I've updated that a little bit to include, you know, more information and to use the latest ChatGPT 5. And it's really useful for asking questions.
Leo Laporte [02:13:47]:
So I use it quite a bit, you know, when I'm getting ready for the Advent of Code, December 1st. A number of us, Paul, also known as Chocolate Milk Mini Sip in the Discord, and I, are doing another coding challenge for the month of November, called Everybody Codes. And, you know, I think this is true for anybody who codes: Randall Schwartz, who's a Perl wizard and former host of FLOSS Weekly, told me that if you're going to keep being a coder, you've got to use the language two or three hours a day or you'll lose fluency. Like any language, you know, you've got to keep speaking it. Reaching over to get my.
Leo Laporte [02:14:33]:
So often as I'm working, I will refer to books like this and I'll go, okay, what is that? What is the syntax for that command? But now it's much easier for me to just ask ChatGPT. I've got this book and many others fed into it, and I'll just say, hey, what's the command to do this? I want to do this. How do I do that?
Jeff Jarvis [02:14:51]:
Why don't you just follow Jeremy's example? He's vibe coding.
Leo Laporte [02:14:55]:
Well, see, I don't want to vibe code it, because I want to actually do the coding. Because it's like.
Jeff Jarvis [02:14:59]:
Going to the G. Here's a guy who's doing, doing the. The real serious stuff.
Leo Laporte [02:15:03]:
I know, it's pretty cool.
Jeff Jarvis [02:15:04]:
He's using language. Just English, yeah. It's for philosophy majors, when Paris and I both graduate with our PhDs in philosophy.
Leo Laporte [02:15:10]:
And as you know, Anthony Nielsen has been using Claude for research for guests. He generates a wonderful research précis. He did it for our last guest. It makes it very easy to, you know, understand what they're going to talk about and get questions. I wish I'd had that my entire career; it would have been hugely useful. So I use it a little bit.
Paris Martineau [02:15:36]:
I've been enjoying Claude. Lately I've been comparing Claude and ChatGPT.
Leo Laporte [02:15:40]:
Both the Pro versions. For images, I use ChatGPT.
Paris Martineau [02:15:43]:
I will say, even just putting in this AI introspection study and asking Claude what it thought. Because I think I'd seen somebody on Bluesky post their screenshot of Claude's thinking on this same introspection study, and theirs was like, oh no, I can think, or something like that. I wondered, am I gonna get a funny response like that? And no. I've trained Claude, too, so that literally in its thought process it says, do not be cute or cloying in your response. It annoys me when it tries to be cute or use emojis.
Leo Laporte [02:16:17]:
This is the image I had ChatGPT generate. You know, you can ask it, well, what do I look like? And it said, well, give me a headshot. I said no. And it said, okay, well, here's what you look like: podcaster hero Leo Laporte. And it comes with accessories. Why the headphones.
Paris Martineau [02:16:32]:
Why do you have Dune behind you?
Leo Laporte [02:16:34]:
I don't know. Well, maybe the sand. I don't know.
Benito Gonzalez [02:16:36]:
And Foundation.
Leo Laporte [02:16:37]:
And Foundation. Foundation, yeah. Sci-fi. I must have asked it. I don't know. I have a cast iron pan and a basting brush attached to my coat.
Jeff Jarvis [02:16:48]:
Always handy to have both.
Paris Martineau [02:16:49]:
Both.
Leo Laporte [02:16:49]:
Yeah.
Benito Gonzalez [02:16:50]:
Oh, it's a mashup. It's a mashup of a cartoon and the toy. The toy meme, everyone.
Leo Laporte [02:16:57]:
Yeah, the action figure. Because I had it once do that. Yeah. So basically it says, I can draw you, but to make it you, you, not the guy who vaguely podcasts.
Leo Laporte [02:17:09]:
I need one clear photo, front facing, good light, no heavy filters. Head and shoulders is perfect. And by the way, because I once asked it for a stipple portrait, it said, I'll default to the stipple portrait if you don't pick a style. Isn't that something? So it.
Jeff Jarvis [02:17:23]:
It.
Leo Laporte [02:17:24]:
It is remembering everything to make it you. You.
Jeff Jarvis [02:17:27]:
It speaks like that.
Leo Laporte [02:17:28]:
Yeah, it's pretty. Pretty.
Paris Martineau [02:17:31]:
Not guy who vaguely podcasts.
Jeff Jarvis [02:17:33]:
Yes.
Leo Laporte [02:17:34]:
Yeah, yeah. I think they're very. Look, I know how it works. It's very hard, I understand, not to anthropomorphize and say, wow, this is like a real person I'm talking to. I understand that. And that is problematic, for sure. So what are you doing with it, Paris? What do you like to do with it?
Paris Martineau [02:17:55]:
What have I done recently? I'm kind of doing an impromptu road trip this weekend, and so I asked both Claude and ChatGPT to analyze my route and, I don't know, highlight any places they think I should stop between here and Vermont. Oh, and this is unrelated to them, but it was already on my list: I'm going to the American Museum of Tort Law, which I've been wanting to do for years.
Leo Laporte [02:18:21]:
Wow, that'll be gripping.
Paris Martineau [02:18:23]:
It's actually gonna be so gripping.
Jeff Jarvis [02:18:25]:
Paris, Paris, Paris, Paris, Paris. On Saturday.
Paris Martineau [02:18:28]:
I want to go to the printing museum.
Jeff Jarvis [02:18:30]:
Museum outside.
Paris Martineau [02:18:32]:
It's. I don't have the time for it. I really tried, Paris, I know. But I did mention this to my two graphic designer friends, a couple, and they're all like, we've got to go. So, amazing, I'm gonna go with them.
Jeff Jarvis [02:18:46]:
Okay, good.
Paris Martineau [02:18:47]:
Okay. You gotta look at their merch, because they've got both T-shirts that have flaming Ford Pintos on them as well as one with the flaming rats. And I'm gonna get a lot of it. It's a Ralph Nader sponsored museum, wow. Don't take the tour. Don't spoil it for me.
Leo Laporte [02:19:03]:
Oh, don't look. Don't look. So tort law is what? Lawsuits, right?
Paris Martineau [02:19:09]:
Yeah, it's like product liability.
Leo Laporte [02:19:10]:
Look, this won't spoil it for you. Here's the umbrellas in the doorway.
Jeff Jarvis [02:19:15]:
Because you don't want to poke someone with an umbrella and then to get sued.
Paris Martineau [02:19:19]:
I mean, yeah, it's lawsuits. Well, it's law relating to liability. It's liability law. Like, you've got tobacco cases, the Ford Pinto, stuff with asbestos.
Leo Laporte [02:19:34]:
Actually, this sounds like a great museum.
Paris Martineau [02:19:36]:
It's actually going to be. I legitimately have been wanting to go for years, but I've never been. It's in Connecticut.
Leo Laporte [02:19:43]:
Winsted, Connecticut.
Paris Martineau [02:19:44]:
It's like two hours.
Leo Laporte [02:19:46]:
Isn't that a great name for Connecticut? Winsted.
Jeff Jarvis [02:19:50]:
You can also stop off at Mark Twain's home, which is in Hartford. Hartford, yeah, just up the road. I mean, listen, it's pretty fun. They also.
Paris Martineau [02:19:58]:
My oyster.
Jeff Jarvis [02:19:59]:
You will see the Paige typesetting machine that bankrupted him in the basement.
Paris Martineau [02:20:05]:
I am. One of the reasons I'm going is I'm going to a famous puppet museum at some point as well as.
Jeff Jarvis [02:20:16]:
You know what.
Leo Laporte [02:20:16]:
You're a character, young lady. You are a character. Puppets and torts.
Jeff Jarvis [02:20:22]:
Tales to tell.
Paris Martineau [02:20:23]:
Puppets. Torts. Gaelic festival of death. You know, we love to see it.
Leo Laporte [02:20:32]:
Puppets, torts and skorts coming up.
Paris Martineau [02:20:35]:
Well, not skorts. Because it's going to be cold.
Leo Laporte [02:20:38]:
Yeah, no skorts. Ladies and gentlemen, our picks of the week are coming up momentarily. But first, a word from our sponsor. You're watching Intelligent Machines, brought to you by Monarch. Oh, I love Monarch. I use Monarch all the time. Wouldn't it be nice to feel confident and organized in your finances with Monarch? Monarch is an all-in-one personal finance tool that takes your entire financial life and brings it together in one clean interface on your laptop. And I have it on my phone right now. Just for our listeners,
Leo Laporte [02:21:14]:
Monarch is offering 50% off your first year. Just use the code IM at monarch.com. And do start at the website; that's the best way to set up Monarch. It's the easiest. And then you can use it on other devices. That's the nice thing: your entire financial life is available to you no matter where you are. Monarch is built for people with busy lives. And if, because you're busy, you've put off organizing your finances, Monarch is for you.
Leo Laporte [02:21:43]:
Monarch does the heavy lifting. It's very simple. I link all my accounts. Just took a few minutes. They do it very securely. And then you're going to get clear data, visualizations, you know, graphs. You're going to get smart categorization of your spending. I really like that feature where it knows it does budgeting for me.
Leo Laporte [02:22:03]:
I don't have to think about it. So I know exactly what I've spent on each category: how much I spent eating out each month, how much I spent on capes each month, you know, that kind of thing. What's my hat budget for this year? Real control over your money. You'll never need to touch a spreadsheet again. No more entering by hand. I used to do this back in the day. That's what you used a computer for.
Leo Laporte [02:22:25]:
You would get your checking account statement, you'd enter in each transaction. You don't have to do that anymore. You link the accounts, you're done. And you always know exactly what your net worth is. You know where your money's going. It's not just another finance app. It's a tool not only used by people like me, but real professionals. Financial experts love and use Monarch.
Leo Laporte [02:22:46]:
Wall Street Journal named it the best budgeting app of 2025. I almost don't want to call it a budgeting app; that's just one of the many things it does. Forbes said it's the best app for couples. You know, they have a really nice feature where you can invite your partner to have access to your stuff, and you control what they can see and can't see. It is really important for couples to have those financial conversations. Often, you know, people avoid that. This is a great solution. Named in CNBC's top fintech companies in the world.
Leo Laporte [02:23:17]:
And by the way, you're going to want to follow the Reddit community. A very passionate community, 34,000 users. And it's more than just people talking about Monarch, giving you ideas about how to use it. They're there, they shape how the product is developed. The Monarch people pay close attention. Money can really become a problem with couples, but Monarch brings them together. It gives your partner full access, if you wish, to your shared dashboard, including linked accounts, budgets, goals, spending activity, all in one place. No drama, no, you know, no heaviness, just information.
Leo Laporte [02:23:52]:
And it's no extra cost for you to add your partner. You can even give access to your financial advisor, also at no extra cost. That's really handy. Don't let financial opportunities slip through the cracks. Use the code IM at monarch.com in your browser for half off your first year, 50% off your first year. Monarch.com. The code is IM. Really a great tool.
Leo Laporte [02:24:19]:
I just opened it just now to see what my stocks are doing. Monarch.com. All right, ladies and gentlemen, time for. Are you asking Paris where to go for pizza in New Haven?
Paris Martineau [02:24:39]:
I'm asking. Yeah. Where should I go?
Leo Laporte [02:24:42]:
Don't ask AI, man. You've got where-should-I-go experts on this show. Where should I.
Paris Martineau [02:24:47]:
Go in New have than once I'm.
Leo Laporte [02:24:49]:
There's no question. You should go to Pepe's. P-E-P-E's. Pepe's tomato pies are the best pies.
Jeff Jarvis [02:24:55]:
If you're in the mood, get the clam pie. Get the clam pie.
Leo Laporte [02:24:59]:
Exactly. If you're in the mood, get the clam and garlic pie. One of the great things is you eat it one evening, it stays with you for days. Pepe's was founded in 1924. It's the original pizza place. There's another place you should go if you can spend a little more time in New Haven. It's called Louis' Lunch, and it's mentioned in the Whiffenpoof Song: from the tables down at Mory's to the place where Louis dwells.
Leo Laporte [02:25:25]:
It is the birthplace of the hamburger. Established in 1895. It'll be the best. Look at these. These are the grills on the left.
Paris Martineau [02:25:33]:
Okay, that's. Actually I might do that instead.
Leo Laporte [02:25:35]:
It's. It's better than pizza. Well, no, it's not. Nothing's better than pizza.
Jeff Jarvis [02:25:39]:
But I think you're going to get complaints for having just said that, Leo.
Leo Laporte [02:25:42]:
It is an amazing. What's funny is they grill it in these grills that have been there since 1895. Gas grills. They're grilled sideways, served on toast, and they put Velveeta on it. And it's still the best thing.
Paris Martineau [02:25:54]:
I'm excited by this already. I'm excited. I was sold once I saw that their website says birthplace of the hamburger sandwich.
Leo Laporte [02:26:02]:
Yes.
Paris Martineau [02:26:04]:
Language I need.
Leo Laporte [02:26:05]:
It's not on a bun at Lou's. I would.
Jeff Jarvis [02:26:08]:
Is a hamburger a sandwich?
Leo Laporte [02:26:10]:
Yeah.
Paris Martineau [02:26:11]:
Yeah.
Leo Laporte [02:26:11]:
I mean, it is. The menu is very simple.
Paris Martineau [02:26:14]:
What would a hamburger be if not a sandwich?
Jeff Jarvis [02:26:16]:
It's a hamburger. It's his own.
Benito Gonzalez [02:26:18]:
It's a subcategory. It's a subcategory.
Paris Martineau [02:26:20]:
It's a subcategory of sandwich.
Leo Laporte [02:26:22]:
Look at this. It's on a piece of white bread. It is a sandwich.
Paris Martineau [02:26:25]:
It's like saying a grilled cheese isn't a sandwich.
Benito Gonzalez [02:26:27]:
Okay, then what is a sandwich, Jeff? Is a BLT not a sandwich? Because it's a. I don't know, whatever.
Jeff Jarvis [02:26:32]:
No, if you're between slices of bread, it's a sandwich. If you're in a bun, it's not a sandwich.
Leo Laporte [02:26:36]:
So Lou is in a bun. This is, by the way, the worst picture. I don't know why it's on their website.
Paris Martineau [02:26:44]:
That it's horrible is really compelling to me. The fact that the website is terrible is a real feather in my cap.
Leo Laporte [02:26:50]:
It is, it is. Here's the. Here's the menu. You get a burger, potato salad, homemade pie, Poland Spring water, Pepsi, Diet Pepsi, Snapple, or Foxon Park soda. Pepsi, Pepsi.
Leo Laporte [02:27:07]:
No, no, no. No Coke, Pepsi, Pepsi. Cheeburger, cheeburger, cheeburger.
Jeff Jarvis [02:27:12]:
But the meat, she has no idea.
Leo Laporte [02:27:15]:
They actually.
Jeff Jarvis [02:27:16]:
John Belushi. Is this from that bit on Saturday Night Live? No.
Jeremy Berman [02:27:20]:
Okay.
Leo Laporte [02:27:22]:
Many years ago.
Jeff Jarvis [02:27:23]:
Anyways, I used to go there all the time in Chicago, across from the Chicago Tribune, in the Second City.
Leo Laporte [02:27:28]:
Oh, that was a real place.
Jeff Jarvis [02:27:30]:
Oh, yeah, absolutely. Yes.
Leo Laporte [02:27:32]:
Great. Anyway, Louis' Lunch and then Pepe's pizza. If you go, Pepe's usually has a line. There's also John's. A lot of people like John's. I think Pepe's is the original. And Jeff, who is a pizza expert, do you agree?
Jeff Jarvis [02:27:44]:
I say I judged it.
Jeremy Berman [02:27:45]:
I didn't.
Jeff Jarvis [02:27:46]:
It didn't win my contest.
Leo Laporte [02:27:47]:
But who won your contest?
Jeff Jarvis [02:27:50]:
Geno's East Chicago.
Leo Laporte [02:27:51]:
Yeah, I like Chicago style. This is not that. This is the opposite of it. Napoletana. This is a Naples-style, traditional pizza. It's incredible. They don't actually put tomato sauce on it unless you order it.
Paris Martineau [02:28:07]:
Interesting.
Leo Laporte [02:28:08]:
Yeah. Oh, but. Oh, my God. Let me see if I can find.
Paris Martineau [02:28:13]:
You can't be showing this to me at this hour of the evening.
Leo Laporte [02:28:16]:
I know. This is cruel.
Paris Martineau [02:28:18]:
This is really.
Leo Laporte [02:28:19]:
They don't even have it on their menu, because you have to know. You've got to know.
Jeff Jarvis [02:28:23]:
Oh, really?
Leo Laporte [02:28:24]:
Well, I don't know. Clam and garlic is the one.
Paris Martineau [02:28:27]:
Pizza. I was eating egg rolls throughout this entire recording, but now I'm hungry.
Leo Laporte [02:28:31]:
So let me tell you about this pizza. They cook it in a wood fired brick oven. It's wood, and it is burned on the bottom. You know how it gets little burnt bubbles on the bottom? It makes it so good. Oh, my.
Paris Martineau [02:28:43]:
You guys are crispy pizza folk or chewy pizza folk.
Leo Laporte [02:28:46]:
This is chewy. It's not crispy, but thin. Thin crust.
Paris Martineau [02:28:50]:
Okay.
Leo Laporte [02:28:51]:
Yeah.
Jeff Jarvis [02:28:51]:
Paris, are you a crispy or a. I'm a crispy. Oh, I'm more sure.
Leo Laporte [02:28:56]:
Crispy. No, no, no, no, no, no.
Paris Martineau [02:28:58]:
I think I've said this view on the podcast before. I think that everything should have a little bit of crunch in it.
Jeff Jarvis [02:29:05]:
I mean, then you can't fold it the way a New Yorker does.
Paris Martineau [02:29:09]:
Well, that's the thing. I guess it's fine. Yeah, you do fold it, but it kind of like snaps.
Jeff Jarvis [02:29:14]:
Yeah, and then everything drips through it. No, it's not.
Leo Laporte [02:29:16]:
Oh, that's because you grew up in Florida. I'm sorry. This is not okay. You. You. We gotta. We gotta fix this.
Paris Martineau [02:29:22]:
I. Most of the pizzas I eat and enjoy are not crispy. But I'm gonna say, when people say chewy, I think of like a thicker pizza, and I don't.
Jeff Jarvis [02:29:31]:
No, it doesn't have to be.
Leo Laporte [02:29:32]:
No, no, no. This is thin. The crust. The crust is chewy. The crust has a wonderful chew to it and a crunch.
Jeff Jarvis [02:29:39]:
I want the kind of artsy pizza that I. Brick oven or. Or pizza.
Paris Martineau [02:29:43]:
I guess I live near a lot of fancy pizza places and I. I eat well.
Leo Laporte [02:29:48]:
No fancy pepes.
Jeff Jarvis [02:29:51]:
All right, Paris, have you had John's?
Leo Laporte [02:29:56]:
Next door to this famous sandwich shop they call Salt Hank's. Have you ever been there?
Paris Martineau [02:30:00]:
Oh, no. Oh, yeah. Yeah. Back in the day. It's fine. Yeah, Yeah.
Leo Laporte [02:30:04]:
I got to come. I got to just fly out for a.
Paris Martineau [02:30:06]:
Why don't you come hang out with us?
Jeff Jarvis [02:30:08]:
We should fly out, do the Amazon.
Paris Martineau [02:30:11]:
Week, and then get our little studio and then we could all. What if we all did a show in person?
Leo Laporte [02:30:15]:
We did that once, remember, Jeff? We did it at.
Jeff Jarvis [02:30:19]:
Yes, we did.
Leo Laporte [02:30:21]:
Yeah, it was with Gina. Right.
Paris Martineau [02:30:24]:
I could bring props. We could do prop comedy.
Leo Laporte [02:30:27]:
That would be fun. That's very tempting. Where can we get a studio? Okay. Anybody?
Paris Martineau [02:30:32]:
What if we did. What if we did a New York City live show?
Leo Laporte [02:30:35]:
Oh, I would love to do that.
Paris Martineau [02:30:37]:
That'd be so fun.
Leo Laporte [02:30:38]:
We get five or six people. It'd be so amazing.
Paris Martineau [02:30:41]:
It'd be great.
Leo Laporte [02:30:42]:
We could. We could sell out Lou's lunch.
Paris Martineau [02:30:46]:
More than Lou's lunch.
Leo Laporte [02:30:50]:
All right, Paris, your pick of the week.
Paris Martineau [02:30:52]:
For once it's reversed between me and Jeff. I've got like four picks of the week and Jeff only has one. Leo's immediately gone to the strangest one, which is another thing I was thinking of stopping at on my trip, which is this thing called the Dog Chapel at Dog Mountain in Vermont. In 1998, a folk artist died and came back to life five minutes later. Upon recovery, he stated that the near-death experience had "a profound effect on me as an artist." He realized that he had to build a chapel, one that celebrated the spiritual bond we have with our dogs, and it would be open to dogs and people of any faith or belief system.
Paris Martineau [02:31:33]:
It's apparently a chapel just filled with notes from people about their dogs.
Leo Laporte [02:31:38]:
I like the sign up front that says: welcome all creeds, all breeds, no dogmas allowed.
Paris Martineau [02:31:45]:
Kind of cute, right?
Leo Laporte [02:31:46]:
Yeah. I think this sounds. It's very sweet when you get to these places though, because you've done this before. Do you get there and go, okay.
Jeff Jarvis [02:31:53]:
Okay, I'm here now.
Paris Martineau [02:31:55]:
I get there, I go around. I probably going to spend a good 15, 20 minutes reading all the different notes. I'll sit in there, appreciate the space, look around a bit, and then carry on my merry way.
Leo Laporte [02:32:06]:
You know what's only eight miles away? The American Society of Dowsers. Oh, the national headquarters. What is a dowser? Oh, see this guy, he's holding some twigs in his hands. Actually, he looks like he's holding plastic rods, but normally you would do this with, you know, like hickory twigs. And he walks around, he helps you find where to dig your well.
Paris Martineau [02:32:31]:
Oh, I'm very interested in that.
Leo Laporte [02:32:32]:
Yeah.
Jeremy Berman [02:32:33]:
The miracle water work.
Leo Laporte [02:32:34]:
Yeah. Well, sometimes they call it water witching. You use a device, usually kind of like forked sticks, and as you walk around, suddenly it starts twitching and points down at the source of the water, and that's where you dig the well.
Paris Martineau [02:32:49]:
Oh, I love that. Yeah, I will go to that as well.
Leo Laporte [02:32:53]:
Yeah. 2,000 people from all across the United States are dowsers. The American Society of Dowsers has its annual convention there every year.
Paris Martineau [02:33:04]:
That's delightful.
Leo Laporte [02:33:05]:
There's a small next to the parking area you can use to practice your dowsing.
Paris Martineau [02:33:11]:
What a wonderful world this is.
Leo Laporte [02:33:13]:
It is. It is.
Paris Martineau [02:33:16]:
So Webster's.
Jeff Jarvis [02:33:17]:
Webster's is in Springfield, Mass. I think. Historic building. The American Antiquarian Society.
Leo Laporte [02:33:28]:
Are you using Atlas Obscura to plan this trip? That's where the dog chapel is from.
Paris Martineau [02:33:31]:
Yeah, it's one of the main things I look at.
Leo Laporte [02:33:35]:
It's a pretty cool website.
Paris Martineau [02:33:36]:
It's a very cool website. They've got a lot of interesting things on there, just sort of strange stuff to stop by, that I plug in whenever I'm planning, especially a trip like this that I decided on like two days in advance.
Jeff Jarvis [02:33:49]:
What's the destination in Vermont?
Paris Martineau [02:33:52]:
The Bread and Puppet Theater, which is a famous puppet museum and puppet theater in a 150-year-old barn that I've always wanted to go to, because I've long admired their work and art and kind of their. It's almost like a kind of neo-futurist approach to stuff, and I don't know, it seemed like the time to do it. The other pick of the week is I have got a master list of Nick Cage films online.
Jeff Jarvis [02:34:29]:
Have you already done any? Do reruns count?
Paris Martineau [02:34:32]:
They can be reruns if you want. I've watched Face Off. I've seen Matchstick Men so far. I need to figure out what I'm gonna watch tonight after we get done with this podcast. But I've got all of the films I think can and should count for it in this list, if you want to Nickvember along. It's 126 films. Great. I don't know.
Paris Martineau [02:34:56]:
It's a lovely endeavor.
Leo Laporte [02:34:58]:
Now that I know. And should I not share this? But now that I know your handle on Letterboxd, I can follow you, because I want to know what the best Nickvember movies to see are. What are you watching tonight?
Paris Martineau [02:35:16]:
I don't know. I haven't decided yet. I might watch Red Rock West.
Leo Laporte [02:35:19]:
I didn't even know he was in Brubaker.
Paris Martineau [02:35:22]:
Yeah, he's uncredited in that. Oh, that was a bit of a. I've gone through a couple different lists, and I'm making some editorial judgments in my list, as you should. Like if it's a small feature, or maybe not his participation, but it's about him in some way, that can count as a Nickvember pick. But Brubaker is technically his first film appearance.
Leo Laporte [02:35:52]:
Oh.
Paris Martineau [02:35:53]:
But uncredited as. Where is he?
Leo Laporte [02:35:57]:
So you blink and you'll miss him, in other words.
Paris Martineau [02:36:00]:
Yeah. I believe he's like a guy in a car somewhere.
Leo Laporte [02:36:02]:
I wanted to see the new one that he did, which was kind of a parody of himself. I mean, I did see it. I really liked it.
Paris Martineau [02:36:08]:
Unbreakable, or the. What's it called?
Leo Laporte [02:36:12]:
It's a funny name.
Paris Martineau [02:36:14]:
The Unbearable Weight of Massive Talent.
Leo Laporte [02:36:17]:
Yes. It is quite funny and good. He's. He's making fun of himself, which is.
Paris Martineau [02:36:23]:
I'm really excited because this is the first Nickvember where a Nick Cage movie is being released during the month.
Jeff Jarvis [02:36:32]:
Whoa.
Paris Martineau [02:36:35]:
I'm kind of a scaredy cat, but given everything, I've got to go see it in theaters. I'm gonna go see The Carpenter's Son, which releases on November 14, which is a horror movie where Nick Cage, I think, plays Jesus's uncle.
Leo Laporte [02:36:51]:
I was gonna say, if he's the carpenter's son, it's Jesus's baby boy.
Paris Martineau [02:36:57]:
Yeah.
Leo Laporte [02:36:59]:
Which very few people know about.
Paris Martineau [02:37:03]:
The. The only thing there's no.
Leo Laporte [02:37:07]:
It's kind of the untold story of the whole thing. Yeah.
Paris Martineau [02:37:10]:
It features Nicolas Cage as the Carpenter, so I guess he's Jesus. FKA Twigs as the mother.
Jeff Jarvis [02:37:17]:
The carpenter would be Joseph.
Paris Martineau [02:37:18]:
Joseph.
Leo Laporte [02:37:19]:
No. Jesus was a carpenter.
Paris Martineau [02:37:21]:
Wasn't he a carpenter?
Leo Laporte [02:37:22]:
Yes.
Jeff Jarvis [02:37:23]:
Right.
Paris Martineau [02:37:23]:
Some guy named Noah Jupe as the boy. And that's it. The only other details in the Wikipedia are that during filming, Cage was reportedly attacked by a swarm of bees in one of the caves intended to be a filming location.
Leo Laporte [02:37:38]:
Are Karen and Richard in the movie, too? I thought it was the Carpenters biopic.
Jeff Jarvis [02:37:47]:
No idea what you're doing. Sorry.
Paris Martineau [02:37:48]:
Yeah.
Leo Laporte [02:37:48]:
Have you ever heard of the Captain and Tennille? All right, thank you, Paris, for those picks. Letterboxd. Can we say who you are on Letterboxd, or is that a secret?
Paris Martineau [02:37:59]:
Yeah, follow me. On letterboxd. I'm the Void.
Leo Laporte [02:38:02]:
The Void. The Void.
Paris Martineau [02:38:04]:
I'm not responsible for it. I use crass language on my Letterboxd, and many of the jokes will not make sense to you guys, but those are my top four favorite movies right now. All of which rule.
Leo Laporte [02:38:20]:
Jeff Jarvis, your pick of the week.
Jeff Jarvis [02:38:22]:
Simple one. Since the UK started requiring age verification, Pornhub's UK visitors are down 77%.
Leo Laporte [02:38:31]:
So it works well.
Jeff Jarvis [02:38:33]:
Yeah, but this is.
Leo Laporte [02:38:35]:
Or maybe they have the same amount of visitors, but they're using VPNs and appear to be coming from somewhere other than.
Jeff Jarvis [02:38:41]:
Plus, Pornhub argues that people are going to the places that aren't trying to comply like they are.
Leo Laporte [02:38:46]:
Yeah, that's a good point.
Jeff Jarvis [02:38:48]:
And that there are plenty of other.
Leo Laporte [02:38:50]:
Places you can go that don't adhere to the law.
Jeff Jarvis [02:38:53]:
Cyber News counted more than 10.7 million downloads of VPN apps in the UK across 2025.
Leo Laporte [02:39:01]:
Wow.
Jeff Jarvis [02:39:03]:
They said. Where's the other number I wanted? There's something like 24. So there's some huge number of porn sites, and they can't do them all. But Pornhub is visible, so they're the ones who are under the thumb. It's a violation of people's freedom of.
Leo Laporte [02:39:20]:
Don't say it. Don't say it. Jeff Jarvis. He is professor emeritus of journalistic innovation at the Craig Newmark Graduate School, City University of New York. He's also at Montclair State University and SUNY Stony Brook. Author of The Gutenberg Parenthesis, shut me up, and Magazine. Thank you, Mr. J.J.
Leo Laporte [02:39:43]:
appreciate it. Appreciate you. My friend Paris Martineau is an investigative reporter at Consumer Reports, where she specializes in food safety but covers many other topics. And she is excellent at making spooky sounds. We will see you both next week. We do Intelligent Machines every Wednesday right after Windows Weekly. That's 2pm Pacific, 5pm Eastern, 2200 UTC. Next week, the founder of Intelligent Internet, II, Emad Mostaque.
Paris Martineau [02:40:26]:
They finally made one.
Leo Laporte [02:40:28]:
Yeah, in a couple, actually. Is that. Is that. No, no, I think I'm wrong. We're recording that Kevin Kelly's.
Jeff Jarvis [02:40:38]:
Kevin Kelly's next week.
Leo Laporte [02:40:39]:
Yeah. Oh, I'm excited about that. I love Kevin Kelly. Great journalist, writes for Wired, but he's been around for a long time. He was associated with Stewart Brand and the Whole Earth Catalog. Jimmy Wales is coming up, the founder of Wikipedia. Looking forward to talking to him. Dr.
Leo Laporte [02:40:58]:
Anthony Vinci, author of The Fourth Intelligence Revolution. Robert Seeger, Pliny the Liberator's Discord moderator. He's an expert on prompt injection and jailbreaking. C.J. Trowbridge from Claude. We have some great guests coming up, so we thank you so much. Yeah.
Jeff Jarvis [02:41:21]:
Good work, guys.
Leo Laporte [02:41:22]:
Yeah. And thanks to Jeremy Berman for joining us. Very excited about the work he's doing at Reflection AI. 2pm Pacific. We'll be back next Wednesday. You can watch us live on YouTube, Twitch, Facebook, LinkedIn, X.com, and Kick. Of course, if you're in the club, and I hope you are, you can watch us in the Club TWiT Discord. That's a great place to hang out. Lots of fun people in there, lots of smart people.
Leo Laporte [02:41:45]:
Friday is our AI user group. And of course, as Paris mentioned, don't forget, we are going to be returning to the corn maze as we continue our D&D adventure. Paris will be there. I'll be there. Paul Thurrott, Jonathan Bennett, Jacob Ward. The Horror in the Cornfield is November 17 at 2pm. Will we escape?
Paris Martineau [02:42:07]:
Who's to say?
Leo Laporte [02:42:08]:
We, you know, we. We barely killed that plow and wasn't there like a.
Paris Martineau [02:42:14]:
There was a. There was a scythe that I killed in one hit and then everybody else really struggled with the plow.
Leo Laporte [02:42:20]:
She has amazing dice skills. She's really good.
Paris Martineau [02:42:23]:
So I'm rolling those dice like nobody's business.
Jeff Jarvis [02:42:26]:
You're not convincing me to play games here.
Leo Laporte [02:42:29]:
Oh, it's so much fun. I. I can't wait. That's. That's coming up. We're going to do more of that stuff. That's why you join the club. Of course you get ad free versions of our shows.
Leo Laporte [02:42:36]:
You're supporting the work we do, which I think is really important. But at the same time, there's a lot of fun too. twit.tv/clubtwit. If you have not yet joined Club TWiT, please, we want to have you in the club. After-the-fact, on-demand versions of this show, ad-supported of course, at twit.tv/im. You can also watch on YouTube, and you can subscribe in your favorite podcast player. Audio or video or both.
Leo Laporte [02:43:00]:
And if you leave us a nice review, maybe Paris will read it next time on Intelligent Machines. Thank you, everybody, for being here. Have a great week. We'll see you next time. Bye-bye. I'm not a human being.
Paris Martineau [02:43:13]:
Not into this animal scene. I'm an intelligent machine.