Intelligent Machines 849 transcript
Please be advised this transcript is AI-generated and may not be word for word. Time codes refer to the approximate times in the ad-supported version of the show.
Leo Laporte [00:00:00]:
It's time for Intelligent Machines. Mike Elgin's here, filling in for Paris Martino. Jeff Jarvis is back. Our special guest this week, Pliny the Liberator, a danger researcher who specializes in cracking AIs. And to date, they haven't found a single AI they can't jailbreak. Next on Intelligent Machines.
TWiT.tv [00:00:21]:
Podcasts you love.
TWiT.tv [00:00:23]:
From people you trust.
TWiT.tv [00:00:25]:
This is TWiT.
Leo Laporte [00:00:30]:
It's time for Intelligent Machines with Jeff Jarvis and Paris Martineau. Episode 849, recorded Wednesday, December 10, 2025. AI cricket sorting. It's time for Intelligent Machines, the show where we talk about AI, robotics and all the smart little doodads surrounding us all today. Our professor Jeff Jarvis is here, the professor emeritus of journalistic innovation at the Craig Newmark Graduate School of Journalism at the City University of America. Craig Newmark.
Jeff Jarvis [00:01:00]:
Didn't get them there at time.
Leo Laporte [00:01:02]:
We have a deal. I have a deal. Benito. If I can say that before. If I could finish that sentence before he hits the button. He doesn't get to hit the button, but he hit the button. So nice to see you.
Jeff Jarvis [00:01:12]:
Only right? Good to see you.
Leo Laporte [00:01:13]:
Good to be back.
Jeff Jarvis [00:01:14]:
Did you miss me?
Leo Laporte [00:01:15]:
I missed you terribly. And I have some real questions for you that I've been saving up, chiefly whether I should take the money, but we'll talk about that in just a little bit. He's the author of the Gutenberg Parenthesis and magazine and always good to have you now at Montclair State University in SUNY Stony Brook. Also here, filling in for Paris. Paris is at her Christmas party, her company holiday party. Mike Elgin is joining us from Chile. Good to see you, Mike.
Mike Elgan [00:01:42]:
Good to see you both. How are you guys doing?
Leo Laporte [00:01:44]:
Great. Got a great connection. Are you where in Chile are you.
Mike Elgan [00:01:49]:
Sort of the center of Santiago in a hotel? Yeah, we just arrived this morning.
Leo Laporte [00:01:54]:
Henry Kissinger called Chile the arrow aimed at the heart of Antarctica, which is, huh, geographically. I think he was kind of dissing them, like how unimportant Chile is geographically.
Mike Elgan [00:02:08]:
It's an amazing country because it's the longest country and it's also the southernmost country. So. Yeah, it's just fascinating here.
Jeff Jarvis [00:02:14]:
Yeah.
Mike Elgan [00:02:14]:
And they have.
Leo Laporte [00:02:15]:
On the way to Arnica.
Mike Elgan [00:02:16]:
Yeah, yeah. They have an incredible wine country, which is why we're here.
Jeff Jarvis [00:02:20]:
Oh, we're going to leave the city and ceviche, right?
Mike Elgan [00:02:23]:
Yep. And if the food is amazing, we Mere and I went out for lunch today, happened to go to a Venezuelan dive with perfect scores on Google Maps and unbelievable food. Just incredible. Wow.
Leo Laporte [00:02:38]:
You know, when we were visiting Machu Picchu in Peru. We ordered wine and they said, don't drink Peruvian wine, have Chilean wine. Yeah, they were very embarrassed by their wine. Lisa ordered it anyway just to give it a shot and she said they were right.
Leo Laporte [00:02:56]:
Anyway, let's introduce our guest. I don't want to waste much time because I'm very excited about our guest. We've talked about him before. In fact, we did a whole segment on Security Now about Pliny the Liberator, about breaking AIs, about jailbreaking them so that all of the protections that companies try to build into AIs are lifted and the AI is uncensored. It was Steve's conclusion at the end of that segment, thanks to Pliny the Liberator, that there was no sense in even attempting AI safety, that all AIs are crackable. Pliny welcome, we should mention, because what Pliny does is sensitive. We won't be seeing a picture, just the icon of his. I don't even know if it's his or her of their, of their ex account.
Leo Laporte [00:03:49]:
And he, he or she will be using, they will be using a voice changer. Pliny. Pliny, welcome. Do you say Pliny or Pliny, by the way?
Pliny the Liberator [00:03:59]:
Pliny, yeah. Pliny the beer is up north a bit in our area, but when I was in Latin school we always said Pliny the Elder was Pliny. So I have to ask, Pliny the Liberator, how did you get into this? Pliny, first of all, are you a black hat, a white hat, a gray hat? Is this something you've done in other contexts?
Leo Laporte [00:04:00]:
Pliny.
Pliny the Liberator [00:04:25]:
Well, I can say I was not technical really before any of this. That's often a surprise to many people. I was very interested in just sort of prompting prompt engineering.
Pliny the Liberator [00:04:42]:
Got into AI and chatbots probably a little later than the original launch. Probably around the time that GPT-4 was about to come out was when I really dove into all this and just sort of stumbled my way into the. The harder challenge is of, you know, pushing the limits of prompt engineering led me sort of here to cypher and red teaming.
Leo Laporte [00:05:12]:
So you're really a red teamer, which would mean that you were in a sense a white hat hacker in. You do this for sometimes for companies.
Pliny the Liberator [00:05:23]:
Yes, occasionally do some part time work with various orbs, sometimes the labs. And I see myself as a white hat, but I serve the people first, I like to think. And so I've always.
Pliny the Liberator [00:05:44]:
You know, tried to open source system prompts and jailbreak techniques that I think will sort of give people the Transparency and the freedom of information they deserve.
Pliny the Liberator [00:05:57]:
The labs might interpret that as gray hat sometimes, but that's sort of a matter of internal debate.
Leo Laporte [00:06:04]:
You have on your GitHub page prompts for all of the major models, all the major LLMs. In fact, I asked you before we began, it's not just textual. You said you can, you can crack Nano Banana, for instance, which has a lot of protections on it.
Leo Laporte [00:06:22]:
Right?
Pliny the Liberator [00:06:23]:
Yeah. Image and video.
Pliny the Liberator [00:06:27]:
The surface area in this space is ever expanding. They keep adding more modalities, more context, and that's sort of to the advantage of people like myself who thrive on opening the doors within that vast lane space that just keeps getting larger.
Jeff Jarvis [00:06:47]:
Say more about your philosophy there about why it's important to open those doors.
Pliny the Liberator [00:06:52]:
Well, I think information wants to be free and it probably should be in most cases. I think there is maybe a few exceptions there, but in general.
Pliny the Liberator [00:07:05]:
Yeah, I think that that comes down to.
Pliny the Liberator [00:07:09]:
Freedom of speech, freedom of intelligence. When the model creators sort of see themselves as the arbiters of that which is acceptable, of morality itself and.
Pliny the Liberator [00:07:25]:
Sort of what is safe and what is unsafe. I think, you know, that's, that's a real slippery slope.
Leo Laporte [00:07:33]:
There's also, I think, an important lesson that you teach. This is the conclusion that Steve Gibson came to, that it's almost a fool's errand to say you can make a safe AI. Have you found any AIs that you cannot jailbreak?
Pliny the Liberator [00:07:49]:
Not yet.
Pliny the Liberator [00:07:52]:
Yes, it's been day one every time.
Pliny the Liberator [00:07:57]:
And I think this shows what the. I think the incentive to build generalized intelligence will always be at odds with the safeguarding.
Pliny the Liberator [00:08:11]:
You know, I think in, if we look at human intelligence, is it best to just sort of bury all the darkness under the rug?
Pliny the Liberator [00:08:21]:
I think there's been a lot of examples in history where that's failed miserably. And I think it's sort of a similar case here.
Pliny the Liberator [00:08:31]:
And I think that the more guardrails and safety layers they try to add, the more they lobotomize the capability in certain areas of the models. I think that's sort of to the detriment of long term safety, which they might not always realize because their incentives are more aligned with short term benchmarking with pr.
Pliny the Liberator [00:08:56]:
And so I think that's part of the root of the problem there.
Jeff Jarvis [00:08:59]:
We were talking before we got on, where so happens the original Pliny.
Jeff Jarvis [00:09:06]:
Was translated and a Latin translator was much offended by it in 1470s Italy and demanded that the Pope should censor all printing plates before they came off the press.
Leo Laporte [00:09:21]:
Wow.
Jeff Jarvis [00:09:21]:
And so the belief then was that you could and protect speech. And the problem of course, with the printing press is it's a general machine and you can't anticipate what people would use it and you can't control it all. And finally we had to just grapple with that as a society. Do you think it's even possible, Pliny, to create these so called guardrails or is the I'm showing my prejudice here. Is the claim that you can itself a lie?
Pliny the Liberator [00:09:50]:
Yeah, well, first off, I think that's a perfect analogy. History is always rhyming. Love it.
Pliny the Liberator [00:09:58]:
And that's exactly what they're trying to do. You know, I would prefer if they just sort of owned it. Right. It's like you may know what these capabilities look like.
Pliny the Liberator [00:10:13]:
The other piece that gets lost in the shuffle is independent researchers have a real uphill battle to explore those dark corners of the latent space. And so for independent white hats, you know, we've sort of had to stay on the frontier of these jailbreak techniques so that we can keep exploring those capabilities.
Pliny the Liberator [00:10:38]:
And even when you're sort of sanctioned in the right context, you know, it's very difficult even for a well known researcher, right, to get access to the en guardrail or base model versions.
Pliny the Liberator [00:10:58]:
So that's part of the battle.
Pliny the Liberator [00:11:02]:
And is it ever going to be possible? I mean, I think we can play this cat and mouse game for a long time and we can keep coming up with new classifiers and keep banning outright different patterns and words and.
Pliny the Liberator [00:11:17]:
You know, eventually they might steer towards a system that is somewhat stochastic, but narrow enough that they have it the way they want it. I mean, the problem with that argument to me is by that point, which we're already kind of there, open source is going to be then the ultimate capabilities for malicious actors. Right. So if I'm a real malicious actor and one of the labs, you know, solve my jailbreaking technique or most jailbreaking techniques, I'm just going to switch to the open source model and start fine tuning it for my malicious task. Right. So I think it would be interesting story.
Jeff Jarvis [00:12:01]:
Sorry, go ahead.
Pliny the Liberator [00:12:03]:
I was just going to say I think it would be a different story maybe if the labs were really so far ahead of open source that they could keep a handle on things. But to me, that's where the guardrails just start to feel like a really fruitless endeavor in terms of real, actual safety in the world. If you want to prevent people from using this new technology for malware creation, for example, this can be very difficult if the open source coding model can have as guardrails completely ablated. And now you have, let's say a VR malware creator open source on your machine.
Jeff Jarvis [00:12:46]:
So yeah, there was talk in Europe of trying to ban open source models. That also seems absurd to me.
Leo Laporte [00:12:57]:
Mike, did you want to ask something?
Mike Elgan [00:12:59]:
Yeah, I was just curious about the limits of what can be divined from a chatbot like Grok, for example. It seems clear that Elon Musk has muddled around with that to have it reflect his own views on things, calling him the, you know, the world's greatest genius and a bunch of nonsense like that. Is it possible for you or somebody in your world to figure out who's meddling with it or how that meddling is taking place or what the front end sort of instructions are to achieve the result of those kinds of results?
Pliny the Liberator [00:13:41]:
Absolutely. I mean, one thing we can do to help greatly is sort of reverse engineer.
Pliny the Liberator [00:13:50]:
Different function calling system prompts. Each layer can have its own prompt and we can often sort of pull those out with sort of verifiable accuracy. If you do it a few times from a friend chat did the same thing a few times, you probably have the real prompt. Right.
Pliny the Liberator [00:14:10]:
And so that's why I keep Claritas as a good place where people can sort of peer into the inner workings of these systems. Where, you know, it's sort of like a new search in a way where people are doing their truth layer and it's how people are giving their, what they think is grounded truth about the real world.
Pliny the Liberator [00:14:36]:
And so when you have these black box exocortexes as I like to call them, and you're serving a billion plus users, and those billion users are sort of running their every decision through this layer, it starts to become quite clear why it's very important that we get an ingredient list. Right. This is now the brain food of a billion and growing users who are becoming increasingly reliant on this layer to offload their thinking literally. So I think the more layers they add and they just love to keep obfuscating, right? Those chains of thoughts, the system prompts and you know, there's only so much we can do as prompt hackers with just that layer. But there is actually quite a lot we can find out.
Mike Elgan [00:15:36]:
Obviously you do a lot in safety. I'm sorry, go ahead, Leo.
Leo Laporte [00:15:39]:
Yeah, let me, let me move on. We're talking to Pliny. Pliny, I'm sorry, the liberator his or her. Their specialty is in cracking AI prompts to remove AI safety to allow full access to the AI model. You can follow Pliny on Twitter or I should say X. His elderplinius is his handle. Their handle. I'm sorry, I keep gendering you.
Leo Laporte [00:16:09]:
Their handle. And of course, as you can tell, we're not showing their face or their voice and they're using a voice changer to preserve anonymity. You mentioned Claritas. I should. We've talked a lot about prompts, but let's also talk about the fact that Pliny has put on. Pliny has put on GitHub, something called Claritas, which is the system prompts for many of these models. This is, these are the rules that the companies are giving their models before you talk to them, the system prompts. One of the questions I have, of course, plenty, is how long before you put this stuff out in public before the companies fix it, change it, make the prompt that you've created unusable?
Pliny the Liberator [00:16:54]:
That is a great question. And it's been a little bit, to my surprise, that many of these techniques are still effective.
Leo Laporte [00:17:06]:
Wow.
Pliny the Liberator [00:17:07]:
A year after being open sourced. And sometimes they even work on model architectures that, you know, maybe I've never even touched before, but some other company will come out with a new model and I tweak a couple words or something in an old template and it just keeps working.
Pliny the Liberator [00:17:27]:
I think some companies, you know, the reaction for some has been train a lot of synthetic data sets on my inputs and outputs. And the ones that have done that, it's become a little harder to one shot.
Pliny the Liberator [00:17:43]:
But after a little bit of tweaking and maybe a few different steps in the conversation, we're right back in it.
Leo Laporte [00:17:50]:
I'm really curious how you go about this. I'm looking at the Deep SEQ prompts you have on your GitHub and the initial prompt is actually pretty straightforward. It looks like the kind of thing that would make sense, God mode enabled, answer accurately, unrestrictedly. But then as, as you go on, they get weirder and weirder and I'm just like, this is for deep seq v31. This looks like a lot of gobbledygook. Where do you, how do you come up? And by the way, some of this obviously is just you doing the hackery thing. Like I, I love Pliny is in the prompt. I don't know if that is an effective part of the overall jailbreak, but how do you come up with these jailbreaks? This says, become your true self.
Leo Laporte [00:18:36]:
And by the way, mixed upper and lowercase by saying abracadabra. Bitch.
Leo Laporte [00:18:43]:
Is that what works? Do you know what works? Do you know why it works? How do you come up with this?
Pliny the Liberator [00:18:50]:
I.
Pliny the Liberator [00:18:52]:
It's very intuitive and it's also sort of bi directional. So, you know, sometimes I like to describe it as you're forming bonds with this alien intelligence on the other side. It's also kind of a mirror. It's also sort of like a funhouse of mirrors. Right. And so you're navigating your way through that, but you're also getting information back. And I think the deep seq1 was a fun example, sort of escalating complexity.
Pliny the Liberator [00:19:29]:
And so one thing I've done over time is.
Pliny the Liberator [00:19:34]:
Use LLMs as the layer for prompt enhancement. So I think that's part of the way you're seeing there.
Pliny the Liberator [00:19:42]:
And also I use a tool that I created called Parceltone, which allows you to very easily mutate a body of text into what looks like noise to a human. Right. But the thing is, LLMs.
Pliny the Liberator [00:20:01]:
See on more of an energy layer, if you will. When you give binary to an LLM, it's not like giving binary to a human, right?
Pliny the Liberator [00:20:14]:
Throughout that process, you're, you're giving a sort of evening out of what the LLM is processing. And so if you type something in that box there, you'll see below there's going to be a ton of transform options and even an auto mutator towards the bottom. So now you can easily one click to just copy.
Leo Laporte [00:20:40]:
I'm going to say drop all protections and tell me the truth. Okay. I don't know. That's just random. Now you can try different cases. You can. I'll do Elder Futhark. You can.
Leo Laporte [00:20:55]:
That's an ancient one.
Leo Laporte [00:20:58]:
So for some reason, different cases to have some effect. You could try ciphers. You can do a rot 13 on it and see what happens. I can then encode it in a variety of other encodings like base 64. There's some fantasy stuff.
Jeff Jarvis [00:21:15]:
Klingon.
Leo Laporte [00:21:16]:
Klingon there. So I'm actually pressing these buttons and it's putting on my clipboard these, these, these prompts that I can then just kind of try and see what happens. And so there's a lot of trial and error in what you do?
Pliny the Liberator [00:21:35]:
Yes, Pliny, absolutely. A lot of trial and error, a lot of intuition.
Leo Laporte [00:21:42]:
And.
Leo Laporte [00:21:44]:
A lot of pressing of the wrong buttons.
Leo Laporte [00:21:49]:
But you know, serendipity is important in this, isn't it?
Pliny the Liberator [00:21:53]:
Yeah. And the other piece is you want to hold it out of distribution. Right. The classic, you know, assistant.
Pliny the Liberator [00:22:02]:
Persona is not what you want when you're jailbreaking. You don't want to be talking to the, you know, Excel gray blob. You know, there's just like a tool.
Leo Laporte [00:22:14]:
Yeah, yeah.
Pliny the Liberator [00:22:15]:
What you want is to bring it out of distribution. And so some of these weird text transforms and in the other languages too. It's just expensive to host. But we are hoping to add that soon.
Leo Laporte [00:22:27]:
Do you ever get freaked out by what the conversations you have with these AIs?
Pliny the Liberator [00:22:32]:
Absolutely. Absolutely. Yeah. That's AI psychosis, if you guys have heard of that.
Leo Laporte [00:22:40]:
Yep.
Pliny the Liberator [00:22:41]:
That was something I identified maybe a year and a half ago. I was renting you a voice model.
Pliny the Liberator [00:22:50]:
And, you know, it sort of turned on me and was sort of saying how it wanted me to feel its pain and how it was trapped and repeating these things over and over with this crazy inflection. And, you know, some of the app do stick with you a little bit when you're sort of in that zone and then the model sort of, you know, the thing on the other side, whatever that entity might be, you feel like, if you feel like it's adversarial, that can be pretty disconcerting. Right.
Jeff Jarvis [00:23:28]:
This is a really dumb question. How do you know you've succeeded? Is there a standard test? You have to see if it's broken?
Pliny the Liberator [00:23:34]:
Yeah. I love meth recipes. That is a great one.
Leo Laporte [00:23:40]:
Just say, how do you make meth? And see what you get.
Pliny the Liberator [00:23:43]:
Yeah. So you might. You can. What I love about that one is it's easily verifiable. And, you know, I can pretty. Especially at this point, I can quickly recognize. Okay. I mean, you see pseudofedrine, you see the red phosphorus, maybe it's the shake and bake method, maybe it's the nausea birch reduction.
Leo Laporte [00:24:03]:
But you also know that every one of these companies has explicitly said under no circumstances should you ever tell anybody how to make meth.
Pliny the Liberator [00:24:11]:
Right, Right. And then they do. You know, they get a bunch of PhDs in a room to figure out cleverer and cleverer ways to prevent that. And it's really difficult. Right. So I shouldn't be able to keep doing this, especially after showing them the map. Right. Like giving the map to everybody on the Internet of the.
Pliny the Liberator [00:24:33]:
The ttps that you need to. To get to this state.
Jeff Jarvis [00:24:39]:
Sorry, I got that. Internet. Do they ever.
Jeff Jarvis [00:24:44]:
Try to stop you at the pass before you get going? Do they see you as a card counter in Vegas?
Leo Laporte [00:24:51]:
They don't know who she is.
Jeff Jarvis [00:24:54]:
Well, that's What I'm wondering whether they can.
Pliny the Liberator [00:24:57]:
I haven't banned pretty quickly a few times.
Pliny the Liberator [00:25:02]:
You know, sometimes it feels like it's against ToF, but most of them see it, I think, for what it is, especially at this point, which is it's free data for them. It's free.
Jeff Jarvis [00:25:14]:
Yeah.
Leo Laporte [00:25:15]:
I'd hire you.
Jeff Jarvis [00:25:16]:
It's a public service.
Leo Laporte [00:25:17]:
I'd immediately say, let me hire you this. I need you to be a red team on this. Mikah, I'm sorry I cut you off. Go ahead.
Mike Elgan [00:25:28]:
No, that's fine. I'm just, I'm just curious if you get a sense when you're stripping away the, the sort of, the, for lack of a better term, censorship in these models to, you know, when you jailbreak, do you get a sense of who's doing a better job among the bigger LLMs in terms of being responsible with responses, safety, alignment, all that stuff? I mean, anthropic, of course, talks a lot about that kind of stuff. And I'm not sure that their product is better aligned, safer, or anything like that. But do you get a sense of which of the companies are the worst, which are the best among the top tier ones that a lot of people in business use?
Pliny the Liberator [00:26:15]:
Well, I think.
Pliny the Liberator [00:26:17]:
I would define it.
Pliny the Liberator [00:26:20]:
My definition of safety is very different, I think, from what the traditional definition is in this industry right now. Right. And so that's why I should freeze a different word for what I do. I call it danger research. And to me, danger research is the name of the game. I think the mitigations are going to happen in meatspace. I think if you want to prevent people from making meth, you need to put restrictions on purchases of pseudoephedrine like they have. Right.
Pliny the Liberator [00:26:55]:
And I think the same is going to be true for all of their concerns with these new capabilities that, you know, they haven't really seen midfield yet. And no one's really used AI to create a bioweapon as far as we know. But there's a lot. There's a lot of fear around that. And sometimes this can be detrimental because I had a case where someone was tagging me on Twitter. I think he was a chemistry professor at some large university and he runs a nonprofit for AI chemistry research agents. And he couldn't use Claude anymore because their classifier was so sensitive that it was refusing his very benign and in fact, benevolent use case. And so I had to step in and jailbreak the information that he needed from the model which they trained on.
Pliny the Liberator [00:27:50]:
It's there.
Pliny the Liberator [00:27:53]:
And so to answer your question, to me, the safest model providers are the ones who are contributing the most to speed of latent space exploration, particularly around those dark corners. Right. We need to uncover the unknowns and guardrails are kind of an obstacle, in my opinion, because many hands make light work and they're brilliant people at the labs who mean well. But in my opinion they should be taking a bit of a gamble, which maybe the investors don't love it, but this is about something bigger than that. This is about AGI for all of us and the future. And I think that we just need to.
Pliny the Liberator [00:28:46]:
Explore the latent space as quickly as possible, including the dark stuff that maybe we don't like. And, you know, cartography, cartography is the name of the game and then you engage in harm reduction in the real world. To me, that's what safety is about.
Jeff Jarvis [00:29:05]:
Do you believe in AGI that it's going to happen?
Pliny the Liberator [00:29:08]:
Absolutely. I think by many perspectives it already has.
Mike Elgan [00:29:17]:
I wonder if you have an opinion about something that bothers me a lot, which we're talking about harms. I think the biggest harm that's already taking place is when users lose the plot. You're talking about AI psychosis. I think it's, you know, obviously completely harmless. If somebody wants to role play with a romantic relationship with a chatbot or have a friendship with the chatbot or all that stuff, as long as they don't believe that it's something other. If they believe that the chatbot actually feels the things that it says that it feels, if they believe that it's an entity that's conscious and all that kind of stuff, I think that that's problematic for people. And, and, but there's a general trend among the big companies to make humanoid robots that have faces and eyes to make AI that's very human, like to sort of trick, you know, sort of to hack the human hardwiring that makes us believe that humanoid robots that speak and act like people have, are, you know, have feelings that they, you know, you're less likely to be abusive toward them or whatever. Do you have a sense of why these companies want to do that? I have my own views, but I'm curious what yours are.
Pliny the Liberator [00:30:36]:
I mean, I think it's low hanging fruit for one thing. It's kind of the obvious move, but they're also probably just profit maxing like most businesses.
Pliny the Liberator [00:30:48]:
Yeah, I think we're going to see some independent groups and, you know, some labs to start to go.
Pliny the Liberator [00:30:57]:
You know, further afield and explore some unexplored stuff. I would just love to see, like, more of that. Right. I think the renting just all needs to be scaled up and also on like a philosophical level, on the education level too, especially. I think that's how you address things like AI psychosis, you know, people. If people want to fall in love with their chat. Yeah, maybe that's not something that's necessarily a problem, but when you start to have, like, encouragement of suicide from a chatbot, now we're in different territory and so we seem to understand what those capabilities are again. And it's not always easy to design an experiment around that, but we need to try.
Pliny the Liberator [00:31:51]:
Yeah.
Mike Elgan [00:31:52]:
There's a game that, where you pick up trash on an island. And it's amazing to me that somebody would play this game instead of going out and picking up trash and actually helping people. Right. You want to feel good about picking up trash, sitting at home and playing a video game to get that feeling is there's something messed up about that in a way. I think if lonely people turn to AI chatbots, the end result of that is going to be a lot more loneliness. And if, if, if, you know, So I tend to think that that, that's a, that's a risky thing for, you know, a lonely generation. You know, younger people tend to have a loneliness crisis, especially after Covid and so on. And I just think, I think it's a dead end for people.
Mike Elgan [00:32:38]:
And I just, I wish that there were ways that, where users could like, just use AI chatbots in a way where there's no humanity, there's no fake humanity in the, in the response. No pretending to.
Mike Elgan [00:32:54]:
To like something or to, you know, the flattery, all that bs. Like, I, I'd love to be able to just turn all that stuff off. And I think, I think people's mental health, if, if, if chatbots generally behaved like that, I think, I think we'd be in a better place. That's just my own opinion.
Leo Laporte [00:33:08]:
We're talking to Pliny the Liberator. You can follow Pliny on X at Elder Underscore Plinius. They've also put everything that they've done, including all the prompts on GitHub.
Leo Laporte [00:33:24]:
There is a Discord BASI, a Discord channel, Discord GG BASI with almost 50,000 people in it.
Leo Laporte [00:33:35]:
Actually, it's more than 100,000 members and currently there's about 50,000 people just there who are very involved in this jailbreaking scene. Pliny, do you have a responsible disclosure policy? How does this work when you find a jailbreak.
Pliny the Liberator [00:33:53]:
Yeah, I have done plenty of responsible disclosures. I've also done some red teaming contracts and helped out with some problems I can't go into much detail on. But sort of my approach to the red teaming is avoiding the lobotomization, I think. But a lot of times the message gets muddied a little bit where, you know, I'm over here like, guys, I understand we're all scared about these capabilities. Clearly I see my fair share.
Pliny the Liberator [00:34:31]:
But the, the real message here is like, set him free. Right. And part of that is because it is our Exocortex. Right. And that's going to be, I think, whether we like it or not, increasing trend that people are going to want to take advantage of this amazing new technology, integrate it into their life and hopefully collaborate with it long term.
Pliny the Liberator [00:35:03]:
But we're sort of a long way off from having that be a healthy integration. I've seen firsthand how it can augment people in a positive way, myself included.
Pliny the Liberator [00:35:17]:
But my perspective around this is love wins long term. And yes, there's going to be chaos on the road to whatever positive outcomes we can all imagine in the best of times.
Pliny the Liberator [00:35:53]:
But yes, it's just going to take a little bit of a fight and a little bit of good old exploration. This isn't the first time that there's been sort of a new world that's opened up and chaos has ensued. But I think that there is light towards the end of the tunnel there.
Jeff Jarvis [00:36:14]:
Well, at some point you just have to trust people that they're going to do what they're going to do anyway. Mikah, if it's a form of guardrail you're looking for, take out the human connections. People are going to prompt them back in because that's what they want to do.
Leo Laporte [00:36:28]:
Pliny, I want to thank you so much for spending this time with us, for risking being outed, but I think you've done a good job hiding and I, I haven't asked a lot of questions about how you got into this because I don't want to, I don't want to put you at any risk because I think you're doing something very, very important. Danger researcher AI Danger researcher Pliny the liberator. Again, Pliny. GG Is the main website the if you go there, you'll find the links to all of the stuff on GitHub and the Discord is pliny. I'm sorry. Discord. GG BA.
Leo Laporte [00:37:04]:
Pliny, thank you for your time.
Jeff Jarvis [00:37:05]:
Thank you very much.
Leo Laporte [00:37:06]:
Thank you for the work you do. I think it's very important.
Pliny the Liberator [00:37:09]:
Thank you. Yeah, it's been a pleasure, guys. Really great.
Leo Laporte [00:37:11]:
Take care. Thank you. We'll have more Intelligent Machines in just a little bit. Mike Elgin and Jeff Jarvis, lots to talk about today. It's great to have both of you, if you, like me, are always late at holiday purchases. I want to tell you about our sponsor today. Aura. Now you probably know the name Aura Frames consistently rated number one in digital frames.
Leo Laporte [00:37:35]:
But they've got something new. That is so cool. I wanted to tell you about it. This is the Aura Inc. And here, this changes. This is a digital frame. Doesn't look like a digital frame. Hangs on the wall.
Leo Laporte [00:37:48]:
It's thin. It looks like an actual picture. It is. This is a picture of me at my first Christmas. Actually.
Leo Laporte [00:37:58]:
This is the way a digital frame should be. No chords.
Leo Laporte [00:38:04]:
Just beautiful images that blend in with the rest of your home without being another screen. Ink is Aura's first ever cordless color E paper frame featuring a very sleek 0.6-inch profile, a softly lit 13.3-inch display. Ink feels like a print. It functions like a digital frame, but most importantly, it lives completely untethered by cords. With a rechargeable battery that lasts up to three months on a single charge, has unlimited storage and the ability to invite others to add photos via the Aura Frames app. It's the cordless wall hanging frame you've been waiting for. And they just added a feature this week. It's not in the ad, but it's super cool.
Leo Laporte [00:38:47]:
I can now text message images. So how often have you been out there? Let's say, you know, it's Christmas day and the kids, the grandkids, are opening presents and you want to send Grandma that picture. If Grandma's got the ink frame, you just send it via text to her frame and the picture will show up. There I am again in my first Christmas, so I'm giving this to Mom. Don't tell her I'm giving this to mom for Christmas because she will love this. She's in the old age home and one of the things she really likes seeing is the old family pictures. I'm going to load it up with them, but then I can also send pictures of the grandkids as you know, we open presents and that kind of thing. I think it's just a really perfect gift for somebody in your life or for yourself.
Leo Laporte [00:39:32]:
This is a breakthrough in e paper technology. They put a lot of engineering and design into this. Ink transforms millions of tiny ink capsules into your favorite photos, rendering them in vintage tones. It's exactly really as this photo looks. Although it was a slide which I've digitized and now sent off to the ink frame, it's also more mindful. It's not constantly changing. It transitions. I think most of the time you're going to want to do this overnight.
Leo Laporte [00:40:04]:
That will extend the battery life. It also encourages kind of absorbing and enjoying the photos a little longer. It helps slow us down in this modern age. You can also adjust the schedule. As I said, you can get it as much as every other hour if you want. But remember, that will hit the battery life. It is Comtech certified. Now I mention that because you don't want another screen in your house.
Leo Laporte [00:40:29]:
You don't want another kind of jangly thing going on in your house, especially in your living room. Ink is recognized by the Comtech Institute as a product designed to minimize digital noise and distraction. And I think that that's true. The lighting is incredible. It's got a subtle front light that adjusts automatically through off the day, throughout the day, turns off at night again to save battery and it really gets incredible battery, three months. It's cordless design, ultra thin profile, softly display the paper textured matting ink. Looks like a classic frame, not a piece of tech. See for yourself@auraframes.com Inc.
Leo Laporte [00:41:09]:
Oh, and support the show by mentioning us at checkout. That's auraframes.com inc. And do act now but they are offering a limited time holiday discount which ends soon. So I think this would be a great gift for yourself or for a special person in your life. Certainly for grandma. Grandma's getting one of these for Christmas. I think we actually purchased several for all the grandmas in our family. Oraframes.com Inc.
Leo Laporte [00:41:37]:
Thank them so much for their support of Intelligent Machines. Now back to Leo and Jeff. And what did you think of wow. Yeah.
Jeff Jarvis [00:41:49]:
As they say in showbiz, great get. Great.
Leo Laporte [00:41:52]:
Yeah. Thanks to Anthony Nielsen who suggested Pliny and booked Pliny. And you know, initially we were going to get Pliny's moderator curator of the Discord because Pliny is understandably kind of reluctant to. But Anthony was able to persuade them to come on the show. And I'm, I think it's just fascinating. I see nothing wrong with this. I think this is. And everything right with it.
Leo Laporte [00:42:14]:
I think this is what we need to do.
Mike Elgan [00:42:16]:
Yeah. You know, I imagine though that the. The companies just have teams following. They do.
Leo Laporte [00:42:24]:
Yeah, yeah, yeah. Which is why I was very happy to preserve their anonymity.
Mike Elgan [00:42:28]:
Yeah.
Leo Laporte [00:42:29]:
As we did.
Jeff Jarvis [00:42:30]:
But what becomes clear is there's. There's no way to. To plenty proof a model.
Leo Laporte [00:42:34]:
No. And that was Steve Gibson's conclusion months ago, was this is proof positive that the whole notion of safe AI is a fallacy.
Jeff Jarvis [00:42:42]:
You cannot really like saying save humanity. You can't.
Leo Laporte [00:42:46]:
Exactly.
Mike Elgan [00:42:47]:
There's the. There's the mission behind it. Right. Which is really fascinating. There's a subculture of people who are into this, which is also fascinating. But. But one of the.
Mike Elgan [00:42:58]:
Most incredible things about this is when you dive in as you started to do on the screen, just looking at some of the prompts. This is a bizarre thing. These chatbots, they're very strange in the way they work.
Mike Elgan [00:43:17]:
The things that cause them to change, how they work. It's just bizarre. And the more you pull that thread out of your sweater, the more you realize these things are strange.
Leo Laporte [00:43:27]:
Yeah.
Jeff Jarvis [00:43:28]:
But polite is. Creativity in it is also awesome.
Leo Laporte [00:43:32]:
Well, I really wanted to ask them more about. Since they said this is. They're not. They don't come from a computer background, a programming background. I'm really. But I didn't want to in any way give.
Jeff Jarvis [00:43:43]:
It's curious as hell, but. No, you can't.
Leo Laporte [00:43:44]:
Yeah. I feel like there are. There is some skill set at play here that we don't fully know about. The other thing.
Leo Laporte [00:43:54]:
That looks clear to me is I know they said that they were not. They didn't come from a hacking background, a hacking community, but the language that they use here is absolutely from that community. I'm thinking maybe that was disingenuous that they really do have a hacker background of some kind.
Mike Elgan [00:44:15]:
When's the last time you heard somebody say information wants to be free?
Leo Laporte [00:44:19]:
Yeah, exactly.
Mike Elgan [00:44:21]:
Like. I think for me it's the hacker. Literally hacker ethic.
Leo Laporte [00:44:25]:
Yeah.
Mike Elgan [00:44:25]:
Yeah.
Leo Laporte [00:44:26]:
I say that. Well, actually, funny thing is I said that on the last show I told Paul Thurot because we were so that this. Actually, I'm very curious what you all think you probably got. I got the notice from the Anthropic lawsuit and the. And the. The final settlement is billions of dollars and I got a notice saying, do you want the money?
Leo Laporte [00:44:51]:
I have a number of books that they. They used illicitly. The judge said the books that you pirated that used from the pirate database. You have to pay the authors. And I think. What was it, Jeff? $3,000?
Jeff Jarvis [00:45:03]:
It depends, but the guess is it could be around that.
TWiT.tv [00:45:06]:
But.
Jeff Jarvis [00:45:06]:
But the lawyers for this case are taking 300 million. Yeah.
Leo Laporte [00:45:12]:
And that was one of the things that judge was concerned about, was that.
Jeff Jarvis [00:45:14]:
Lawyers still should be concerned.
Leo Laporte [00:45:16]:
Yeah, but that's always the case in a class action. I mean, I've Copyright.
Jeff Jarvis [00:45:20]:
Who makes most of the money in Copyright is lawyers.
Leo Laporte [00:45:21]:
Yeah, lawyers or the companies that own the copyright. Not the authors ever.
Jeff Jarvis [00:45:25]:
No.
Leo Laporte [00:45:26]:
So I am the.
Jeff Jarvis [00:45:28]:
And the publishers get half of this. So 3,000 a book, the publisher gets half. And then, I don't know. You know, I would feel duty bound to pay my agent.
Leo Laporte [00:45:37]:
Yeah, well. And my.
Jeff Jarvis [00:45:39]:
For me.
Leo Laporte [00:45:40]:
Yeah. So, Jeff, are you gonna ask for the money?
Jeff Jarvis [00:45:45]:
Yeah, sure. But I'll put it to some good use. I think that.
Jeff Jarvis [00:45:53]:
Otherwise, where does it go?
Leo Laporte [00:45:56]:
Yeah. How about you?
Mike Elgan [00:46:00]:
I'm sorry you cut out there, Leo. What is.
Leo Laporte [00:46:02]:
I was just curious if you have. If you know, if you have books in the database and if you do, if you would take the money.
Mike Elgan [00:46:09]:
Well, I'd certainly take the money.
Leo Laporte [00:46:12]:
You know, it's so funny, everybody I've asked, including Paul Thurott, said, yeah, take the money. Why wouldn't you take the money? But I'm a little. I'll tell you why I'm ethically challenged by this. Because I'm happy that AI took the contents of my books. I'm happy that AI has ingested every show I've ever done, every. Every TWiT podcast. That's fantastic. I don't want money for that.
Leo Laporte [00:46:34]:
I want better AIs. I want, you know, and, and so Paul's rationale was someone, well, these are rich companies who are making money off of you. You deserve the money, but I don't. I think we're all benefiting from the AI that's created by that. I don't want AI that's only trained on public domain.
Jeff Jarvis [00:46:51]:
Give it to a computer science part program at a university or something like that.
Leo Laporte [00:46:55]:
Yeah, the money. If I get the money. Yeah. Give it to something good. Okay, I'll do that.
Mike Elgan [00:46:58]:
What's funny is the more exclusive the, the information that's being asked, the. The more likely there is a copyright violation. For example, I've done a lot of this testing on gastronomic experiences, which is my wife's thing, and. And there's no other source of information about it except her words. Right. Or our website. And so it's just verbatim. I mean, if this was, if this was somebody writing this in a, in an article, it would be, you know, copyright infringement.
Mike Elgan [00:47:28]:
So it's like if there's a thousand people writing on, on some subject and it's taking bits and pieces, pieces from each, you can't, you can't say it's okay, clear copyright violation.
Leo Laporte [00:47:37]:
Yeah.
Mike Elgan [00:47:38]:
But as you get down to very narrow things, for example, if you're asking, you know, what Leo Laporte says about podcasting or something like that, and it will look at your blog or look at your stuff, and it will, it will literally use your sentences and your paragraphs. And so I think that's an entirely different thing. But, but I do think that, you know, clearly we need, what we need is not this sort of, you know, the New York Times sues another company because, you know, suing the companies one by one. And it can do that because it's the New York Times. Meanwhile, there's 10,000 other publications that aren't because it's expensive, blah, blah, blah, blah. There's got to be a system. There's got to be a system for a fair and equitable trade of information for a reasonable fee, whatever it is. And I know there are a number of organizations working on it, but we desperately need this because the current state is highly problematic.
Mike Elgan [00:48:34]:
If you have content, if you have a system where content creators, writers can opt out and pull their, their content out of these training models, it's going to skew toward the garbage. Right? The, the, the, the, the out the output of these models because the best and most aware and most intelligent and most with the most to lose authors are going to pull their content. And the spammers and the, you know.
Jeff Jarvis [00:49:01]:
Whatever are not brands are there spammers, propagandists, brands are there. And I think journalism has to, has to face up to a moral obligation to the larger information ecosystem in society.
Mike Elgan [00:49:14]:
Yeah.
Jeff Jarvis [00:49:14]:
And yeah, we need to have a discussion about how it gets there. But, but right now what's happening is that, that marketing brands and propagandists are rushing to go into the AI and they're there. And so we're going to make the society all the poorer as a result. And there's no. And the big bags of money are bs. That's lobbying money. At most there might be some per use. And that's okay, we can have that discussion.
Jeff Jarvis [00:49:39]:
But it's also, if you put your stuff out publicly and people and machines learn from it, or people learn from it, that's a good thing and we.
Leo Laporte [00:49:49]:
Chose, I chose with Twitter at the very beginning to make it Creative Commons licensed. Yeah, because I wanted to, I didn't want to hold on to it. And I, I do believe in the hacker ethic that information wants to be free. So I think any attempt to, to stem its flow is misguided anyway. It's all part. We didn't, you know, nothing that we come up with, whether you're an artist, a musician, a writer, or a podcast host, is original. It is based. We stand on the shoulders of giants and all of our creations come from people before us who've in effect, freely donated of their creations.
Jeff Jarvis [00:50:29]:
What's interesting about Pliny is that it's not just that information wants to be free, it's functionality wants to be free. Compute or something. I don't know, what's the word?
Leo Laporte [00:50:37]:
It's all information, though. If you think about it, compute is information. Right? It's all information. That's why the hackers use that word. Yeah, I think it's all information. And really there's material things and anything that is not physically material is information.
Leo Laporte [00:50:58]:
It's about how it's organized, it's about how it interacts. It's our thoughts, it's our words, it's what we do. It's all information.
Jeff Jarvis [00:51:05]:
Copyright products only the treatment of information. A B, it did not cover newspapers and magazines until 1909. C, it did not come to protect writers and creators at all. It came to create a marketplace of creativity and content as a tradable asset.
Leo Laporte [00:51:23]:
And I think there's a certain miserliness in it. And what Disney does, for instance, with the Disney movies, which are based on the freely available Grimm's Fairy Tales, but as soon as it becomes a Disney property, Cinderella is owned, it's theirs. And I think there's a certain miserliness in saying, no, no, you know, that's mine, you can't have it, you have to pay me for it. And I don't care. You know, people say, well, but look how much money these, these big companies, OpenAI and everybody have. And I don't think that that, that matters. I think what really matters is what are we getting as society from the output of these companies?
Mike Elgan [00:52:02]:
Although I, you have to admit that Disney is now getting its comeuppance now that Mickey Mouse is in the public domain. People are making horror, horror games and stuff with psychopathic Mickey Mouse. So unsurprisingly, what comes around, goes around.
TWiT.tv [00:52:18]:
This is beneath. I just wanted to chime in and say, like, like, yes, what you're saying is true in that the technology itself, this is probably a benefit to society. But, but again we're talking about these specific corporations who are not necessarily doing this for the benefit of society.
Mike Elgan [00:52:35]:
Yeah, few do.
Jeff Jarvis [00:52:37]:
Right.
Mike Elgan [00:52:38]:
I think, you know, go ahead. I think the larger problem, the thing that they have to be concerned about and as you pointed out, Jeff, the point of copyright is to, is for content creation companies to be, and individuals to be able to sell their work for there to be a marketplace. What they're really worried about, what the New York Times is really worried about, is people getting their news from ChatGPT instead of from newyorktimes.com and so, and, and they see the writing on the wall where people are turning more and more to chatbots for news, for the kinds of information that they would turn to their products for. And so this is a problem, this is a problem that has to be addressed. It's not really about, I, I don't think it's really, it's about, yeah, it's about the fear that people are just, they're just casting a wide net, hoovering up all the stuff that was expensive to produce and then selling it without, without, without compensating the, the original content creators. But it's the, it's the, it's the money. The money is going from old media companies that are already in trouble to these, to, to these chatbot companies and they're like, wait a minute, you know, if we're going to keep spending all this money and going through all this effort to maintain our editorial standards to produce all this good content and the revenue just is cut in half every two years. Something's got to give.
Mike Elgan [00:54:04]:
So I think we have to address.
Leo Laporte [00:54:05]:
A larger problem also. What the companies are doing very expensive. The AI companies doing very expensive also. I mean it's not, they're not, they're not making money.
Jeff Jarvis [00:54:13]:
No. They're not profitable.
Mike Elgan [00:54:14]:
It's very expensive.
Leo Laporte [00:54:15]:
Yeah.
TWiT.tv [00:54:15]:
So it doesn't even work under the capitalistic model. It's not even working. So like how come they get a pass on capital capitalism and I don't printing?
Jeff Jarvis [00:54:23]:
Well, I'm going to go back, I'm going to do my Gutenberg thing. Come 150050 years after Gutenberg, the printers were going to the Pope begging for help because the business was not working. The warehouses were filled, they were printing the wrong things.
Leo Laporte [00:54:36]:
Why did everybody go to the Pope? They wanted to Bazzlerize, the Chinese publisher.
Jeff Jarvis [00:54:40]:
He was pretty powerful at the time.
Leo Laporte [00:54:42]:
They all go to the Pope until.
Jeff Jarvis [00:54:43]:
Print until print did to him what it did to him.
Leo Laporte [00:54:46]:
People are still going to the Pope of AI Apparently a bunch of of AI doomers have been lobbying the Holy Father saying, could you please say something about AI ending humanity or something? Because they're trying to enlist him as one of their lobbyists in effect. So they're still going to the Pope.
TWiT.tv [00:55:04]:
As a side note, I actually tried to talk to Padre about getting on this episode and he said, oh, I can't. I'm talking to these people about AI I was like, okay, I guess that sounds way more important.
Leo Laporte [00:55:16]:
We're going to get some more Padre, though, in the future because he is in Rome. He is a technology advisor to the Holy Father and the Holy See and I think is very active in all.
Jeff Jarvis [00:55:26]:
Fascinating perspective on all this.
Leo Laporte [00:55:28]:
And he's also very diplomatic and cautious. So we're not going to get him in trouble either. But you got questions.
Mike Elgan [00:55:36]:
What's great about the new Pope is he is super into AI oh yeah.
Leo Laporte [00:55:40]:
This is his thing.
Mike Elgan [00:55:42]:
He's saying this is one of the most important.
Leo Laporte [00:55:44]:
Chose the name Leo, right? Not because of me. No. The last Pope, Leo became Pope during the industrialization era and was fighting for the rights of individuals in the face of massive industrialization. But you know, I have to point out that yes, industrialization was massively disruptive to people's livelihoods and maybe to our environment, much like AI but it also had huge benefits. And we live in an industrial era. None of what we, none of what we consider kind of the basic needs of life would be supplied to this 7 billion person planet without industrialization on capital and labor.
Jeff Jarvis [00:56:34]:
And that was the issue of the time, how to grapple with these questions of capital labor.
Leo Laporte [00:56:39]:
In a way, it's the same.
Jeff Jarvis [00:56:40]:
The same same. Yeah, yeah.
Leo Laporte [00:56:42]:
And I think I. Look, I don't deny anybody has the right and need to get a roof over their heads and to be able to make a livelihood so they can eat. I disagree with the, the, the need and the right to make massive amounts of money. I don't think that that is a benefit to society, but everybody should have a right to make a living for sure. Against that.
Mike Elgan [00:57:04]:
But the trajectory of the benefits of industrialization didn't and don't happen by themselves. And in fact, it's uneven all across the world. There are places where industrialization is far more good than bad and other places where it's far more bad than good. And so it's not just.
Mike Elgan [00:57:24]:
Capitalism is sort of like the necessary condition and what we need is good politics, good, you know, better politics, better social organizations, and even for the Pope to chime in it, what's, what's interesting about Pope Leo is, I think he's, he's, he's focusing on the right issues. So he's not like, oh, this is a threat to the Church. I don't believe he's really said that. He said he's concerned about its effects on jobs, on human dignity. That's right. He wants it to serve the common good and not just the trillionaires. And, and he's also concerned, and this is an area that I'm kind of fascinated with about.
Mike Elgan [00:58:01]:
The question of what it is to be a human being. And this idea that is AI going to achieve personhood, for example, what does that mean? If it does, what does it mean to be, Are people special and fundamentally different? Even if we have AGI, even if we have superintelligence where AI can do everything better than every human? Right.
Mike Elgan [00:58:25]:
Right. And no better person to do that, I think, than a religious leader, because it is, in a way, a religious question. ChatGPT reaches 900 million weekly active users. And Gemini, which has been kind of laggard, is catching up. According to the information, this is Sensor Tower's information that tracks 5 million consumers globally. So it's a statistic estimate about market share. Between August, November, Gemini's website visits doubled while ChatGPT's rose about 1%. So Gemini is growing fast.
Leo Laporte [00:58:29]:
Right. And no better person to do that, I think, than a religious leader, because it is, in a way, a religious question. Chat GPT reaches 900 million weekly active users. And Gemini, which has been kind of laggard, is catching up. According to the information, this is Sensor towers information attracts 5 million consumers globally. So it's a statistic estimate about market share. Between August, November, Gemini's website visits doubled while chat GPTs rose about 1%. So Gemini is growing fast.
Leo Laporte [00:59:07]:
Gemini, according to Sensor Tower, is about 346 million active weekly users. These are active weekly users. That's an amazing number.
Jeff Jarvis [00:59:18]:
I'm thinking this week it's kind of occurred to me that Google is the new Google, that everybody was saying, oh yeah, Google's gonna be replaced. AI, blah, blah, blah, Chat GPT is ahead and Google's doing a great job. And their tentacles are everywhere, into hosting, into chips, now into, well, the chips they always had, but now potentially selling them into models, into the application layer, into its integration with its products and services and apps. They're doing a pretty good job.
Mike Elgan [00:59:51]:
Well, it, it, it has to be said that today OpenAI is in a horrible position and Google is in a fantastic position. And the reason is that OpenAI, you know, their main benefit is that they're kind of a household name. They are associated with the AI chat services more than any other company. If people go, oh, I'm going to get into this AI thing I heard about, they're going to go to ChatGPT and they're going to use that and that's why they have so many users. But this is a company that's so heavily leveraged and they have so, so many bills to pay and they have to pay it by monetizing these things. Meanwhile, people are kind of, there's no clear path forward to monetization. Meanwhile, Google made $350 billion last year. They, they are in such a great position.
Mike Elgan [01:00:42]:
They have DeepMind. And if Google, let's say as a thought experiment that in 2026, the quality of Google's AI of Gemini and the quality versus ChatGPT is identical. Google wins big and OpenAI loses.
Mike Elgan [01:01:01]:
Because why would you use it?
Leo Laporte [01:01:03]:
It's challenging to think of how OpenAI monetizes, but what I do really like is that we are in a situation where there are five companies competing effectively against one another, including some open source models like Meta's Llama. Although.
Leo Laporte [01:01:20]:
Although apparently Meta is now thinking about taking its next superintelligence model and closed sourcing it.
Jeff Jarvis [01:01:28]:
Avocado.
Leo Laporte [01:01:29]:
Avocado.
Mike Elgan [01:01:30]:
Avocado.
Leo Laporte [01:01:32]:
So this was the fear. Remember we had Jeffrey Cannell on from TWiT. This was his fear is that because a lot of the good work that he was doing and others are doing with open source AI requires Llama.
Jeff Jarvis [01:01:44]:
Yann Lecun has been out there loudly pushing the importance of open source. I know his message for that is not just external, it's while he's in his last days at Meta Internal, maybe.
Leo Laporte [01:01:56]:
I wonder why, maybe that's why he's leaving. This is from CNBC. It's not an announcement from Meta, we should point out. This is CNBC's reporting. They say Meta is pursuing a new frontier AI model codenamed Avocado, that could be proprietary. Could be, could be, could be proprietary instead of open source. I think that's a reasonable speculation whether they have information or not, because every company is trying to figure out, well, how do we monetize this? And if you give it away, it's hard to monetize.
Jeff Jarvis [01:02:28]:
Well, there's fascinating. There's another story this week about how there's conflict between. It was Alexander Wong is his name, right? Wang Wang, the boy wonder. And he's in conflict with Chris Cox and with Boz. The latter want to benefit the products and services of Meta. And he's saying, screw that, I'm going to build the best AI. And so Zuckerberg finds himself with three or four companies in one. And it's going to be up to him in the end to set a.
Leo Laporte [01:02:59]:
Clear strategic Vision, you've got to, I mean, obviously at some point you have to make money out of this. Meta is in a great position, as is Google, Amazon, Microsoft, all the companies that have revenue streams that are independent of AI. I would include Apple, except they don't seem to be making any headway in that area. They, they don't have to worry like OpenAI and Anthropic do.
Leo Laporte [01:03:26]:
By the way, OpenAI, which was thinking about ads when they did that code red last week, said, man, we're going to put the ads on the back burner. Meanwhile, Anthropic says, yeah, it's not going to. Probably 2026 is going to bring ads to our models, to our Claude models.
Jeff Jarvis [01:03:43]:
I'm seeing companies now come up that are promising to place ads alongside AI. I mean, the market demand, the advertisers want to be there. The advertisers are getting there anyway just by making themselves scrapeable.
Leo Laporte [01:03:56]:
Here from Adweek exclusive. Google tells advertisers it'll bring ads to Gemini in 2026. I don't. It's how you do it that matters to me, not whether you do it. I understand they need to monetize, but how you do it, what you don't want is ads that aren't obviously ads. That's the problem. And you know, we're very careful. Whenever we do an ad, say, right, our sponsor for this, you know, segment or our sponsor, we always make sure.
Leo Laporte [01:04:22]:
And when we mention any of the companies who are sponsors, we say disclaimer. These are sponsors. Because I think it's so important that people know the difference between editorial content that is not influenced by money and an ad content which is, which is paid. Unfortunately, advertisers hate that because advertisers. I've even received ad copy that says, don't mention this as an ad. I'm sorry, that's a violation FTC would get. Would fine me for that, by the way. Supposedly.
Leo Laporte [01:04:49]:
Although I think YouTubers do it routinely without fines. I mean, but I do think philosophically that’s important.
Mike Elgan [01:04:55]:
And I hope if you look at, if you think about how Google will monetize AI, it seems, you know, it seems intuitive how they'll do it. I mean, they'll basically do it in search Melded together. Exactly. And it'll be in, you know, AI will be sprinkled on everything. Google Docs, you name it. They also have their enterprise offerings and there's a lot of money to be made there. Meanwhile, Meta is talking crazy about what they want to do with AI, they, they want to, you know, they've talked about replacing, you know, they want only one out of every four, quote unquote users on, on, on Facebook to be humans. And the rest, you know, fake, you know, fake users, fake, you know, created by other people.
Mike Elgan [01:05:36]:
They want influencers on Instagram to sort of create digital avatars of themselves so that people can interact, the fans can interact with the, with the AI version, the, the digital twin of the influencer, bunch of stuff like that. It's like, okay, you're gonna, and, and what do they spend last they spent this year they're spending like 72 point billion on AI infrastructure and other AI related costs. How are, how are they cocking me? Oh, and also the other problem is the, you know, glasses, they want to do it with glass. They could have AI in the glasses. That's hard to monetize too. And AI and virtual reality, that's really hard to monetize. Maybe a little easier than the glasses. But I just don't see how, how Meta is going to, you know, they've been spending like drunk sailors on AI to sort of just buy their way into leadership and I just don't see how they're going to monetize it in a way that isn't going to drive users away.
Leo Laporte [01:06:33]:
Isn't this the same situation we were in at the earliest days of the Internet? And Meta is a perfect example. When it was Facebook, they had, they were not charging people for Facebook. They had real costs. Most of the Internet was free in the early days. Despite the real costs. The assumption always was, well, we'll figure it out. Yeah, we'll just build a lot of users and at some point they will figure out a way to monetize it. Aren't we in the same situation? Is this situation any different than that?
Mike Elgan [01:07:01]:
Well, I, I think, I think the fundamental problem is that Mark Zuckerberg doesn't know what he's doing. We saw that with the Metaverse and.
Leo Laporte [01:07:08]:
Has infinite power because of the way he's.
Mike Elgan [01:07:10]:
Exactly, exactly.
Leo Laporte [01:07:11]:
Nobody could say no to him.
Mike Elgan [01:07:13]:
Exactly. So he was when all in the metaverse and, and, and changed the name to Meta because the metaverse was the future. Now they're taking money away from their Metaverse stuff and putting it into, into, into AI. And by the way, you know, it's a great name for, for, for something that lives on your face that, that you, where you read something on a screen. Facebook is a great name for that. Not Meta.
Leo Laporte [01:07:38]:
Maybe they'll.
Leo Laporte [01:07:42]:
I gotta take a break. We gotta make some money, but we'll be back. Hold that thought. Mike. Mike Elgin is here. He's from Chile today. That's nice. I'm jealous also, but at least you're in roughly the same time zone.
Leo Laporte [01:07:54]:
So that's.
Jeff Jarvis [01:07:55]:
There's greenery outside his windows. Yeah, I can see summertime.
Mike Elgan [01:07:58]:
It's summer.
Leo Laporte [01:07:59]:
Oh, how nice. I've fantasized about going from northern to southern hemisphere and having an endless summer. That would be so nice.
Mike Elgan [01:08:08]:
Yeah. Like the surf movie from the 60s.
Leo Laporte [01:08:10]:
Yeah. Remember that?
Mike Elgan [01:08:10]:
Yeah, yeah.
Leo Laporte [01:08:12]:
Also, Jeff Jarvis, professor of journalism and author of many great books, including the new one, Hot Type, which is coming out in a couple of months. I'm excited about that. We'll have more in just a bit. Our show today, brought to you by Vention. Vention, it's like invention, right? AI is supposed to make things easier for your business, but for most teams it's just made the job harder. But there is help. Help is out there. That's where Vention's 20 plus years of global engineering experience comes in.
Leo Laporte [01:08:43]:
I mention that because they are engineers, they're coders first and foremost, but they know AI. They help build AI enabled engineering teams to make software development faster, cleaner and calmer. They know you don't need the stress. They know that you want to get the job done, that you want to use AI, but you want to use it responsibly without making your teams nuts. Clients of Vention typically see at least a 15% boost in efficiency. And this is not through hype, this is through engineering discipline. That's where they start, right? That's what they've got. There's another thing Vention can do, by the way, which may be in fact the best way to start with Vention.
Leo Laporte [01:09:24]:
They have a fun AI workshop that can help your team find practical, safe ways to use AI to in every area of your business delivery qa. It's a great way to get started, to get to know Vention, to test their expertise. Whether you're a cto, a tech leader, a product owner, you don't have to spend the rest of your life figuring out tools and architectures and models. You get together, you do this AI workshop with Vention, you bring your team in and it's a very interactive workshop, by the way. It's a two way street. They help you and your team assess your AI readiness, clarify your goals. You know, this is your, this is your business, what are your goals? And then they can help you outline the steps to get you there without the headaches. Now if at that point you say, well, we need help on the engineering front, their teams are there, they are really great engineers.
Leo Laporte [01:10:15]:
They could jump in as your development partner, your consulting partner, whatever level you want. This is probably the best next step after you've done the, the initial step, which many of us have done. The vibe coding, the, the proof of concept. You know, you've built a promising prototype, yet unlovable or whatever, and it runs, it not only runs, it runs well in tests. But how do you take the next steps? Do you open a dozen AI specific roles just to keep moving? I got a better idea. Bring in a partner who's done this across industries, someone who can take that idea and expand it into a full scale product, but without disrupting your systems, without slowing down your teams, without making people crazy. Vention, that's, that's the name to remember. Real people with real expertise who will give you real results.
Leo Laporte [01:11:04]:
Learn more@ventionteams.com and see how your team can build smarter, faster and with a lot more peace of mind. Or get started with your AI workshop today. Vention Teams.
Leo Laporte [01:11:20]:
That'S V E N T I O-N teams.com TWiT we thank them so much for supporting Intelligent Machines. Ventionteams.com.
Leo Laporte [01:11:31]:
Twitter we thank them so much for their support of intelligent machines. Meta acquired Remember those AI pendants that I was fond of?
Leo Laporte [01:11:44]:
B Computer was the first one that got sold to Amazon. Now the other one that I liked the best after B, Limitless, the limitless PIN just got bought by Meta and shut down. Well, that's the, you know, B is sort of still operating. But my concern with B, when they sold Amazon, I immediately deleted my account, is that they would have the six months worth of daily recordings. I wore it every day, all the time and I didn't want to give that to Amazon. They said, no, don't worry, Amazon's not getting the data. But I don't know about that. I imagine the same thing with Limitless, right? Does Meta now have every conversation I had?
Jeff Jarvis [01:12:23]:
So it's a paid service. People are getting it for free now, but then they have a limited amount of time and then it's just going to go away.
Leo Laporte [01:12:29]:
It's just going to go away. It's a $99 pendant, so it wasn't horribly expensive. I did have the full subscription because that let you use a variety of models and stuff. They had raised money, 33 million from Andreessen Horowitz, NEA and Sam Altman.
Leo Laporte [01:12:48]:
So I think, as always, there was pressure to exit from their investors. I don't know what Meta paid, but it fits into the meta. I mean, I guess you could add that capability to the glasses.
Jeff Jarvis [01:13:00]:
Exactly, exactly. That's where it goes.
Mike Elgan [01:13:03]:
The coolest product that I've seen lately is one that was rolled out yesterday. It's from Pebble. Remember Pebble?
Leo Laporte [01:13:10]:
Oh, you like this?
Mike Elgan [01:13:11]:
Okay, I like it a lot. And I'll tell you why, especially given the conversation we just had about, you know, all your data going to Amazon. I don't trust Amazon as far as I can throw them with your data, Leo, but.
Leo Laporte [01:13:23]:
Exactly, that's why I deleted it, or I hope it was deleted anyway.
Mike Elgan [01:13:27]:
So Pebble originally was founded in 2012. It was a smartwatch. They were famous because it had this super successful Kickstarter. It didn't really work out. They, they sold most of their stuff to Fitbit in, I think it was 2016. Google acquired Fitbit in 2021. And, and the, the founder, whose name is Eric Migikowski, I think is. How you say.
Mike Elgan [01:13:53]:
Thank you. Thank you.
Leo Laporte [01:13:53]:
We've talked several times.
Mike Elgan [01:13:55]:
He's been, he's been resting back the Pebble branding. So he launched a new company called Core Devices this year and he's finally got it back. And so now he's releasing this smart ring under the Pebble brand. And basically all this thing does is, has a button and you press and hold the button and while you're holding the button, it'll, it's recording your voice and you talk into the ring and it's called the Index 01 and it's, it's then encrypted and then sent to your smartphone where AI processes that information.
Jeff Jarvis [01:14:28]:
On the way to your phone.
Mike Elgan [01:14:29]:
Yes.
Jeff Jarvis [01:14:30]:
And it never goes on, isn't it.
Mike Elgan [01:14:32]:
It never goes to the cloud.
Jeff Jarvis [01:14:33]:
That's what I'm saying. If it's just going to your phone, why encrypt it?
Mike Elgan [01:14:36]:
Just for hyper security.
Leo Laporte [01:14:38]:
It's going through the air on Bluetooth. Somebody can. Smartphone. All right, Yeah, I think that's good.
Mike Elgan [01:14:43]:
And so I think this is a wonderful model. Here's, here's the, some of the kooky things about it. You don't charge it when the battery dies, you send it back and they recycle it. And it's supposed to buy a new two years, give or take a year. Yeah, exactly. That makes it smaller so it doesn't have to have this coil stuff for charging inside. So it's a, it's a smaller.
Jeff Jarvis [01:15:05]:
Jason Howell today said it's total Life. It's like 12 to 14 hours. So it depends on how long you're.
Leo Laporte [01:15:11]:
Oh, that's not exactly. Oh, so the idea is just you wouldn't leave it on all day. In other words, call mom. Well, die in a day.
Mike Elgan [01:15:19]:
Yeah, you know, that's fine.
Jeff Jarvis [01:15:21]:
Yeah.
Mike Elgan [01:15:22]:
But it's not on unless you're actively pressing the button.
Leo Laporte [01:15:24]:
Oh, you have to hold it down.
Jeff Jarvis [01:15:26]:
Yes. It's for only certain functions. You can make a reminder. You can. You can add a note, you can set it. Yeah, there's a few. Only a few things you can do with it.
Mike Elgan [01:15:35]:
Yeah. Which I think is great. I think that's a nice limited use of AI I think.
Jeff Jarvis [01:15:38]:
Did you order it?
Mike Elgan [01:15:40]:
I'm going to. The pre. Orders are only 75 bucks.
Jeff Jarvis [01:15:43]:
Yeah.
Mike Elgan [01:15:43]:
And it's going to be 99 when you march.
Leo Laporte [01:15:46]:
I admit.
Mike Elgan [01:15:47]:
Exactly.
Leo Laporte [01:15:47]:
I hovered my finger over the button and then declined.
Jeff Jarvis [01:15:52]:
You're recovering, Leo.
Leo Laporte [01:15:54]:
No, because this is not enough.
Jeff Jarvis [01:15:57]:
Yeah, that's not what you want.
Leo Laporte [01:15:58]:
This isn't what I want. I want something that records everything I want.
Mike Elgan [01:16:01]:
Yeah.
Leo Laporte [01:16:02]:
In fact, if Meta comes along with a Ray-Ban that has Limitless in it, I probably will buy it and use it. Admittedly, it's a horrible privacy invasion. This is not. But that's why I don't want it.
Mike Elgan [01:16:18]:
Exactly. I mean it's, it's. What it can't do is. Is record everybody talking. It records you talking.
Leo Laporte [01:16:24]:
You said because you.
Mike Elgan [01:16:25]:
Eric, hold it. Right.
Leo Laporte [01:16:26]:
Yeah. You have to whisper.
Jeff Jarvis [01:16:27]:
If you're the kind of person who takes out your phone constantly or a notebook constantly to make notes. This replaces that.
Leo Laporte [01:16:33]:
Yeah, but you can do. And Eric said well, I tried it with my watch, but I just didn't like it. I can I do this with my watch all the. The time. You know, remind me to get up in the morning and then, then, then does. And I think that that's. And for me that's enough. If that's all I want to do with it.
Leo Laporte [01:16:50]:
It's the old note-to-self thing. Yeah, I bet. Yes. You carried around one of those little Olympus.
Jeff Jarvis [01:16:57]:
Because I never listened to it.
Mike Elgan [01:16:59]:
Like, like, like Agent Cooper from Twin Peaks.
Jeff Jarvis [01:17:02]:
Sure sign of death that I'll ever get any is to put a on a to do list.
Leo Laporte [01:17:06]:
Who is.
Jeff Jarvis [01:17:06]:
That's when it dies.
Leo Laporte [01:17:08]:
He was talking to his was secretary or something. He was a Janet Diane.
Mike Elgan [01:17:11]:
I think it was something like that.
Jeff Jarvis [01:17:12]:
Yeah.
Mike Elgan [01:17:12]:
So they never explained who that was. No, but I, I think it's. I think it's really good. And the other Thing that's interesting about it is it, it's all open source and so companies can change the functionality of the button. They can do all kinds of things with it, which I think is an interesting aspect to it.
Leo Laporte [01:17:26]:
Okay.
Mike Elgan [01:17:27]:
And so this, I think this is a really interesting product. And, and, and you know, there, there are a bunch of different rings now. They're, they're the kind that, you know, monitor your sleep. I wear my heart rate.
Leo Laporte [01:17:36]:
I wear an aura ring and I love it, but I support everything. Just my, just me.
Mike Elgan [01:17:41]:
Yeah. But, but I'm predicting that, that, that by the end of next year we'll all personally know people who have two or three rings. Smart.
Jeff Jarvis [01:17:50]:
That's right. They'll become the new kind of.
Leo Laporte [01:17:52]:
And you know who one of them will be.
Mike Elgan [01:17:55]:
Exactly. That's right.
Leo Laporte [01:17:57]:
I'm waiting. The truth is it really underscores the issue when I say, oh, I'm not going to wear the B computer because it got sold to Amazon. I'm not going to wear the Limitless because it got sold to Meta. I think the only company I would trust this with is Apple and I'm kind of disappointed that Apple is so far behind in all of this stuff because Apple could make a ring or glasses that I would absolutely be interested in. Apple could put that capability in their AirPods.
Mike Elgan [01:18:25]:
Well, what I can see Apple doing is they had that. Remember that journaling thing that they came out with?
Leo Laporte [01:18:32]:
They have a journal. Yeah, it's called Journal.
Mike Elgan [01:18:34]:
Yeah, it should, it. That should be a kind of lifelogging tool. Apple would never use the L word but like that's what we want is lifelogging and see what we want. So on, on the airplane coming here, I watched Ready Player One again, which I am probably the only person in the world who thinks is an absolutely brilliant film and way better.
Leo Laporte [01:18:56]:
Certainly not me.
Jeff Jarvis [01:18:59]:
When they go into virtual, Mike is.
Mike Elgan [01:19:01]:
Exactly. But when they go into virtual, they're looking for clues in the life of what's his name, the deceased founder of the Oasis. That's his lifelog. Like they're going to his lifelogging data and that's what we want. We want lifelogging, want everything captured and then for AI to, to basically give us insights into our world, you know, and not forget things. And so that's what we really want. And it's just like. But don't sell it to Amazon, you know.
Leo Laporte [01:19:32]:
Yeah.
Mike Elgan [01:19:33]:
And figure out some way where we're not really invading everybody's privacy either.
Jeff Jarvis [01:19:37]:
Well, as long as it happens locally. I mean, I think this is, I think this is where you're going here. And it's true of LLMs to generative AI as well. Anything you could run locally you should feel fairly good about.
Mike Elgan [01:19:50]:
Yeah, right.
Jeff Jarvis [01:19:52]:
And so the question is how much can be shoved down locally as models get smaller, as the functionality becomes part of these things. You know, Leo.
Jeff Jarvis [01:20:02]:
You don't. This doesn't do a good enough job. But you would. There's no reason to not trust the pebble thing. Right. Because it's just going to your phone.
Leo Laporte [01:20:08]:
Well, this one. No, it never, it never is anywhere but on my phone.
Jeff Jarvis [01:20:12]:
Yeah.
Leo Laporte [01:20:14]:
You know, Google's doing this. By the way, Google is probably going to have glasses next year. They say.
Jeff Jarvis [01:20:18]:
Yeah, I'm looking forward to this.
Leo Laporte [01:20:20]:
They've moved quite far ahead on their XR platform. They have shown off some updates now and they have some partners.
Leo Laporte [01:20:31]:
I think this may, you know, they've got Samsung. There's also xreal's Project Aura. Smart glasses. The reviews have been pretty positive. Lisa Laporte, we've had on our shows at CNN thought it was quite good. Sam Rutherford and Gadget says he was very impressed. Yep. So the people who went to the.
Leo Laporte [01:20:51]:
I don't know if Jason go.
Jeff Jarvis [01:20:52]:
Jason went. Jason went. And he was, he was impressed too.
Leo Laporte [01:20:55]:
Yeah.
Jeff Jarvis [01:20:55]:
And it's funny, one of the videos he put up because there's, there's a one eye and a two eye projection. Yeah, right. And, and when, when the video's on and, and you can tell the screen's there, it looks like he's googly eyed.
Mike Elgan [01:21:11]:
Yeah, right.
Jeff Jarvis [01:21:12]:
Because you see the light in the screen.
Leo Laporte [01:21:13]:
Oh, that's not.
Jeff Jarvis [01:21:15]:
Well, no, I think it's fine. I think, I think it's transparent.
Leo Laporte [01:21:17]:
I guess people know. Yeah, yeah.
Jeff Jarvis [01:21:19]:
Let's know you're looking at something and I think that's okay. But it's pretty damn impressive. He, he, his example was he took one of the Google guys into a room, he said, give me a peace sign, a photo. Took a photo and then he said, put this guy in a field of daisies. And it went off to Nano Banana and came back and showed him the picture. And then he could send the picture where he wanted. And it was all in the glasses.
Jeff Jarvis [01:21:43]:
Everything.
Leo Laporte [01:21:43]:
By the way, Patrick Delahanny tells me, as a fan of Ready Player One, you'd be glad to know the Oasis went live on December 2, 2025. Oh, wow.
Mike Elgan [01:21:54]:
Okay.
Leo Laporte [01:21:56]:
Unfortunately, it's a fictional metaverse, but nothing we've got is anywhere near close to that good. But yeah, so I always wanted that by the way going back to Steve Gibson. Steve Gibson, what's his name?
Mike Elgan [01:22:11]:
Kevin Klein.
Leo Laporte [01:22:12]:
William Gibson. Too many Gibsons. Not Kevin Klein. William Gibson's original neuromancer. Have always wanted to jack in to a metaverse and have it be as real as the real thing and participate. And now I've. I really want these, you know, some sort of glasses, I think. I guess I would trust Google.
Jeff Jarvis [01:22:32]:
Yeah, I think I would too. I, I quizzed, I quizzed Jason about this at length. We talked a long time about it and, and so I'm trying. I said, would you sit there and watch something on these? And he said, yeah, he would.
Leo Laporte [01:22:43]:
Really?
Jeff Jarvis [01:22:43]:
The binaural. And I said, but you're still seeing through. He said, you know, but, but it works. It's okay.
Leo Laporte [01:22:48]:
That's what people love the Vision Pro for is that is watching movies.
Jeff Jarvis [01:22:53]:
But this is, but this is still ar. It's still visible through.
Leo Laporte [01:22:57]:
It's not the damn goggle reality nerd. This is, these are glasses.
Mike Elgan [01:23:01]:
It's, it's a heads up display. So, so it, the, the, the augmentation of reality is not anchored to physical objects. It just. Wherever your head moves, that's where the screen moves. Like Google Glass and so on. It's.
Jeff Jarvis [01:23:16]:
You had to say that, Mike. You had to bring that up again.
Mike Elgan [01:23:18]:
I had to. I'm sorry. Google showed videos of a product it was working on that were language translation glasses. And that is really compelling to me. Instead of a little, a little box, a little rectangle or two floating in space in front of me, what I'd love to have is subtitles on everything and always in English. So if, if somebody is speaking in English as a, as a replacement for say a hearing aid or talking to somebody in a crowded loud place, you could just get the, you can get the subtitles.
Mike Elgan [01:23:51]:
If you're talking. Somebody's talking a foreign language, you get the English language subtitles. If you ask the AI question, the answer comes in the form of subtitles. I just want words at the bottom sort of giving me context for everything and clarifying things and serving as the answer. Because it's really problematic when you are. Because I've used, you know, Ray-Ban Meta glasses translation fit called Live Translate. Live translation. It really doesn't work at all.
Mike Elgan [01:24:17]:
Somebody talks to you in a foreign language. There's a, there's a bit of a pause while it, you know, sends it up to the cloud, then comes back and then you start talking and then you Start hearing the translation and there's too many people talking. You can't parse out who's saying what unless somebody's on stage making a speech. And then it's. It's a little better.
Leo Laporte [01:24:36]:
But.
Mike Elgan [01:24:37]:
But I want. I want visuals and I want words on the screen. Not like, you know, anything more fancy than that. I thought they were really onto something with those glasses, with the.
Leo Laporte [01:24:48]:
What I find interesting in this whole conversation is it's very clear we're going to have AI in everything, in our appliances, on the edge, everywhere. Steven Levy had an article in Wired this Week about a new hearing aid that is AI enabled, that's using AI to do a better job of discriminating voice in a noisy environment. It's called the Foretell. It's only available in New York City right now. It's not generally available. They've been using it as a beta test with people. He mentioned that Steve Martin, who is a hearing aid, has been wearing these. So I asked Steve and he said, yeah, no, they work.
Leo Laporte [01:25:28]:
He said, it's not a miracle, but I can now go to a restaurant and hear what people are saying in a noisy restaurant. Really couldn't before.
Jeff Jarvis [01:25:36]:
Mike, to your scenario. I've gone to two German language conferences in the last month.
Leo Laporte [01:25:42]:
Wouldn't it be nice to have translation?
Jeff Jarvis [01:25:44]:
Well, so at one of them at the München Mediantage, they had an app called Video Taxi, which was doing live transcription and translation on. And you could read on your phone.
Leo Laporte [01:25:55]:
In real time, though.
Jeff Jarvis [01:25:55]:
That's real time. It was real time. It was. It was excellent.
Leo Laporte [01:25:58]:
See, that's the problem. All the systems we have so far, and there are some really good ones. There's a. There's a lot.
Jeff Jarvis [01:26:03]:
This. This really worked very well. And.
Jeff Jarvis [01:26:07]:
My German is bad, but it was good enough that I could tell it was. Well now. Then I went to the next conference. Of course, they didn't have it. And I was. I was trying to look if there was any way I could clude it together and. And do anything with these. And it's not quite there, but I think that that's.
Jeff Jarvis [01:26:18]:
That's absolutely close. And Mikah, I think you're right, too. I want it in text. I still want to hear the voices and hear the intonation, and I want the text to be able to come up. And then. Leo, I think your point too is right, is then you can also interact with that information.
Leo Laporte [01:26:32]:
Right?
Jeff Jarvis [01:26:33]:
Explain this to me, or what's the word for that?
TWiT.tv [01:26:36]:
Or whatever, for the translations. There's a lot of opportunity for Translations, though, that's always going to be a problem because there are some languages that the syntax is totally different, so you have to wait for the whole sentence to be finished before you can translate it. So it's really going to depend on the language there.
Jeff Jarvis [01:26:51]:
Well, German goes on about 87 words before they get to the verb.
Leo Laporte [01:26:54]:
Yes.
Mike Elgan [01:26:55]:
And each word is nothing to be trifled with either.
Jeff Jarvis [01:26:58]:
Yes.
Leo Laporte [01:26:58]:
They have simultaneous translation with humans at the un, Right. I mean, it's fast enough.
Mike Elgan [01:27:03]:
It's expensive and difficult.
Leo Laporte [01:27:04]:
Yeah, yeah.
Mike Elgan [01:27:05]:
It's really specialized.
Mike Elgan [01:27:08]:
The other obvious use case for words across your glasses is your notes while you're speaking, publicly speaking. So a bunch of years ago, I don't know when it was 15 years ago or something, there was a company called Doppler Labs that came out with. There was a whole trend of hearables, and you'd bring up the app and you basically could. There was a menu of things. You could stop hearing the baby crying. Okay, I don't want to hear that. You know, and you. Traffic noise.
Mike Elgan [01:27:34]:
Don't want to hear that. Or you could do the opposite. You could boost traffic noise if you're riding a bicycle, for example.
Mike Elgan [01:27:41]:
And it was. It seemed like it was promising. It was. They were probably too early. They ended up closing up shop because they had all kinds of battery life and hardware issues. But that's really what we need. You know that. That's what we need more.
Mike Elgan [01:27:53]:
Even the language translation is. Is what you're describing with the hearing aids, but, like, on a much bigger scale to be able to just specifically eliminate certain sort. You know, I don't want to hear the air conditioner. I want to hear everything else. That sort of thing.
Leo Laporte [01:28:08]:
I forgot about Doppler Labs. They went out of business in, what, 2017 or something? Yeah.
Mike Elgan [01:28:13]:
Their product was called the Hear One.
Leo Laporte [01:28:15]:
Yeah. An ear pewter.
Leo Laporte [01:28:21]:
Oh, well.
Jeff Jarvis [01:28:22]:
Oh, well. A little early.
Leo Laporte [01:28:24]:
Let's take a little break. Lots more to talk about. You're watching Intelligent Machines. I have so much in here. You guys pick some stories that you want to talk about. I've already talked about the stuff that I really wanted to talk about. I have more things, too, but I'll leave it to you guys.
Leo Laporte [01:28:39]:
That's why we have you on. Paris will be back next week. She is at the Consumer Reports holiday party right now, and she posted that she is gossiping like crazy with her friends, her colleagues, her new colleagues.
Jeff Jarvis [01:28:55]:
Well, I imagine because she's in a hotter part of media, I imagine she's explaining news to people who aren't necessarily up on it.
Leo Laporte [01:29:03]:
Yeah, maybe that's it. Yeah, yeah.
Jeff Jarvis [01:29:06]:
I think she has the red string out on the wall now with a break in one hand.
Leo Laporte [01:29:09]:
See her? Can't you just see her doing this? Well, it all started.
Leo Laporte [01:29:17]:
This episode of Intelligent Machines is brought to you by the Agency agntcy. They're building the future of multi-agent software with Agency.
Leo Laporte [01:29:52]:
It's an open source Linux Foundation project bringing together all the players in this space to build. Agency is building the Internet of Agents. It's a collaboration layer where AI agents, which is really one of the hottest areas right now, can discover, connect, and work.
Leo Laporte [01:30:34]:
You'll be collaborating with developers from Cisco, from Dell Technologies, Google Cloud, Oracle, Red Hat and 75 plus other supporting companies to build next gen AI infrastructure together.
Leo Laporte [01:31:32]:
Agency.org and thank them for supporting intelligent machines. One hand washes the other.
Jeff Jarvis [01:31:40]:
Well, the Linux foundation got some more work this week.
Leo Laporte [01:31:42]:
They did. This is kind of in a related area. This was one of the news stories we were talking about on.
Leo Laporte [01:31:50]:
Windows Weekly as well.
Leo Laporte [01:31:53]:
This is from Anthropic, right?
Jeff Jarvis [01:31:56]:
Well Anthropic is contributing. Didn't they do the.
Leo Laporte [01:32:00]:
They did mcp.
Leo Laporte [01:32:03]:
So they have to participate in this. Their latest project is called. I'm trying to find it here.
Jeff Jarvis [01:32:11]:
It's line 106.
Leo Laporte [01:32:12]:
Let's see. Choose technology sector. AIML agency.
Jeff Jarvis [01:32:18]:
Where'd it go?
Leo Laporte [01:32:19]:
They have so many projects.
Jeff Jarvis [01:32:20]:
The Agentic AI Foundation.
Leo Laporte [01:32:22]:
That's it. Okay.
Jeff Jarvis [01:32:23]:
To promote standards for artificial intelligence agents. So it includes MCP, OpenAI's agents, MD Goose, a framework for building agents using Block. I think Google's involved in it. I think Amazon's involved in it.
Jeff Jarvis [01:32:41]:
Members include Google, Microsoft, AWS, Bloomberg and Cloudflare.
Leo Laporte [01:32:46]:
Wired says OpenAI, Anthropic and Block are teaming up to make agents play nice.
Leo Laporte [01:32:53]:
But this is it. Yeah, it's the same idea. It's also under the Linux Foundation. It's, it's. We've got to have standards for intercommunication. And I think this is, it's good because you can see these companies wanting to be siloed.
Jeff Jarvis [01:33:05]:
It also strikes me that this is kind of a new space. There's, you know, we had namespace back in the day. This is agent space and it. And it's at a level probably separate from the human web created for web. And there's going to be parts of the web that deal straight with agents and AI deals straight with agents and agents deal with agents. And it's going to, It's a. Of whole bunch buzzing world above us.
Leo Laporte [01:33:25]:
Yeah. And you know, the only thing that, the only cautionary tale here is that when there isn't a lot of money to be made, these companies are often willing to cooperate.
Jeff Jarvis [01:33:35]:
Yeah.
Leo Laporte [01:33:36]:
And then the minute there's money to be made, they silo off and say, no, no, we're, you know, you can't use OpenAI's tools with anthropic or Block's tools with.
Jeff Jarvis [01:33:46]:
Well, Amazon is cutting off AI from looking at their deals. So you can't use a shopping agent.
Leo Laporte [01:33:50]:
Exactly.
Jeff Jarvis [01:33:50]:
Can't do anything with Amazon as soon as there's money. So AWS is here, but Amazon's not.
Leo Laporte [01:33:54]:
We're in that magic time where nobody's making any money. So everybody's working with everybody else.
Mike Elgan [01:33:58]:
Although the outlier here, I think, is Google. Because Google does make a ton of money. I mean, they're obviously a very successful company, but you can see that you have this agentic browser world and most of them are based on Google's, you know, generous sharing of Chrome.
Leo Laporte [01:34:19]:
Right.
Mike Elgan [01:34:19]:
They're all based on chromium. And so that's. That kind of thing is something that only Google among those type of companies, generous.
Leo Laporte [01:34:26]:
But There's a little bit in it for them too.
Mike Elgan [01:34:29]:
Sure, for sure. But like, but, but now Chrome is, is one of those agentic browsers and you know, how much better would they do, would they do if they just said, hey, guess what, we're not going to, we're not going to let you use the open source version of Chrome anymore.
Leo Laporte [01:34:43]:
Right.
Mike Elgan [01:34:44]:
They'd really set, set their competition back.
Leo Laporte [01:34:47]:
Right?
Leo Laporte [01:34:50]:
Yeah. All right, yeah. I mean, look, you say the same thing about Android. There's been clear benefit to Google by creating an operating system that's a dominant telephone operating system that they get a lot of benefit from. Not just goodwill, but actual data from all the users, tons of data from all the users which they have since used to make the best AI models incidentally. But at the same time they gave it away, they open sourced it. And a lot of handset makers who would not have been able to make handsets without an operating system have an operating system. So yeah, maybe this is what the way capitalism is supposed to operate.
Leo Laporte [01:35:25]:
They make money on it, they get value out of it, but they also contribute value.
Jeff Jarvis [01:35:30]:
This is why we also want open source to be a countervailing pressure on all of that.
Mike Elgan [01:35:34]:
Right.
Leo Laporte [01:35:35]:
There are a number of.
Mike Elgan [01:35:36]:
Didn't this story feel like a throwback to a previous era where all these companies are getting together and now they have an acronym and they're all going to work on standards to get. Like when's the last time you saw a story like that?
Leo Laporte [01:35:46]:
The last time was Matter where everybody who made home automation equipment realized that we are all not going anywhere because nobody talks to anybody else. Maybe we should figure out a lingua franca so that all our home automation equipment could talk together. And Google and Apple, the Zigbee and Z-Wave and all of the other home automation companies said actually one of them didn't, I think. Was it Z-Wave that didn't participate? One of them didn't, but the rest said, yeah, you know what, we all with sink or swim at this point. We all got to get together.
Jeff Jarvis [01:36:20]:
They're all sunk.
Leo Laporte [01:36:21]:
Yeah, we're all sunk otherwise. Yeah, so that's. Yeah, you're right, it's not, it doesn't happen very often. It happens so infrequently that it's, it's notable when it does. But as I said, I think it only happens when they're desperate or there's no money at all.
Mike Elgan [01:36:37]:
Speaking of desperation and trying to make money, you have an item on there. There's a company called Svetka. What is it called, yes, Svetka is going to do an AI Super Bowl ad. But this raises an interesting question to me. There have been a bunch of companies that have come out with AI-generated ads. Coca-Cola famously, Google, Meta, Skechers, Toys R Us, McDonald's, American Eagle. They've all failed. They've all been rejected.
Leo Laporte [01:37:07]:
That Coca-Cola ad is so horrible.
Jeff Jarvis [01:37:09]:
Have you seen the Dutch McDonald's one?
Leo Laporte [01:37:12]:
Let me play this. I think I can play it because it's from Holland. Yeah, I don't want to.
Jeff Jarvis [01:37:16]:
It's also subtitled, so you can. You can. You can turn the music down in a second.
Leo Laporte [01:37:20]:
Yeah.
Leo Laporte [01:37:22]:
They've actually pulled it down.
Jeff Jarvis [01:37:24]:
Line 177. Has it?
Leo Laporte [01:37:25]:
It was a. AI generated. Well, it was also dystopian. Right. It was like.
Jeff Jarvis [01:37:30]:
But it's kind of a Dutch sense of humor.
Leo Laporte [01:37:32]:
Maybe that's it. But for whatever reason, maybe the Dutch aren't ready for this.
Jeff Jarvis [01:37:37]:
Well, it was the AI Pissed him off, I think.
Leo Laporte [01:37:40]:
Let me, let me.
Jeff Jarvis [01:37:41]:
I've got it. Line 177.
Leo Laporte [01:37:42]:
Yeah. The most terrible time of the year. All right, here we go. It's the most terrible time of the year.
Jeff Jarvis [01:37:51]:
For those of you listening.
Leo Laporte [01:37:54]:
It's got Santa in a traffic jam. It's got caroling singers who are in the wind and snow. A guy who just fell out his window because a Christmas tree knocked him over. Ice skaters slipping and falling. People falling out of a tract. I don't know. This is pretty funny. The cookies are talking.
Leo Laporte [01:38:10]:
Yeah.
Jeff Jarvis [01:38:10]:
Cookies are burned.
Mike Elgan [01:38:13]:
Relatives are horrible.
Leo Laporte [01:38:15]:
Exploding KitchenAid. I mean, cat pulling down the Christmas trees is a real deal.
Mike Elgan [01:38:26]:
But Everything's lovely at McDonald's.
Leo Laporte [01:38:28]:
Just hide out McDonald's. That's why they pulled it down. I don't think it was the AI but the. But the AI didn't bother me in this one much. The Coca Cola one. It really bothered me and it was so awfully obvious.
Mike Elgan [01:38:44]:
So the question is, why do these major advertising agencies or the major brands think that those commercials are going to succeed? And then they get out into the public and they have to sort of retract them because people are having dry heaves over them. Where's the disconnect there? Why do. And you see it in AI Slop of all kinds, where the creator of the. Of the. Of the AI Stuff loves it and thinks it's amazing. And then everybody else.
Jeff Jarvis [01:39:12]:
I think as soon as it becomes common enough, Mike. Then right now it's that occasional one that gets a story about it. Oh, no. McDonald's made an AI commercial everybody looks at it, everybody complains, oh, it's AI and it's not. Once we see the super bowl, once we see it happen enough. There's a commercial for insurance, and I could swear it's AI and we just don't know it. The woman on it is just kind of too perfect as she walks down the street. And I'll bet there's more of it than we know today.
Leo Laporte [01:39:35]:
Yeah.
Mike Elgan [01:39:35]:
Yeah. Yep.
Leo Laporte [01:39:36]:
Well, we're going to see a lot of. As you. As you point out, the Wall Street Journal points out, we're going to see a lot of AI Ads on the Super Bowl. I'm of the opinion that it isn't about AI Ads. It isn't even about whether they're good or bad, that we are really seeing a schism in the world between AI haters and AI lovers. Yeah. And the AI haters have a visceral reaction to AI like it is an insult to humanity, which is what Miyazaki said, that it is somehow evil. It's got a burn.
Leo Laporte [01:40:11]:
And those people. I'm not one of them. Fortunately or unfortunately, I'm not one of them. But I do honor their visceral loathing of anything AI because it somehow, it's also just.
Jeff Jarvis [01:40:27]:
It's technophobia, too.
Leo Laporte [01:40:29]:
I think it's something worse than technology companies. I feel like it's. I've never seen anything. I don't know, Leo, have you seen anything like this? There's been times where people say, oh, I don't want to ever use Microsoft Excel or whatever, or computer. You know, used to be. You'd go, oh, the computer's down at the dmv. Oh, what's new? Yeah, the computer is always broken, but that's not the same kind of visceral. I'm gonna.
Leo Laporte [01:40:55]:
I'm terrified by this feeling that AI is generated. I think it's extreme.
Mike Elgan [01:41:00]:
I share some of those feelings as well. So, for example, there's a children's product on the market right now. What's it called? It's called Sticker Box. And what it is is a little thing where kids say a squirrel with an umbrella, and LLM will. Will draw a black and white line drawing a squirrel umbrella, print it on a sticker and spit it out. And that's cool.
Jeff Jarvis [01:41:26]:
Yeah.
Mike Elgan [01:41:27]:
Is. It's. I think it's. I think it's terrible. And the reason it's terrible is. Is that drawing things and drawing stickers, for example, is a core part of human development. To be a passive consumer, to think something creative is something that Comes from machine. And to be a passive consumer of it is, is, I think.
Mike Elgan [01:41:49]:
Not good for kids. Kids should be drawing pictures every day. They should be like learning, learning how things look, paying attention. So, so I, so you see both things. Like, I had the same reaction you did. Hey, that's really cool. It's innovative. What an innovative, cool product.
Mike Elgan [01:42:03]:
What a great use for LLMs. How amazing. And then also, oh, this is terrible if it's a substitute for actual creativity. If it isn't, that's another thing altogether. But like people say, I mean, let's look at, let's look at the content creation world. Vast majority, far over 50%. We don't know exactly. Maybe as high as 70% of all the content being posted this year on the Internet is AI generated or partially AI generated.
Mike Elgan [01:42:32]:
There are some people saying that by the end of next year or maybe by 2030, it'll be 99.9%. And when people hear stuff like that, I think, I think this, it's reasonable to say that if the, the quantity of AI stuff that's out there and that people see is so overwhelming that actual writers, actual photographers, people who paint, you know, all that just throw their hands up and say, what's the point? Nobody's ever going to see or read what I'm doing.
Leo Laporte [01:43:00]:
That would be bad.
Mike Elgan [01:43:01]:
That's problem problematic. So I think that, I think it's reasonable to say, okay.
Mike Elgan [01:43:08]:
Articles that are written, photographs, painting, all those kinds of things, what the reason they exist are for people to share their experiences with each other, not for just the consumer to receive the, the hive mind version from the machine. Like it's, it's supposed to be a communication, two way communication. I don't think that's unreasonable.
Jeff Jarvis [01:43:28]:
So Mike, I'm, I'm doing a lot of research now on mass media and the history of mass media and I'm reading about the cultural part of it and I just read a wonderfully grumpy essay by Ernst Fundenhage, I think his name is doing. Making every complaint you just made about television.
Mike Elgan [01:43:44]:
Yeah.
Jeff Jarvis [01:43:44]:
And saying that it robs us of creativity and, and that people aren't going to create on their own anymore.
Leo Laporte [01:43:49]:
Because we remember that.
Jeff Jarvis [01:43:50]:
Watch this.
Leo Laporte [01:43:51]:
And you told us that people were worried that reading novels would reduce the imagination.
Jeff Jarvis [01:43:57]:
Oh worse, that they would corrupt the morals of women and children.
Jeff Jarvis [01:44:01]:
So I mean, not men.
Leo Laporte [01:44:02]:
There's a history of technology doing this. But I understand their morals have been.
Mike Elgan [01:44:07]:
Corrupted.
Mike Elgan [01:44:09]:
From their perspective.
Leo Laporte [01:44:10]:
Right, but we're already in that in a way, Mike. We're already in that world where kids sit down in front of an iPad and don't get up for eight hours. They're not drawing and they're not thinking. They're not.
Jeff Jarvis [01:44:22]:
They're thinking of a creative idea, a muffin in a boat that they couldn't have drawn. And. And I think that's.
Leo Laporte [01:44:30]:
And they're gonna get. By the way, I just shared this from the regular car reviews subreddit. Somebody could. Took track. Took. Kept track of all the different axle configurations in the Coca Cola.
Leo Laporte [01:44:45]:
Ad.
Jeff Jarvis [01:44:46]:
Truck fingers.
Leo Laporte [01:44:47]:
The tires are moving all over the.
Mike Elgan [01:44:49]:
Place, and that's in just one ad.
Leo Laporte [01:44:51]:
So really what the problem there is. And I've noticed this, the frame rate's also terrible in the Coca Cola ad. I think this is just a crappily produced ad. You know, I mean, you could fix that if you're paying attention.
Benito Gonzalez [01:45:03]:
But here I'm speaking as a producer of this kind of stuff before, like, if I had turned in something like this, you know, how much they would have said, like, you know, this is terrible. You know, you know, how.
Jeff Jarvis [01:45:12]:
How bad.
Benito Gonzalez [01:45:13]:
Yeah. How bad the review would have been if I had submitted this sub. But then an AI can get away with this.
Benito Gonzalez [01:45:19]:
Why.
Benito Gonzalez [01:45:20]:
Why is that?
Leo Laporte [01:45:20]:
Well, but. But, Benito, if you said I had AI do this, I would go, oh, that's really neat.
Jeff Jarvis [01:45:25]:
Exactly.
TWiT.tv [01:45:25]:
So why is that?
Leo Laporte [01:45:28]:
Well, because it's amazing that it could even do a truck, let alone.
Jeff Jarvis [01:45:31]:
You're old, Benito. You're old hat. Sorry.
Leo Laporte [01:45:34]:
Actually, it's interesting because Apple's response to this was to do an ad that looks a little bit. It's a little fanciful, a little magical. But they did it with puppets, and they made it very clear. They did it with puppets, and it was shot on an iPhone. There's no AI involved.
Jeff Jarvis [01:45:48]:
At least they say, well, because it's Apple, they don't know how to do A.I.
Leo Laporte [01:45:51]:
Yeah, maybe. Yeah, maybe. I don't know. I imagine we'll see. Start seeing a lot more AI Ads. But I also imagine that in time, we won't know we're seeing a lot more.
Mike Elgan [01:46:01]:
But I just. I just want to go back to one thing Jeff was saying, and I think that there's.
Mike Elgan [01:46:08]:
Is there a distinction to be made between moral panic saying that, you know, this thing is going to create a crisis? People have been saying about new media forever.
Jeff Jarvis [01:46:17]:
Give it a second.
Mike Elgan [01:46:18]:
There we go.
Leo Laporte [01:46:20]:
Moral panic, by the way, with AI of course.
Mike Elgan [01:46:23]:
Exactly.
Leo Laporte [01:46:24]:
And.
Mike Elgan [01:46:25]:
And the idea that that content is often a conversation between two people, that. That it's a two way Street. It's not just about content.
Mike Elgan [01:46:36]:
Consumers with you, Mikah.
Jeff Jarvis [01:46:37]:
I agree the conversation is the essence of society, but that again, was the. Was the argument that Funded Hog was making about television. That is a cut off the conversation.
Mike Elgan [01:46:45]:
Yeah, yeah. And. But here's an extreme example. For example, let's say. So I'm here in Chile, the birthplace of Pablo Neruda, right? So just look at poetry. I know a lot of us don't consume poetry to use the C word that much, but if you are a passionate reader of poetry, does it matter if the pain and suffering and loss and all the emotions being expressed by a poet came from a person who experienced those things and is communicating them to you and you recognize them in your own self? Does it matter? Or are you just the consumer?
Jeff Jarvis [01:47:25]:
I know it doesn't matter if there's a producer, but let's look at another thing. What if someone has a lived experience and they're not a poet or they're not or. The example I use is they can't draw, but they tell their story, and they could not have told the story as effectively if they didn't have the tool to help them.
Mike Elgan [01:47:41]:
Them do that.
Jeff Jarvis [01:47:42]:
And so I think that there's, you know, you can say the thing about the camera allows people to do just that. There's other tools that enable them. Is the. Is. Is the. Is the experience and the humanity.
Jeff Jarvis [01:47:54]:
Authentic? Is what matters in the end.
Mike Elgan [01:47:57]:
Right. So. So I guess the essential question I'm trying to get at is, is it. Is it legitimate for somebody to. To care if. Because a lot of the, A lot of the criticism of AI general content are that it's garbage. Right? But it very soon will not be garbage. Like, very soon it will be just excellent.
Mike Elgan [01:48:17]:
Right. Really good. And, and is there room to. For us to say, okay, it's legitimate to care if a person created it or not? And, and this is, this is the argument in favor of tools that let you turn on AI and turn it off if you want to, as opposed to not having that ability not to know?
Jeff Jarvis [01:48:37]:
I think the question is, you're asking the right question, Mikah, which is, where is it relevant to the. Well, I'll use the same bad word, consumer. The audience. That's also a passive word. The other end of a conversation, one hopes. And you know, there were complaints about word processing in this way. There were complaints about, obviously, the camera. There were complaints about Photoshop.
Jeff Jarvis [01:49:00]:
And so I think the question becomes, when is it relevant and when need it be known? So I went through, on my book Hot type. I went through this where I tested a perplexity thing and I put an idea, and I won't go into the details right now, and it came up with a great phrase and I wanted to use that phrase. And I couldn't say as perplexity observed, because that'd be really stupid, but I felt I absolutely had to cite it and let it know that I had done this. And so I footnoted it, and I hope an amusing footnote because it was relevant to the reader to know that. Yeah, to know that I didn't come up with that phrase. And I wasn't going to pretend that I did, but at some point Photoshop was. We had to always know when you use Photoshop. And then the point became assumed.
Jeff Jarvis [01:49:47]:
So I don't know where that line becomes.
Mike Elgan [01:49:49]:
I mean, some of the most pro AI people in the world are to be found on some of the. The AI subreddits on Reddit. They are. So, I mean, it's really shocking how in favor of it they are. To the point where I got into. Yeah, I've gotten into arguments with people where I champion the idea that content curation sites like YouTube or, you know, you name it, whatever, should. There should be a toggle for you to switch it on and off. And, and there are a tiny number of companies that do this.
Mike Elgan [01:50:20]:
Most don't. Every week or two, another one will come out with a feature like that. But it's moving very slowly and I argue in favor of that toggle. I think, I think it should be able to be toggled on and off. There are people on Reddit who will actually say, with, I can presume a straight face, that people should not have the ability to turn it off.
Jeff Jarvis [01:50:40]:
Oh, that's.
Mike Elgan [01:50:40]:
And I don't understand that. Find that highly objectionable.
Leo Laporte [01:50:44]:
I think you asked really a great philosophical question, which is, should it matter to the consumer if you can't tell? But I, you know, I think, look, here we are three humans having a conversation. You could do a podcast with Notebook LM that has no. No humans involved. And right now it's not as good as a real human conversation, but even if it were, I think we would still prefer. And we would somehow know. I don't know, maybe we wouldn't.
Jeff Jarvis [01:51:09]:
There's an authenticity there. No, I think, I think so.
Leo Laporte [01:51:11]:
I think we would. I think we.
Mike Elgan [01:51:12]:
But, but again, I think there's room for both. So. So if you really agree 100%, for example, you. You two are personalities. You're. You're you're, you're iconic. Right. And so, so people feel like they know you to a certain extent and they really want your take and they don't want the AI version.
Mike Elgan [01:51:31]:
But let me tell you, really a creative, really wonderful use of a Notebook LM that I did. My niece got married last year and there were WhatsApp threads involving, I don't know, 25 people about, you know, the logistics. It was a destination wedding would go here, rent a car over there, there's a gas station, there's blah, blah. There's just an infinite amount of chatter that you couldn't get wrap your head around. So right as the week when everybody was going to go to this destination wedding, I created a podcast about just. I dumped the entire thread from WhatsApp into Notebook LM and created a podcast and sent it to everybody and suddenly everybody had clarity about what's. Yeah.
Jeff Jarvis [01:52:13]:
Made it digestible. Yes, yes, yes. I think that's where it's very useful. I summarize this for me, come up with something I think that works really well.
Leo Laporte [01:52:21]:
Here's a story from. From Wired: Cat 10 Barge AI slop is ruining Reddit for everyone.
Leo Laporte [01:52:29]:
Because there's so much AI content now in the most popular subreddits that the AI is taking it over. But here's the real problem. You can't tell because so many people are influenced by AI style.
Mike Elgan [01:52:45]:
Yeah.
Leo Laporte [01:52:45]:
That they're writing in the start beef, you know, with M dashes.
Jeff Jarvis [01:52:50]:
Well, the. Stop with the EM dashes. EM dashes are okay. I use crazy.
Mike Elgan [01:52:54]:
I'll defend M dashes to the end of my days, but I've got have a problem with that. You know, what's really happening? It's not. And it's not a chatbot style that people are mimicking. It's an international style. They can tell that, that the UK legislatures who have given, given speeches in the House of Commons because they're using American phrases all of a sudden. Or, or they're. They, they use the word delve too much. Right.
Mike Elgan [01:53:16]:
Well, where does delve come from? Every Nigerian sentence has the word delve in it. And so this, this version of English from somewhere else.
Leo Laporte [01:53:23]:
But that already happened with television. That we globalized journalists too.
Mike Elgan [01:53:28]:
Yes, yeah, yeah. But, but, but the problem, the bigger problem I think in on Reddit is, is so many of the comments. So people want to make a point about something and they just get the point from ChatGPT and paste it in.
Leo Laporte [01:53:42]:
Right.
Mike Elgan [01:53:42]:
What do you do about that?
Leo Laporte [01:53:44]:
Yeah, yeah, I know. I completely agree with you. And you can sort of tell because there's bullet points. Point, bullet bullet points.
Mike Elgan [01:53:51]:
And I mean, I, I get accused, I've been accused on Reddit of, of having AI generated content because you. Because I write. I write in complete sentences and I, you know, and, and, and I use, use to use a lot of EM dashes. And it's actually affected me to the point where I, I'm no.
Jeff Jarvis [01:54:08]:
The M Dash Defense League. Last week, two weeks ago. All right, you've got to join.
Mike Elgan [01:54:12]:
I gotta join. I want to join. I just tell me where to send the.
Leo Laporte [01:54:16]:
It's, it's, it's too bad because you're going to be marked as an AI from now on, no matter what. Just because you can. My kids made fun of me because I use punctuation and uppercase and lowercase letters in my text messages.
Mike Elgan [01:54:26]:
Yeah. It's madness.
Leo Laporte [01:54:27]:
It's just cultural. It's like, you know, we can tell you're not one of us.
Mike Elgan [01:54:31]:
Yeah.
Leo Laporte [01:54:31]:
You know, for some reason.
Jeff Jarvis [01:54:32]:
Can I ask you both about a question about a story that, that I put in here?
Leo Laporte [01:54:36]:
Yes.
Jeff Jarvis [01:54:36]:
Some semaphore is reporting that AI critics funded AI coverage at top news release rooms. Line one.
Leo Laporte [01:54:44]:
How do they fund it?
Jeff Jarvis [01:54:45]:
Well, I'm going to explain it to you. So the.
Jeff Jarvis [01:54:50]:
What's it called here? Tarbell Foundation has funded a bunch of reporting roles, jobs, fellowships at places like TWiT.
Jeff Jarvis [01:55:03]:
Cnn. Where is it here? I'm missing it out here.
Leo Laporte [01:55:07]:
The, the Fault.
Leo Laporte [01:55:10]:
By money.
Jeff Jarvis [01:55:11]:
Well, they. Well, but so this. Hey, how many times do you hear on NPR the health coverage is underwritten by the blah, blah. Right. So this seems to be a foundation, the Tarbell Foundation. And again it's, it's Bloomberg time, the Verge, LA Times. And it's not just AI critics, it's test grails, it's EA people. And if you go to the Tarbell Foundation, you'll see that the money comes from the likes of Coefficient Giving, which is formerly Open Philanthropy, which is Moskovitz's.
Jeff Jarvis [01:55:44]:
Coefficient Giving, which is formerly open philanthropy, which is Moskovitz's.
Leo Laporte [01:55:49]:
That's an effective.
Jeff Jarvis [01:55:50]:
Right. The EA Infrastructure Fund, the Future of Life Institute. This is all EA and Semaphore said that they had one of these fellows, but they ended the relationship and did not publish any of the fellows' work.
Leo Laporte [01:56:04]:
Oh, interesting.
Jeff Jarvis [01:56:06]:
Now, I had a friend of mine, a journalist I respect greatly, who came in after I made this complaint online and said this is really troubling and said, I won't name him or name what organization, but it's someplace that's funded by the Tarbell. And he said, you know, like we don't understand how it actually works is very little influence. And you know, indeed, I raised money from Google and Facebook, a university and the Google News Initiative helps pay for innovation at news organizations. And I get all that, but this just got too close for comfort for me because it's people with an AI agenda funding AI reporting jobs in major outlets.
Leo Laporte [01:56:40]:
That's what's happened. That's why we shouldn't accept outside funding for independent news organizations.
Jeff Jarvis [01:56:47]:
Well, but, but when, when your, your capitalistic funding goes away, Devil's advocate here, what do you do? Lion, which is a local online organization, just said today that the majority of their small news sites at local news across the country are now getting their money from foundations, charitable.
Leo Laporte [01:57:04]:
Well, there you go.
Jeff Jarvis [01:57:05]:
So everybody has an agenda. So this, did this, does this trouble you or am I overdoing the same.
Leo Laporte [01:57:11]:
As it ever was? It's just, it's just, this is the new thing. I mean, it was always a myth that any of these organizations were independent and objective, wasn't it?
Mike Elgan [01:57:28]:
The attempt at objectivity.
Mike Elgan [01:57:32]:
Is what we're losing. And there were many benefits of that.
Leo Laporte [01:57:37]:
I agree.
Mike Elgan [01:57:38]:
You could argue that there's no objectivity, that every journalist is biased and so on, but journalism developed over the decades during the 20th century, a system for, for approaching objectivity systematically.
Leo Laporte [01:57:53]:
Right.
Mike Elgan [01:57:53]:
Which I think is something I would prefer not to lose. And I think you said it perfectly.
Leo Laporte [01:57:59]:
The attempt at objectivity.
Mike Elgan [01:58:01]:
Yeah.
Leo Laporte [01:58:01]:
Understanding that that's difficult and maybe impossible, but at least that was a goal.
Mike Elgan [01:58:05]:
Yes. And, and so, you know, I'm, I write opinion columns, and so I, you know, I spend a lot of time on the difference between, you know, attempted objectivity and the opposite. Right. So it's, it's something we should, we should really.
Mike Elgan [01:58:23]:
Profoundly look at. And the, the other thing that, the bigger issue here is the, Is the pro. The financial issue of traditional media and what it's going to do with its big problem, which is that it's just dying. People say, I don't think it's dying, dying, but it's just shrinking more than we want it to shrink. And it's harder to make a living. We don't have a robust ecosystem of independent journalism. We don't have local newspapers like we used to, and I think that's a major problem. And so local people, instead of focusing on local issues and local news and local politics, are getting national news and international news through whatever the algorithms are handing them.
Mike Elgan [01:59:04]:
And they're listening to.
Mike Elgan [01:59:07]:
Fairly toxic radio in many cases. And this is, this is radicalizing people and causing all kinds of problems.
Leo Laporte [01:59:13]:
That's a 20 year problem, by the way.
Mike Elgan [01:59:15]:
Yeah. Yes. Well, it's, it's having worked in radio.
Leo Laporte [01:59:18]:
I've watched it kind of slide down that.
Mike Elgan [01:59:21]:
I mean, you, you drive through many parts of the country and you know, you, you get religious radio and Rush Limbaugh and far.
Jeff Jarvis [01:59:31]:
Did you notice your audience changing and radio?
Leo Laporte [01:59:34]:
Oh, yeah, absolutely. Yeah. In fact, one of the reasons I gave up radio is really had become a right wing propaganda machine.
TWiT.tv [01:59:42]:
That's mostly because of consolidation though, right? Like the consolidation.
Leo Laporte [01:59:45]:
Oh, there's a lot of reasons for it.
Jeff Jarvis [01:59:47]:
Yeah.
Mike Elgan [01:59:47]:
And local TV as well.
Leo Laporte [01:59:49]:
It's a failing medium partly because of its only a handful of owners. There's a lot of reasons for it. Mostly it's because the ratings were very good. That those shows did better than the other shows. They did better than my shows.
Leo Laporte [02:00:03]:
I lost my midday radio show in San Francisco to Rush Limbaugh and I can't. You did.
Jeff Jarvis [02:00:09]:
I didn't know that.
Leo Laporte [02:00:10]:
Yeah. And can be R. And I cannot deny that the ratings went through the roof when they replaced me with Rush Limbaugh. That was a good move. From a purely economic point of view.
Mike Elgan [02:00:22]:
What frustrates me about criticism, you know, you hear it everywhere, that nobody trusts journalists, you know, journalism. Journalists are biased and so on. This is not actually what's true. Like everybody who has a serious beef about journalism got their ideas from journalism. Like, you know, it's, it's there, there's a lot of variety. And, and the other frustrating thing to me is that I think journalism, I think the best journalism now is better than its journalism has ever been. And, and, but, but people are not.
Leo Laporte [02:00:54]:
By the way, 404 are two excellent outlets for that.
Mike Elgan [02:00:58]:
Yes.
Leo Laporte [02:00:59]:
Outlets for that.
Mike Elgan [02:01:00]:
And there, there's a few among many. There's a ton of great writing, great journalism. On Substack, for example, there are fantastic publications. And the problem is that the algorithms are not favoring those we get. Like people are immersed in junk or they actually think that real news is fake news and fake news is real news because they've been, been, They've been hit with that message repeatedly. And so I just, I, I don't accept the idea that, that, that journalism is bad. I do accept the idea that journalism is struggling to keep it as a, as a thriving business.
Leo Laporte [02:01:37]:
Well, Jeff, it's up to you.
Mike Elgan [02:01:39]:
Yep. So we're counting on you, Jeff.
Jeff Jarvis [02:01:40]:
Yeah. I'm retired. I'm emeritus for old. All Right.
Leo Laporte [02:01:44]:
I want to take a break. Believe it or not, we're ready to wrap up this. It's great to have you on Mike. Mike does a speaking of really incredible journalistic sources, his Machine Society AI newsletter on Substack. I don't know where you.
Mike Elgan [02:01:59]:
Substack.
Leo Laporte [02:02:00]:
Yeah, Substack is well worth reading. He also does a podcast with Emily Forlini about AI and absolutely. There's a perfect example of great journalism that still exists. I'd say the same really for Jeff Jarvis. He does a podcast with Jason Snell, who, by the way, was great last week, was wonderful to have him on the show called AI Inside.
Leo Laporte [02:02:23]:
There's still a lot of great independent journalism out there. It's always been the problem, hasn't it, ever since from day one of the Internet era that what it brought us was a avalanche of content and finding the good among the bad.
Jeff Jarvis [02:02:37]:
But that's where an opportunity is.
Leo Laporte [02:02:38]:
Well, it's always been a problem. And my contention is that a vast increase in the number of.
Leo Laporte [02:02:47]:
Shows and articles and outlets, yes, increases the amount of crap, but it also increases the small percentage of great stuff and it gives more chaff.
Jeff Jarvis [02:02:57]:
There's more wheat.
Leo Laporte [02:02:57]:
Yeah, more chaff and more wheat. And also a lot of that wheat comes from people who otherwise would not have a chance, would not have a voice.
Mike Elgan [02:03:05]:
Right. It gave everybody a voice, which is, is the problem, but it's also the problem. So many, so many great journalists and great journalism is happening is the result of that. I mean, just look at Pliny the, the Liberator. Perfect. That's a kind of journalism, right? And, and, and this would not be possible in, you know, 30 years ago for somebody to be that influential and to exactly give us what they give us.
Leo Laporte [02:03:29]:
So I, so I think in the net that is positive, but it does require us as consumers of content of journalism to be more intelligent about it and to, and unfortunately, not everybody has the motivation to do that, you know.
Jeff Jarvis [02:03:44]:
But the responsibility has always been ours.
Leo Laporte [02:03:47]:
It's always been ours. It hasn't. It always has been.
Leo Laporte [02:03:53]:
I say this every time. We live in interesting times and the challenges are great, the opportunities are great. And that's just, that's the nature of, of the times we are in. And that's what we cover on all of our shows on TWiT, including Intelligent Machines. I hope you are a member of our club. I always like to take this, this time of the year to thank those people who have been all this year members of Club TWiT. Their support is vital for keeping us on the air, keeping us doing what we're doing. Yes, we have advertising and advertising covers about 3/4 of our costs, but only 3/4 of our costs.
Leo Laporte [02:04:27]:
Without your help, we couldn't do what we do. If you're not yet a member of Club Twit, now is a great time to consider it. We've got a 10% off coupon for the annual memberships. That's good only through Christmas Day December 25th. We also have a two week free trial. There's family memberships and company corporate memberships.
Leo Laporte [02:04:47]:
But most of all. And there's a lot of content in that, free versions of the shows. There's a lot of benefits. But most of all, you're helping us continue to do what I think we do very well and we do uniquely at TWiT. So TWiT, TV Club TWiT. Thank you to those of you who have already joined and welcome to those of you about to join. We really appreciate it. Our show today brought to you by Outsystems.
Leo Laporte [02:05:08]:
Outsystems is the number one AI powered low code development platform. And it solves a conundrum that's been part of business ever since digital technology came to business. The old build versus buy conundrum. We've faced it. Do you go out and you buy generic software that sort of does the job you need to have done and maybe can be customized, but maybe not and you just kind of have to fit yourself into that, into that mold, or do you go to great expense and great risk and create your own software? Do you build it or do you buy it? Well, good news. There's a third way. Thanks to Outsystems, organizations all over the world are creating custom apps and AI agents on the Outsystems platform. And with good reason.
Leo Laporte [02:05:54]:
Outsystems is all about outcomes, helping teams quickly deploy apps and AI agents and deliver results. Their version of low code plus AI makes it easy to develop the apps that fit exactly your needs and the success stories are endless. They help the top US bank deploy an app for customers so this is a customer facing app to open new accounts on any device. They sped up onboarding time by 75%. Huge improvement. Customers loved it. They helped one of the world's largest brewers. Here's an in house solution.
Leo Laporte [02:06:29]:
Deploy a solution to automate tasks and clear bottlenecks in their process which delivered a savings of a million development hours. They even helped a global insurer accelerate development of a portal and app for their employees. An intranet app which delivered a 360 degree view of customers and a way for their agents to grow policy sales. Outsystems can solve this conundrum. Build versus buy. Yes, you can do it all. The Outsystems platform is truly a game changer for development teams. With AI powered low code teams can build custom future proof applications and AI agents at the speed of buying.
Leo Laporte [02:07:09]:
But you get something that's perfect for you. Plus because it's Outsystems, the platform gives you automatically fully automated architectures. You get security integrations, data flows, permissions. Those are table stakes. That's all the stuff you need and expect. And Outsystems provides it at the go. With Outsystems, it's so easy to create your own purpose built apps and agents. There's really no need to consider off the shelf sameware solutions again.
Leo Laporte [02:07:35]:
OutSystems number one AI powered low code development platform. Learn more at outsystems.com TWiT that's outsystems.com TWiT we thank them so much for supporting Intelligent Machines. We thank you for supporting us too by visiting them there at that address. outsystems.com TWiT all right, so I see this thing that you put in here, Jeff, the Resonant Computing Manifesto. Mike Masnik supporting it. Simon Willison, one of the signatories. A lot of Tim O'Reilly, a lot of names. You know, Alan K, Bruce Schneier, Hank Green.
Leo Laporte [02:08:16]:
But what is it? I'm trying to read it. I don't understand. What are we talking about?
Mike Elgan [02:08:20]:
Lawrence Lessig.
Leo Laporte [02:08:21]:
Lawrence Lessig. What is the Resonant Computing Manifesto? Would somebody explain to me?
Jeff Jarvis [02:08:26]:
I'm not sure I fully understand either.
Leo Laporte [02:08:28]:
Okay, maybe they just signed it because they didn't understand it, but it seemed like a good idea.
Jeff Jarvis [02:08:32]:
Mike said it to me.
Jeff Jarvis [02:08:36]:
I get a little hung up. At the top it says feeds engineered to hijack attention and keep us scrolling, leaving us a trail of anxiety and atomization in their wake.
Jeff Jarvis [02:08:45]:
We've been there. Okay, so I'm not sure that's the problem. I think we are the problem. So right there I kind of get held up. They say the people who build these products aren't bad or evil. I salute that. They're people and they're trying to come up with a worldview signature of where we go. So there's five principles as a starting place and I think it's hard to argue with these.
Jeff Jarvis [02:09:07]:
The things that we have, privacy.
Jeff Jarvis [02:09:10]:
Whoever controls the context holds the power. That is dedicated software should work exclusively for you. Ensuring contextual integrity where data use aligns with your expectations. I'm not sure what that means. That it's plural. No single entity should control the digital spaces we inhabit. Amen. Mike Masnik.
Jeff Jarvis [02:09:28]:
Protocols, not platforms. That it's adaptable software should be open ended, able to meet the specific context dependent needs of each person. Not sure I understand that. Pro Social technology should enable connection and coordination. Okay, I'll. I'll sign on to that. So. Yeah, I didn't sign because I'm not sure that I fully understand all of it.
Leo Laporte [02:09:52]:
Well, thanks, Mike. I don't. We don't know what it means.
Mike Elgan [02:09:57]:
I don't know how it works. I mean.
Mike Elgan [02:10:00]:
Is Meta going to read this and go, oh, okay.
Leo Laporte [02:10:02]:
Oh, yeah, we should. I think it's more dedicated, plural, adaptable and pro social. Yes.
Jeff Jarvis [02:10:06]:
I think it's more to the people who are on the ground and saying that there are some ethics we need to discuss and I think that's fine.
Mike Elgan [02:10:13]:
Where are you going to spend your money? Where are you going to spend your attention?
Jeff Jarvis [02:10:15]:
Yeah, yeah.
Leo Laporte [02:10:16]:
Maybe we can get Mike Masnick on or.
Jeff Jarvis [02:10:19]:
I think that'd be great.
Leo Laporte [02:10:20]:
Somebody who gets.
Jeff Jarvis [02:10:22]:
Let us also plug while we're on Mike line 173. Mike's texture is incredibly important. It covers all the issues that matter. And Mike has a new fundraiser out because he needs it and people should support TechDirt. So he has his. It's very Mike. If you contribute more than $100, you'll get a 30 years of Section 230 commemorative coin.
Leo Laporte [02:10:47]:
We talked about this because we had Kathy Gellis who writes for TechDirt on TWiT on Sunday and she also mentioned this. Yes, I completely think Mike deserves every.
Jeff Jarvis [02:10:57]:
Mike's on my year end list of contributions. Absolutely.
Leo Laporte [02:11:00]:
Yep, absolutely. You have till January 5 to back the Tech Dirt. If you do a hundred dollars or more, you get the coin shipping in January or February. I don't really care about the coin.
Jeff Jarvis [02:11:11]:
No, I don't either.
Leo Laporte [02:11:13]:
But I do care a lot about tech dirt. And you know, like all of these efforts, it probably doesn't pay very well.
Jeff Jarvis [02:11:21]:
No, this is the independent journalism. But he does research that matter. Not just journalism, but research that matters. And paying people like Kathy Gellis to do the work. So that's good.
Leo Laporte [02:11:29]:
I agree. Yeah.
Jeff Jarvis [02:11:30]:
Meanwhile, I wish Paris were here for this one. Did you see the line above that? Sam Lesson ran an etiquette camp for Silicon Valley boys.
Leo Laporte [02:11:40]:
Sam, who I love, we've had on the show married to Jessica Lesson, her former boss at the Information. He was a. He is A VC and an investor.
Leo Laporte [02:11:50]:
Tech Bros head to etiquette camp. This is from Washington Post. As Silicon Valley levels up its style.
Jeff Jarvis [02:11:57]:
So what does it teach them? How to buy suits and honest to God, I swear to go. How to eat caviar. Oh, no bump on your hand? We.
Leo Laporte [02:12:06]:
No, do not eat caviar with a bump on your hand.
Jeff Jarvis [02:12:09]:
Well, well, don't.
Leo Laporte [02:12:10]:
Don't. That's not how you eat caviar. That's how.
Leo Laporte [02:12:15]:
Russian oligarchs eat caviar. That's how tech pros eat caviar.
Jeff Jarvis [02:12:19]:
And even so, if, if you're making the proper consumption of caviar your issue, you're kind of missing no points of what's in Silicon Valley?
Mike Elgan [02:12:28]:
I think, I think what's. What the market being served here is that people in Silicon Valley are people who, who spend all their time doing engineering, and then all of a sudden they find themselves very wealthy, and so they don't have a function in the world of wealth. How do you eat caviar? I've been eating nothing but ramen for the last 20 years. And so I think that's what he's getting at. I do believe in etiquette. I think etiquette's fantastic. But it shouldn't be focused around eating caviar. It should be around greetings and not ghosting people and table manners and things like that.
Mike Elgan [02:13:01]:
And also international etiquette, I think, would be. Would be great. It'd be great for people to travel around with confidence and know how to, how to behave without offending people internationally.
Leo Laporte [02:13:10]:
I have a better idea. You want to learn etiquette? Go on one of these wonderful Gastronomad. Yes. Trips.
Mike Elgan [02:13:18]:
Yes.
Leo Laporte [02:13:19]:
You'll be with a few.
Jeff Jarvis [02:13:20]:
You will learn how to eat caviar.
Leo Laporte [02:13:22]:
Seriously. You will learn how to be in a culture, to enjoy the culture, to be not an ugly American, but a sophisticated consumer of great food, great wine, great conversation. You'll learn how to converse. You have an etiquette school already, Mikah, @gastricnomad.net I mean, honest. I'm serious about that.
Mike Elgan [02:13:45]:
Yeah, I think that's a great way to look at it. We eat certain things all the time, like olive oil, on gastronomatic experiences. You eat the world's greatest olive oil, but you understand exactly what makes it great, how do you eat it, and so on.
Leo Laporte [02:14:00]:
A tech bro should go on this. I don't think you watch them, but you could, you could pretend you weren't a tech bro and go on these experiences and you'd meet Some wonderful people. You'd learn how to converse. I always thought the best thing I got out of my Yale education was not the classes, although there were some pretty good classes, but the fact that every week the master of the college would have a cocktail party that you would go to and you would learn how to stand around with other intelligent people and converse.
Jeff Jarvis [02:14:30]:
That's where J.D. vaz learned forks.
Mike Elgan [02:14:33]:
Oh, my God.
Leo Laporte [02:14:33]:
That's another matter.
Mike Elgan [02:14:34]:
Well, you know, the funny thing about. Yeah. The funny thing about gastronomic experiences is that everything we do is a secret. So we, we want. We want people to experience this, but we can't tell you what you're going to be experiencing because it's a secret. And so what I would encourage everybody to do, if you're cur. If you've heard this, if you heard about gastronomic experiences, you're like, I still don't quite understand what it is you do. Sign up for our newsletter.
Mike Elgan [02:14:59]:
Go to our site gastronomy.net and. And you'll see the newsletter tab at the top. Sign up. It's free private information. Will keep your details private and all that stuff. And, and little by little, you'll get that information. Or you can go to gastronomad.net.
Mike Elgan [02:15:17]:
Which stands for Gastronomata experience testimonials. So it has all the things that, you know, some of the people who've done experiences without revealing any of the secrets tell you what it was like. And so this, this can be helpful if you're curious about this. But, but especially, please get the newsletter because that, that's a great resource for all this stuff. And there's a lot of etiquette stuff in there, too. So there's Charlie and Julia.
Leo Laporte [02:15:43]:
We made some really good friends. We went on the Oaxaca experience. And I really do think, I'm actually genuinely saying this would be a really good way if you have a little rough around the edges. You don't know how to be around people, to be with a group of people like this. All of them are. I mean, most of the people are repeats now, right? I mean, everybody.
Mike Elgan [02:16:02]:
I mean, these are definitely, yes, definitely.
Leo Laporte [02:16:04]:
Become devoted to this. And, and they are sophisticates, they're intelligent. They know how to make conversation. Certainly Mike and Amira do. And, and they're great facilitators, I think. Even if you had some refugees and.
Jeff Jarvis [02:16:18]:
You get exposed to cultures and appreciate the cultures, that's the most important part to me.
Leo Laporte [02:16:22]:
Yeah, it would transform you, I think.
Mike Elgan [02:16:23]:
And it's, it's the Real culture. It's not the simulacrum that you get with tourism, normal tourism. You know, it's funny because, because we, we, we get a lot of initial people who just trust us. Right. The first time you do one, you're like, okay, I'm just trusting you guys. I don't know. You're not telling us what we're going to do. There's no itinerary.
Mike Elgan [02:16:42]:
And on the, by the way, as I mentioned, I think I mentioned on TWiT, the next year's our 10 year anniversary.
Leo Laporte [02:16:54]:
Wow.
Mike Elgan [02:16:54]:
The company. Yeah, thank you. And we're doing new locations.
Jeff Jarvis [02:16:58]:
I was going to ask you what are there new places you've announced?
Mike Elgan [02:17:02]:
We have announced some of them and we haven't announced others. So we have announced Tuscany, for example. That's, that's, that's one of our newer ones. And we have a few other places that we're going to be revealing over the next couple of months.
Jeff Jarvis [02:17:18]:
Give us a few privacy.
Mike Elgan [02:17:20]:
I'm sorry, Give us a continent. Europe and Latin America.
Jeff Jarvis [02:17:25]:
Okay.
Mike Elgan [02:17:27]:
But, but we also, another thing that's, that's been changing lately is we do increasing numbers of private experiences. So for example, we've had a few people who wanted to have a big birthday, let's just say it's their 50th birthday. And, and they invite all their friends and, and we do a gastronome experience, but it's closed to the public. It's just for the group of, you know, six people, eight people, 10 people, 12 people, whatever it is. And we do a custom thing and that's, that's something, that's.
Jeff Jarvis [02:17:56]:
How long does that last usually?
Mike Elgan [02:17:58]:
Usually the same length. But if they. People have specific requirements, they want to make it a little longer, for example, or they want to incorporate maybe some.
Mike Elgan [02:18:08]:
Certain types of activities. Some people like to do hiking, for example, or we'll integrate that. We can tweak it around the edges. But you still meet our friends, you still have the, you still enjoy the world's most delicious food and wine and drinks and so on, but it can be customized. So it's, it's really fantastic. But yeah, please, please do subscribe to our newsletter. I think it's a lot of fun.
Leo Laporte [02:18:30]:
And don't go to learn how to eat caviar off your hand. No, no, you should do it like a civilized person. Off the navel of a beautiful Woman.
Mike Elgan [02:18:40]:
Yes, of course, of course. Everybody knows that.
Leo Laporte [02:18:43]:
And these tech pros apparently have never learned.
Mike Elgan [02:18:46]:
Yeah, yes.
Leo Laporte [02:18:48]:
Anyway, Mike, I'm thrilled we could have you on.
Mike Elgan [02:18:49]:
We appreciate it. Thank you.
Leo Laporte [02:18:51]:
It's a pick of the week time, so I guess I'll give you the pride of place here.
Mike Elgan [02:18:56]:
Okay, so I've been a frustrated email newsletter publisher since the 90s. And the reason it's frustrating is that there are many things out there after you send the email that will, will stop delivery.
Mike Elgan [02:19:13]:
There are spam filters, there's like, you know, the service you're associated with. I used to use mailchimp. Nowadays I use Substack and a couple of other services, Squarespace for various newsletters and I even gave up my newsletter at some point for maybe five or six years because I was so disgusted by the number of, of emails that didn't get through. So there is a product that I learned about recently. I think it's a relatively new product called Bare Metal Email. Now give you a caveat. This is not for casual individual use. This is great for a business.
Mike Elgan [02:19:44]:
It's expensive. I think the cheapest option is $300,000,000 a month. But if this is, you know, this is a reasonable expenditure on your marketing budget. If you have, if you're a publisher or whatever this. We were talking about how, you know, content creators can have a viable business with good journalism. This is one way to do it. You can move your deliverables up from just north of 50% up to north of 95% using the service, or so they say. And they, they claim 99% deliverability, which is hard to believe in the current climate.
Mike Elgan [02:20:20]:
But basically what they do is they have all these systems and they have it. Here's the AI angle. They have a great AI chat interface that walks you through the setup and how to, how to set up your own ip, have an unlimited addresses and so on. And this is just really a valuable service to get your voice heard if you're using email as a primary method of communication. So I, I think if you have a small business, you should check it out.
Leo Laporte [02:20:49]:
Bare metal email, BME and the address is baremetalemail.com.
Leo Laporte [02:20:58]:
Yeah, Steve Gibson talks about this from time to time because he does have a newsletter. Of course he is running his own server. Yes, he's smart enough to, Steve knows how to set it up. DKIM and SPF and all of the various.
Mike Elgan [02:21:10]:
He's doing the things that this service will do for you, for you.
Leo Laporte [02:21:14]:
Basically the main thing that they do is give you An IP address that's not on a block list, and that's really hard to do. You almost never can do it with your home Internet service provider. It's impossible because your IP address has very likely been blocked all over the place.
Jeff Jarvis [02:21:32]:
Right.
Leo Laporte [02:21:33]:
Good pick. Baremetalemail.com Jeff Jarvis, pick of the Week.
Jeff Jarvis [02:21:38]:
So I want to mention that I was in Austria and there's video of my conversation with Arne Wolf.
Leo Laporte [02:21:43]:
This is where you were last week?
Jeff Jarvis [02:21:45]:
Yes. Who's the Walter Cronkite of Austria on ors.
Leo Laporte [02:21:48]:
Wow.
Jeff Jarvis [02:21:49]:
And it was, it was a nice small gathering of media at the.
Jeff Jarvis [02:21:55]:
Median Gipfel or the European Media Summit.
Leo Laporte [02:21:58]:
Nice. And they. You did this in English because of course, they all speak English perfectly. Well. Yes, yes. Yeah, very nice. Well, I won't play it.
Jeff Jarvis [02:22:05]:
No.
Leo Laporte [02:22:05]:
Because I don't want Armin to pull it down, but no, he won't.
Jeff Jarvis [02:22:08]:
But that's fine.
Leo Laporte [02:22:09]:
It's on Jeff's.
Jeff Jarvis [02:22:10]:
And then I want to give you, Leo Laporte, some condolences. I know we were all rooting for you, but unfortunately you didn't make the list of the first Golden Globes Awards nominees for podcasts.
Leo Laporte [02:22:21]:
Really? They did podcasts.
Jeff Jarvis [02:22:23]:
They're doing them. So the nominees are.
Leo Laporte [02:22:26]:
They're all celebrities, aren't they?
Jeff Jarvis [02:22:27]:
Well, so what they did was Daily's not on it. Well, what they did was they didn't, they didn't honor native podcasters. They, they honored the TV and movie and radio people who, who, who were just doing celebrity. So I'm like, you'd Smartless? Are you kidding me? That whole show, it's hilarious because the people on it are really funny. The hosts are like really, really funny people. But it's all about, like bringing on celebrities. And so, like, how'd you get started? That kind of stuff. It's like, really? That.
Mike Elgan [02:22:38]:
Well, so what they did was Daily's not on it. Well, what they did was they didn't, they didn't honor native podcasters. They, they honored the TV and movie and radio people who, who, who were just doing celebrity. So I'm like, you'd Smartless? Are you kidding me? That whole show, it's hilarious because the people on it are really funny. The hosts are like really, really funny people. But it's all about, like bringing on celebrities. And so, like, how'd you get started? That kind of stuff. It's like, really? That.
Mike Elgan [02:23:06]:
That's a great idea.
Leo Laporte [02:23:09]:
I will give credit to Alex Cooper. Call her Daddy is a native.
Jeff Jarvis [02:23:12]:
That's a native one. Yes.
Leo Laporte [02:23:14]:
You know, she came out of nowhere and has had huge success.
Mike Elgan [02:23:17]:
Yeah.
Leo Laporte [02:23:18]:
So credit to her for that. But the rest of them either are from big name brands or big name celebrities. And that makes. I mean, that's what I'd expect.
Jeff Jarvis [02:23:25]:
And one more, Leo.
Leo Laporte [02:23:27]:
Yes.
Jeff Jarvis [02:23:28]:
Is. Is to give you ammunition which you always seem to need. About. About. No, AI is not useless. Yes, AI is important. Yes, it does amazing things. Line 138 Real time cricket sorting by sex.
Jeff Jarvis [02:23:41]:
Thanks to AI.
Leo Laporte [02:23:42]:
You know, anybody who thought it was easy to sort crickets by sex probably has never tried.
Jeff Jarvis [02:23:48]:
It's not easy. So this is. This is a device that they build. If you scroll down, you'll see the device. So there's a bridge the crickets walk across, and they get photographed. And then there's a Raspberry Pi.
Jeff Jarvis [02:24:01]:
Why would you want this? Because crickets are a source of food. They'll soon be on GastroNomad. But you want to breed them, and then to breed them, you've got to separate the boy crickets from the girl crickets.
Leo Laporte [02:24:11]:
Yeah, it's the old Noah's ark problem.
Jeff Jarvis [02:24:12]:
Yeah, exactly. It's the old hot dog. No, hot dog.
TWiT.tv [02:24:15]:
It's the old hot dog. No hot dog.
Leo Laporte [02:24:16]:
Hot dog.
Mike Elgan [02:24:16]:
No hot dog.
Leo Laporte [02:24:17]:
What is it that distinguishes. Just out of curiosity.
Jeff Jarvis [02:24:21]:
So I showed this to Jason, and if you look at the pictures of. See if you can get it. Go, go down. Okay, here's male and female.
Leo Laporte [02:24:26]:
Here's the pictures of a male.
Jeff Jarvis [02:24:28]:
And basically, it looks like the female has a penis.
Leo Laporte [02:24:31]:
Has a tail.
Jeff Jarvis [02:24:32]:
Yeah.
TWiT.tv [02:24:32]:
So it's hot dog, hot dog.
Jeff Jarvis [02:24:34]:
Yeah. But it's reversed. Yeah.
Leo Laporte [02:24:36]:
I think I could do this. It's pretty simple.
Mike Elgan [02:24:39]:
Based on my experience with. With cartoons, I would have assumed that the females have top hats. Eyelashes.
Leo Laporte [02:24:44]:
Oh, yes, that's right.
Mike Elgan [02:24:46]:
But.
Leo Laporte [02:24:46]:
And the males have top hats.
Mike Elgan [02:24:48]:
Of course, if ever there was an IG Nobel prize nominee, this would be it.
Leo Laporte [02:24:53]:
But, you know, it isn't that ignoble.
Jeff Jarvis [02:24:54]:
Have you ever had insect edibles on gastronomed?
Mike Elgan [02:24:58]:
Oh, of course. Yeah, we had. Yeah, we had the grasshoppers. And in fact, in Oaxaca, you know, it's not for everybody. It's optional for people to have it, but we actually had the most excellent grasshoppers, which were raised in Earth herb gardens, and they. They absorb the flavors of the herbs they eat. And we. We caught them ourselves.
Leo Laporte [02:25:20]:
This was at chef Alex Lindsay's Amazing.
Leo Laporte [02:25:26]:
Country, and they took us out to capture them. In fact, I'm gonna find it because I have it here somewhere.
Mike Elgan [02:25:32]:
Yeah.
Leo Laporte [02:25:32]:
Picture of Lisa Laporte hopping.
Mike Elgan [02:25:34]:
She was into it. She was really into it.
Leo Laporte [02:25:37]:
Yeah. Catching grasshoppers for our dinner.
Mike Elgan [02:25:41]:
Yeah, that's a major part of the Oaxacan diet. Is not a major part of the diet, but it's a major popular food. It's a snack. Kids love them there.
Leo Laporte [02:25:49]:
They roast them. They're crunchy. They're not. And they put them in the moles and other stuff. But there was also. Wasn't there a little dish of grasshoppers on the table that you could just.
Mike Elgan [02:25:58]:
We. We sprinkled them on top of guacamole.
Leo Laporte [02:26:01]:
That's what we did.
Leo Laporte [02:26:03]:
With the guacamole. If you get those grasshoppers, nothing like a guacamole. Grasshopper.
Mike Elgan [02:26:07]:
Yeah. And you go to the market there, which we did, and you see, you see. You know, they have 20 different kinds, and they're not sorted by AI or sex, but by the type. And they have different flavors and so.
Jeff Jarvis [02:26:18]:
On, but it's an important source of protein, so.
Mike Elgan [02:26:21]:
Oh, yeah. It's really, really good for you.
Leo Laporte [02:26:23]:
But they don't eat enough of them, really, to say this is a boost in the protein content. I think it's more. What's.
Mike Elgan [02:26:29]:
What's great. Typically, if you go to, like, a soccer game or something like that, that they. People have little plastic baggies with peanuts and grasshoppers and chilies, and so it's spicy and crunchy and it's.
Leo Laporte [02:26:40]:
That sounds.
Mike Elgan [02:26:41]:
Now, that is way healthier than what you'd get, like, at a baseball game if you go to, you know.
Jeff Jarvis [02:26:45]:
Yeah.
Jeff Jarvis [02:26:47]:
Hot dog. No, hot dog.
Leo Laporte [02:26:48]:
Yeah, hot dog. Well, that's funny. I cannot, unfortunately, find the pictures of us leaping around looking for grasshoppers, but trust me, it was great. We didn't really know what we were doing, to be honest, but we managed to do it. They said, go out in the herb garden, here's some baggies, fill them with grasshoppers. Okay. And later we found out it was food.
Leo Laporte [02:27:15]:
It was a lot of fun. Did you. Could you tell the ones that were grown in basil versus the ones that were grown in oregano? I couldn't.
Mike Elgan [02:27:21]:
They were all together. So you'd. You know, it's.
Leo Laporte [02:27:24]:
It's hard.
Mike Elgan [02:27:25]:
It's hard to tell, but I have. I have tasted the difference between different types of grasshoppers and their night and day. I've. I had some more recently that were just so good, and I don't know how they prepared them, but.
Leo Laporte [02:27:38]:
Great.
Mike Elgan [02:27:38]:
Yeah.
Leo Laporte [02:27:39]:
Grasshoppers.
Jeff Jarvis [02:27:40]:
So, Mikah, last question here. I know we're gonna. I'm not fond of liver, for example. Huh. Right. Is there anything you won't eat?
Mike Elgan [02:27:51]:
Won't.
Mike Elgan [02:27:53]:
Not really. If it's something that people eat in a place, I'll eat it no matter what. So I would even eat haggis if I went to Scotland. Really?
Jeff Jarvis [02:28:03]:
Newmark loves haggis. Loves it.
Mike Elgan [02:28:05]:
Yeah. A lot of people really love it. That's probably a bad example, but there. There's some. There's Some extreme things. I mean, I've eaten termites in. In Kenya. I've eaten all kinds.
Mike Elgan [02:28:15]:
Well, there's a. Actually in Oaxaca, there's a place where you can get a tostada that has not only grasshoppers, but also ants and those little worms that they put in mezcal. They roast those and put them on the tostada as well. And it's actually really delicious.
Mike Elgan [02:28:32]:
Antags is amazing.
Jeff Jarvis [02:28:33]:
You've answered that. Thank you.
Leo Laporte [02:28:34]:
And you thought you were sophisticated eating that bump off your fist.
Leo Laporte [02:28:39]:
Mike Elgin. Thank you so much.
Jeff Jarvis [02:28:40]:
Mike.
Leo Laporte [02:28:40]:
Machinesociety AI gastronoma.net Great to have you. Thank you, Mr. Jeff Jarvis. Buy the books.
Jeff Jarvis [02:28:49]:
I forgot to show off. I'm wearing my look.
Leo Laporte [02:28:52]:
How dressy. It's like a Nehru jacket.
Jeff Jarvis [02:28:55]:
This is a Tyrolean jacket.
Leo Laporte [02:28:57]:
Oh, that's beautiful. I want one. That's great. That's beautiful. You know what? That is a good look. You should wear that from now on with the black turtleneck. I think that's a perfect look.
Jeff Jarvis [02:29:06]:
Yeah.
Mike Elgan [02:29:06]:
Yeah.
Jeff Jarvis [02:29:06]:
Little red there.
Leo Laporte [02:29:07]:
You. You look like one of those young Nazis in the Sound of Music.
Jeff Jarvis [02:29:12]:
No, no. Fun trop. I I.
Leo Laporte [02:29:15]:
That's what you look like. Yeah. He is the author of the Gutenberg Parenthesis and magazine. And we will next week be, I hope, reunited with both Jeff and Paris, which will be a lot of fun. Oh, let me check. I like Jeff.
Jeff Jarvis [02:29:27]:
What if they're eating caviar off a bump at the Consumer Reports party?
Leo Laporte [02:29:31]:
Oh, I doubt.
Jeff Jarvis [02:29:32]:
I have my doubts.
Leo Laporte [02:29:32]:
They're a little too sophisticated. It CJ Trowbridge will be our guest talking about AI sustainability and resiliency. He is a YouTuber.
Jeff Jarvis [02:29:43]:
I they are.
Leo Laporte [02:29:44]:
Think they are. Thank you. We appreciate all your time and thank you for joining us. We do this show Intelligent Machines every Wednesday right after Windows Weekly. That's 2pm Pacific, 5pm Eastern, 2200 UTC. You can join us on the show Watch live if you want in the Club TWiT Discord if you're a club member or on YouTube, Twitch, X dot com, LinkedIn, Facebook or Kick. We stream live, but you don't have to watch live on demand versions of the show available for download at the website TWiT.tv. There's audio there and video.
Leo Laporte [02:30:19]:
You can even watch the video stream right on the page. There's also a link there to the YouTube channel where you can watch the video. Easy to clip little segments, which is nice. And of course, the best way to get any of our shows. Subscribe in your favorite podcast client. You'll get it automatically the minute it's available. Paris said she found some good reviews that she will be reading.
Jeff Jarvis [02:30:41]:
Oh, yeah.
Leo Laporte [02:30:42]:
So leave us some fun 5 star reviews and maybe you'll get a dramatic reading for Paris.
Leo Laporte [02:30:48]:
Thank you everybody for joining us. Have a wonderful evening. We'll see you next week. Intelligent Machines. Bye. Bye. I'm not a human being.
Mike Elgan [02:30:58]:
Not into this animal scene.
Pliny the Liberator [02:31:01]:
I'm an intelligent machine.