TWiT+ Club Shows 740 Transcript
Please be advised that this transcript is AI-generated and may not be word-for-word. Time codes refer to the approximate times in the ad-free version of the show.
Leo Laporte [00:00:00]:
This is TWiT. So we were thinking that this episode of the AI User Group would be devoted to a little OpenClawing. It's not passé by any means now that Steinberger is working for OpenAI, and this is the year of the agent. I think that's pretty clear. Everybody and their brother is announcing new agentic harnesses. So I think this is a good topic for us. And Larry, are you going to be the expert?
Larry Gold (LrAu) [00:00:29]:
Yeah, I'll be— I will do as best as I can. But as I said, I've got two things to show. As I said, I wrote my homegrown one after spending a lot of time with Ubuntu and realizing it's got too many security flaws to really run my daily life. But I'll show you what I've done with it. I've got kind of a canvas that we can kind of play with. And if people throw a suggestion out, we can run and see what it does. I did put it on a better LLM, so we're not, you know, limited.
Larry Gold (LrAu) [00:00:58]:
Oh, you're using a local? You're using a local?
Larry Gold (LrAu) [00:01:00]:
No, I'm not using a local. I'll go through OpenRouter and use a real good one because my one slows down. I think that's nice.
Leo Laporte [00:01:06]:
I'd like to see how those do. A lot of people have said, I'm not sure I disagree with them, that really what everybody loves about OpenClaw is Claude, is 4.6.
Larry Gold (LrAu) [00:01:14]:
Yeah.
Leo Laporte [00:01:14]:
And I think to some degree that's the case. It's just Claude, you know, running without end. I love Claude. In fact, I said, you know, could I just tell you my appointments and you'd add them to Fastmail? And he said, yeah, let me write a little MCP server, a Fastmail MCP server, since Fastmail has a good API for it. And all you have to do is say, hey, remind me I've got an appointment at 3 o'clock, and it will add it to my Fastmail calendar. Same thing with contacts. So that was a very easy thing. A lot of people are saying even MCP servers are dead.
Leo Laporte [00:01:53]:
I bet you would disagree, Darren.
Darren Oakey [00:01:56]:
Well, I'm not a big MCP fan, but there are certain things that you need the MCP for, like when you want the server elsewhere, for instance.
Leo Laporte [00:02:07]:
Yeah, you taught me though that they, they munch up a lot of context.
Darren Oakey [00:02:11]:
They do.
Leo Laporte [00:02:11]:
Yeah.
Darren Oakey [00:02:12]:
Um, but the thing is, there are certain things that, yeah, you do if you're running locally. There's a lot more evidence that CLI is working better. But there are certain things like, for instance, the Zapier MCP; you couldn't really do that any better way, right? Because you're talking to a remote thing, so a plain RESTful API isn't enough there. Well, an MCP is just an API that's been standardized and designed for consumption by LLMs.
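A rough sketch of what that standardization looks like: MCP speaks JSON-RPC, and every tool a server exposes is described with a name, a human-readable description, and a JSON Schema for its arguments, so the model can discover how to call it. The `add_event` calendar tool below is a hypothetical example for illustration, not any real server's API.

```python
import json

# Rough shape of an MCP "tools/list" response. Each tool entry tells the
# LLM what the tool is called, what it does, and what arguments it takes.
tools_list_response = {
    "jsonrpc": "2.0",
    "id": 1,
    "result": {
        "tools": [
            {
                "name": "add_event",  # hypothetical calendar tool
                "description": "Add an appointment to the calendar.",
                "inputSchema": {
                    "type": "object",
                    "properties": {
                        "title": {"type": "string"},
                        "start": {"type": "string",
                                  "description": "ISO 8601 start time"},
                    },
                    "required": ["title", "start"],
                },
            }
        ]
    },
}

print(json.dumps(tools_list_response, indent=2))
```

The point of the schema is exactly what Darren describes: the client doesn't need out-of-band documentation to know how to talk to the tool.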
Leo Laporte [00:02:55]:
Right. I find that Claude understands APIs quite well and is really able to ingest them and manipulate them without one. But it's interesting: Claude wanted to write an MCP server for Fastmail instead of just interfacing directly with the API.
Larry Gold (LrAu) [00:03:11]:
So that, that's, I find that interesting.
Darren Oakey [00:03:13]:
But no, there are certain things, and this is the same with humans: if you're using an API, the first thing you always have problems with is authentication. Like, how do I connect? Where do I store my tokens and things? And that's one of the things that MCP sort of takes care of; it explains how to connect. That's always the hardest thing with an API.
Leo Laporte [00:03:41]:
Paul is telling us, and actually I think Steve mentioned this, somebody's written an MCP server for the Security Now transcripts. So you can query them. I think actually Steve demoed it, or maybe we just talked about it.
Darren Oakey [00:03:54]:
You can query, uh— oh no, I see here in the chat he's done a lot with it. He made the notebook with all the transcripts in it and stuff.
Leo Laporte [00:04:08]:
Oh, okay, okay, nice.
Larry Gold (LrAu) [00:04:11]:
I would say, in my eyes, if you look at something like Playwright to build a testing piece, their Playwright testing MCP is phenomenal. There are certain things that I think MCP is good for. Again, it's a tool in your tool belt, right? You're not going to take a screwdriver and try to hammer a nail. Some people do, but you try to find the right tool. You can't overload an MCP because of the context issues. But now people are using skills and putting the MCP calls inside the skill, which prevents that overloading of the context.
Leo Laporte [00:04:44]:
Oh, this— so Aramova is the guy who did the Intelligent Machines NotebookLM thing. So, Aramova, do you get all of them in there? I mean, how do you—
Larry Gold (LrAu) [00:04:56]:
Can we get them on?
Leo Laporte [00:04:58]:
Yeah, come on in.
Darren Oakey [00:04:59]:
Yeah, he was saying about coming on.
Larry Gold (LrAu) [00:05:02]:
Yeah.
Darren Oakey [00:05:02]:
And he's built a thing, like a Python program that scrapes all the transcripts.
Leo Laporte [00:05:10]:
Okay. And somehow—
Darren Oakey [00:05:12]:
Imports it.
Larry Gold (LrAu) [00:05:12]:
Yeah.
Leo Laporte [00:05:13]:
Tokens.
Anthony Nielsen [00:05:13]:
Yeah. I'm looking at the Notebook LM, it looks like, like, so each MD contains several.
Leo Laporte [00:05:19]:
Yeah.
Anthony Nielsen [00:05:20]:
Like 2 to 3 episodes.
Leo Laporte [00:05:23]:
Yeah. Episode 233 through IM 857. So is it all of the transcripts? Wow.
Darren Oakey [00:05:32]:
I actually downloaded all of the transcripts and I've been trying to train on them, because I wanted to learn about fine-tuning and everything. So I tried to fine-tune a Leo, but—
Leo Laporte [00:05:47]:
Has anybody played with Andrej Karpathy's new little research trainer?
Larry Gold (LrAu) [00:05:54]:
It is my weekend because I just got my M5 Mac yesterday.
Leo Laporte [00:05:58]:
Ah, so it's really— I don't fully understand it. People seem to be very excited about it. It's just a few hundred lines of Python.
Larry Gold (LrAu) [00:06:06]:
Yeah.
Leo Laporte [00:06:07]:
But it basically trains overnight. Just keeps training.
Larry Gold (LrAu) [00:06:13]:
It's Ralph Wiggum.
Leo Laporte [00:06:15]:
Oh, he's using Ralph.
Larry Gold (LrAu) [00:06:16]:
Okay. Well, think about it.
Leo Laporte [00:06:17]:
Oh, it's like Ralph. Yeah, yeah, yeah.
Larry Gold (LrAu) [00:06:19]:
It just constantly loops, and you have to give it the right goals to know whether it's getting better or worse. I spent time reading about it and trying to understand it before I play with it. If you're going to train something, you really need to give it a way to score itself, to figure out if it's getting better or not as it keeps training. Right. And it looked very interesting. It was like, okay, now I need to figure out what topic to train it on.
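The loop Larry describes can be sketched in a few lines: make a small change, score it, and keep the change only if the score improved. The objective below is a toy stand-in (distance to a hidden target value); in a real training run it would be an eval suite or loss on a held-out set.

```python
import random

def score(params):
    # Toy objective standing in for a real eval: higher is better,
    # with a peak at the hidden target value 0.7.
    return -abs(params - 0.7)

def train_loop(steps=1000, seed=42):
    rng = random.Random(seed)
    best = rng.uniform(0, 1)                # random starting point
    best_score = score(best)
    for _ in range(steps):
        candidate = best + rng.gauss(0, 0.05)   # small random tweak
        cand_score = score(candidate)
        if cand_score > best_score:             # keep only improvements
            best, best_score = candidate, cand_score
    return best, best_score

best, best_score = train_loop()
```

Without the scoring function there is no way to tell "better" from "worse," which is exactly the failure mode of looping blindly.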
Darren Oakey [00:06:44]:
Yeah, I've had a lot of failures in the training so far. It's not what I thought it was. And I think the best thing if you're doing something is you train a LoRA. You don't train the model because training the model is very expensive. But even training a LoRA—
Leo Laporte [00:07:00]:
I think that's what he's doing is training a LoRA. Maybe I'm wrong.
Larry Gold (LrAu) [00:07:02]:
Yeah.
Darren Oakey [00:07:03]:
Yeah. So it's basically putting an LLM layer on top of the other layer.
Leo Laporte [00:07:07]:
Right. So what's a LoRA as opposed to an LLM?
Darren Oakey [00:07:10]:
It's a small LLM in front of, like you might have 8 billion parameters or something. This is, you know how transformers work.
Leo Laporte [00:07:17]:
And it's specific task oriented? So like you'd have a LoRA for reading radiology X-rays.
Larry Gold (LrAu) [00:07:23]:
Yeah.
Darren Oakey [00:07:24]:
I thought it was going to be like a replacement for RAG, and I could put in all the transcripts and then ask Leo about random things. It doesn't quite work like that, because there's not a whole lot of tokens. So at the moment it's more stylizing it. It's like prosody. Yeah, exactly.
Leo Laporte [00:07:46]:
Yeah. Yeah.
Anthony Nielsen [00:07:47]:
I don't know about for LLMs, but LoRAs in terms of like video and image models, it's like you would create a Leo LoRA to put on top of it.
Leo Laporte [00:07:57]:
It's fine tuning.
Anthony Nielsen [00:07:59]:
Or yeah, extra information on top of another model, like an extension. So like an art style, a person, a thing, like you would have a LoRA for that.
Leo Laporte [00:08:10]:
Okay. Yeah. So it's post-training of some kind for the LLM.
Darren Oakey [00:08:16]:
Yeah. But for LLMs, when I tried to put the transcripts in, it was getting the style, but it wasn't getting a lot of facts, and you need a much bigger LoRA to actually ingest information.
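For the curious, the LoRA idea in numbers: instead of retraining a full d×d weight matrix W, you train two thin matrices B (d×r) and A (r×d) with rank r much smaller than d, and add their scaled product to the frozen base weights, W' = W + (α/r)·BA. A toy pure-Python sketch; the specific values are made up for illustration:

```python
# Toy LoRA update on a 4x4 layer with a rank-1 adapter.

def matmul(X, Y):
    rows, inner, cols = len(X), len(Y), len(Y[0])
    return [[sum(X[i][k] * Y[k][j] for k in range(inner)) for j in range(cols)]
            for i in range(rows)]

d, r, alpha = 4, 1, 1.0
W = [[1.0 if i == j else 0.0 for j in range(d)] for i in range(d)]  # frozen base (identity)
B = [[0.1], [0.2], [0.3], [0.4]]   # d x r, trained
A = [[1.0, 0.0, 0.0, 0.0]]         # r x d, trained

delta = matmul(B, A)               # rank-1 update, d x d
W_adapted = [[W[i][j] + (alpha / r) * delta[i][j] for j in range(d)]
             for i in range(d)]

# Only d*r + r*d = 8 numbers were trained, versus 16 for the full matrix;
# at real model scale the savings are enormous, which is why a small LoRA
# captures style but can't hold many new facts.
```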
Leo Laporte [00:08:30]:
Yeah, it's amazing what's going on. There's so much— it's very fertile right now. I can't think of another time it's been this fertile. Even in the early days of the internet, there was nothing like this.
Darren Oakey [00:08:44]:
In the really early days, we were all coding from the beginning, and when graphics came out, when, say, Wolfenstein came out, each of these techniques came out one at a time. So everybody knew everything; everybody was playing with the same one, and then you'd move on to the next one and everybody played with that. We'd all played with basically every piece of software out there. Whereas now it's at hyper speed.
Leo Laporte [00:09:09]:
It's at hyper speed. Yeah, and I think that's because coding used to take a long time. It was a much slower process. It's the difference between slowly smoking a brisket and microwaving it. We've really sped the whole process up. It's hard to keep up. It's very interesting. So do you want to join us, Aramova? You're welcome to.
Leo Laporte [00:09:37]:
And John, is John in here? I think John's in here. I see a Tim.
Anthony Nielsen [00:09:42]:
Yeah, they might be. Yeah, they might be confused in terms of like, they might think this is how to watch versus participate.
Leo Laporte [00:09:49]:
You don't go in there to watch, you go in there if you want to talk. You don't need to watch us that way. You can watch us on YouTube, Twitch, X, Facebook, LinkedIn, and Kick, the usual. And of course Discord.
Larry Gold (LrAu) [00:10:00]:
Yeah. The blog post I put up last weekend that I sent you guys was basically: if you remember the movie The Incredibles, with Syndrome, we had the comment that if everybody's super, nobody is. Right? And I compare the coding to that. And Darren, of course (this is why I love Darren), contradicts and jumps on me when I make statements like that, because it's good to have someone challenge you. But I kind of said it's making it hard to figure out who's a really great coder and who's not a great coder anymore.
Leo Laporte [00:10:32]:
Yeah, you know, I guess I'm not super, you're saying.
Larry Gold (LrAu) [00:10:35]:
Yeah, you know, um, Well, yeah.
Leo Laporte [00:10:43]:
In fact, I'm starting to see videos from— there was that guy who said, "I was a 10x coder. Now I'm just a nobody." Well, it's tough.
Larry Gold (LrAu) [00:10:57]:
I do think there is this notion or this change of— I remember when the internet came, everyone could make the front page. They could make web pages. Right? And suddenly everybody had web pages and everybody had certain things. So there definitely was this change. And MySpace did that.
Leo Laporte [00:11:16]:
That's a good point. MySpace was a little bit like that, where everybody got to— all of a sudden they were learning HTML and putting their own pages together, as ugly as they were. And so there was a little bit of a creative buzz, but this is something else. I don't know. Where do you guys keep up? How do you guys keep up?
Darren Oakey [00:11:45]:
Uh, I have my usual ones, but if I was to have one source of information, it's Future Tools, which is Matt Wolfe's blog.
Leo Laporte [00:11:51]:
Yeah, I have an RSS reader that has all of those good blogs in it, like Simon Willison and Matt and everybody, and I try to read those. And X: for some reason I can't do it on the desktop, but on X, at least on the iPhone, and maybe it's just something I get, I don't know, if I tap "For You," I can narrow the subjects down. So instead of getting all the crap... let's see if you can see it. You probably can't. Let me get my eyes out of the shot. Yeah, you can see it.
Larry Gold (LrAu) [00:12:25]:
Yeah, it's pretty good, yep.
Leo Laporte [00:12:26]:
So I can narrow the topics down to just AI, or maybe AI and science and technology, and then it becomes kind of a decent feed, because unfortunately there are still a lot of people who use X, including Andrej Karpathy. And you were mentioning Bob Martin; he posts there regularly. So I try to read X on a pretty regular basis as well, knowing that a lot of what I'm reading is crap.
Larry Gold (LrAu) [00:12:56]:
So part of my, you know, OpenClaw tool, my Perkyl tool, actually does what you guys are describing: it goes out, pulls all these RSS feeds, and then generates, literally... and if you want to share my screen for one second, let me go back to that page. I just got my afternoon newsfeed, because it's 5 o'clock. So it basically—
Leo Laporte [00:13:19]:
I got to do that. That's a good idea.
Larry Gold (LrAu) [00:13:20]:
Right. And it generates it. And it's funny, because this is the article I just saw, literally, in another message from a friend: there are all these people rushing to build OpenClaws in China.
Leo Laporte [00:13:30]:
Oh yeah, of course, China, yeah.
Larry Gold (LrAu) [00:13:32]:
It's big in China.
Leo Laporte [00:13:33]:
Are you summarizing these or is this the full feed and you're just using this?
Larry Gold (LrAu) [00:13:38]:
This is being summarized by OpenClaw.
Leo Laporte [00:13:41]:
That makes me a little nervous to summarize them. I kind of wanna read 'em.
Larry Gold (LrAu) [00:13:45]:
Yeah, well, no, I'll read it. Look, if I look at the summary and think, I've got to read this, I'll click on it.
Leo Laporte [00:13:50]:
I am doing that. My reader does that.
Larry Gold (LrAu) [00:13:52]:
Yeah, right. And in many cases I look at just the title. I don't even get to the bottom piece, because usually the title gives me enough information. And I get fintech, you know, finance news, and obviously technology news, vibe coding. I put in different topics and it searches those topics.
Leo Laporte [00:14:15]:
Someone is reminding me that you can use Nitter (not Twitter, but nitter.net) to turn any Twitter feed into an RSS feed. And that's a good way to—
Thomas Burnham (aramova) [00:14:27]:
oh, that's cool—
Leo Laporte [00:14:27]:
to stay off of Twitter or X and add Andrej Karpathy and others too.
Larry Gold (LrAu) [00:14:31]:
And then in OpenClaw there's something called Blogwatcher, which is one of the skills. I think you just type in "blogwatcher articles," and if you load those feeds into Blogwatcher, it'll actually bring those articles to the front and ask you if you want to read them. And again, this is one of the tools and skills, so you'll see what it actually does.
Thomas Burnham (aramova) [00:14:55]:
Nice.
Larry Gold (LrAu) [00:14:55]:
All right, it finds articles, and I have The Verge loaded and another one loaded. You could add whatever RSS feed you want to this. And if you want a summary, you say "blogwatcher summarize" and then give it the number, because there's usually a number next to them.
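A minimal sketch of what a Blogwatcher-style skill does under the hood, using only the standard library: parse an RSS feed and produce the numbered article list you'd pick from. The feed here is inline sample data; a real skill would fetch, say, The Verge's feed URL over HTTP first.

```python
import xml.etree.ElementTree as ET

# Tiny inline RSS 2.0 feed standing in for a real fetched feed.
SAMPLE_RSS = """<?xml version="1.0"?>
<rss version="2.0"><channel>
  <title>Example Blog</title>
  <item><title>First article</title><link>https://example.com/1</link></item>
  <item><title>Second article</title><link>https://example.com/2</link></item>
</channel></rss>"""

def list_articles(rss_text):
    """Return numbered (n, title, link) tuples, like a 'blogwatcher articles' listing."""
    root = ET.fromstring(rss_text)
    items = root.findall("./channel/item")
    return [(n, item.findtext("title"), item.findtext("link"))
            for n, item in enumerate(items, start=1)]

articles = list_articles(SAMPLE_RSS)
for n, title, link in articles:
    print(f"{n}. {title} - {link}")
```

Numbering the items is what makes the follow-up command ("summarize 2") possible: the number is the only handle the user needs.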
Leo Laporte [00:15:08]:
Let me say hello to Thomas Burnham, who's just joined us. Hi, Thomas.
Thomas Burnham (aramova) [00:15:13]:
Good evening or afternoon, wherever you are.
Leo Laporte [00:15:15]:
Tell us about yourself.
Thomas Burnham (aramova) [00:15:17]:
Oh, I've been a, let's see, a fan of you guys since The Screen Savers. I remember I was one of the early geeky kids who was trying to get DSS satellite where I was and figure out how to get the best signal to get The Screen Savers feed back in the—
Leo Laporte [00:15:37]:
and is that an oscilloscope behind you? What is that?
Thomas Burnham (aramova) [00:15:40]:
Yeah.
Leo Laporte [00:15:42]:
That gives you a little geek cred.
Thomas Burnham (aramova) [00:15:44]:
Yeah. I'm actually doing a few different projects. I don't want to hijack you.
Leo Laporte [00:15:50]:
Are you more of a hardware hacker than this?
Thomas Burnham (aramova) [00:15:54]:
Professionally, I'm an engineering program manager, but background's hardware, hardware hacking, things like that.
Leo Laporte [00:16:02]:
Tell us what you do with AI.
Thomas Burnham (aramova) [00:16:07]:
I did put together— I remember one of the episodes you did with Steve Gibson earlier when—
Leo Laporte [00:16:15]:
He's really come around, hasn't he? Like last week, all of a sudden he's like, I love this stuff.
Thomas Burnham (aramova) [00:16:22]:
So it's really fun, because one of those early episodes was him discussing how he'd be interested in how LLMs would track how his personality, or his perception of these technologies, has changed over the years. And I had a lot of trouble getting things like NotebookLM to actually take the entire corpus of Security Now. Steve has done just an absolutely amazing job of getting fantastic transcripts for everything.
Leo Laporte [00:16:53]:
Unlike ours, his transcripts are really well done by a human.
Thomas Burnham (aramova) [00:16:58]:
They're, yeah, I think it was Stacy, he said, or I forget who does it.
Leo Laporte [00:17:02]:
Yeah, she's a farrier, a horseshoer, and she lives out in the middle of nowhere. And that's why he does the 8-kilobit versions of the show, or whatever they are, 16-kilobit, so she can download them.
Thomas Burnham (aramova) [00:17:16]:
Beautiful.
Leo Laporte [00:17:18]:
Elaine Ferris.
Thomas Burnham (aramova) [00:17:19]:
Elaine, yes.
Leo Laporte [00:17:20]:
Yeah, and she's really— but it's easier because unlike our other shows, there's only two voices ever on Security Now. So the—
Anthony Nielsen [00:17:29]:
and most of it's written out ahead of time too.
Leo Laporte [00:17:31]:
And she also has a script.
Larry Gold (LrAu) [00:17:34]:
Yeah.
Leo Laporte [00:17:34]:
Our other shows used to be done by humans, not very well; now they're done by AI using— you said 4.0?
Anthony Nielsen [00:17:42]:
It's using an old— Well, no, that 4.0 is like the content after, but I don't know what they're using. And we use a service called Castmagic.
Leo Laporte [00:17:52]:
And it has a hard time distinguishing different voices and stuff. There was a furious battle in the TWiT forums over why don't you do double-enders with individual tracks, and then you could send the transcription software the individual voices and all that. You know how many people download transcripts besides you, Thomas?
Larry Gold (LrAu) [00:18:12]:
So, yeah.
Thomas Burnham (aramova) [00:18:13]:
Yeah, I was actually reading—
Leo Laporte [00:18:14]:
So you're Aranova. By the way, I didn't put two and two together. Now that you mention you're doing this NotebookLM thing: you're the Aranova we've been referring to.
Thomas Burnham (aramova) [00:18:22]:
Okay. Yes, sir.
Anthony Nielsen [00:18:23]:
Well, just to go on a tangent, like, yes, we do have the stems. We could use Adobe Premiere to do the transcripts, but that's going to delay putting out the shows because we have to run the whole thing.
Leo Laporte [00:18:35]:
It's a low priority because not a lot of people read them. It's mostly for SEO, mostly so that the spiders can crawl it, less so for individuals. But that's going to change. I'm sure in the next year our transcripts will get very good. We'll probably use Whisper or something.
Anthony Nielsen [00:18:50]:
It gets confused with if Benito comes in late in the episode, it doesn't know it's a new person. It's like, I think it's that other person from earlier.
Leo Laporte [00:18:59]:
And we all kind of sound the same anyway.
Thomas Burnham (aramova) [00:19:03]:
So, well, the fun part is going back and getting an LLM to accept that within.
Leo Laporte [00:19:08]:
Well, so answer that first question I had.
Larry Gold (LrAu) [00:19:11]:
How did you—
Leo Laporte [00:19:11]:
are all of those transcripts in NotebookLM?
Thomas Burnham (aramova) [00:19:14]:
Every single one.
Leo Laporte [00:19:14]:
So what is the limit? How many documents?
Thomas Burnham (aramova) [00:19:18]:
So NotebookLM has some interesting limits the way that it works. Currently you can have up to 500,000 characters.
Leo Laporte [00:19:27]:
So you must have more than that. Are you boiling it down somehow?
Thomas Burnham (aramova) [00:19:30]:
I wrote some Python, and I vibe-coded it into Go: a system that goes through and parses every year's transcripts with a bunch of really complicated regex, because the format shifts from year to year, timestamps and word stamps. That was really confusing NotebookLM for a while. When Steve started talking about it is when I started working on this. I've been working on this for several months on and off.
Thomas Burnham (aramova) [00:20:01]:
In between that, Paris started mentioning things on Intelligent Machines, and I came back to it and spent the weekend on it. I grabbed all the transcripts (thank you, Darren, for the API; that's how I was able to grab those). Then I spent time segmenting the transcripts into different categories that the RAG can piece apart and then develop timelines from. So a lot of it came down to regexing incorrect timestamps, fixing words and paragraphs that piece together, giving it the ability to follow logic flows, and correctly vectorizing the paragraphs in sizes that NotebookLM can piece together.
Leo Laporte [00:20:48]:
So are you tokenizing them? How are you getting them down?
Thomas Burnham (aramova) [00:20:51]:
Oh no, no, no, it's, it's just, uh, using, uh, testing to see what works better. Um, I, I've created like 60 different notebooks, uh, and tried it.
Leo Laporte [00:21:02]:
So there's no one notebook that has all the transcripts of Security Now?
Thomas Burnham (aramova) [00:21:06]:
No, there is. Yeah, yeah, every single, every single—
Leo Laporte [00:21:09]:
how do you do that? I'm still puzzled.
Thomas Burnham (aramova) [00:21:14]:
So when you look at those markdown files: it takes all the transcripts from a single show, or from a group of shows, and breaks them up by year. I broke them up; 2025 might have them. There you go. You can try the TWiG episodes one, that first link there. 2014: if you click on that very first one in the sources, that one works perfectly. The source guide is going to open up just the raw markdown, and that's going to be all of your transcripts from episodes 805 through 819.
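The grouping he describes, several episodes per markdown source with each source kept under NotebookLM's quoted 500,000-character limit, amounts to a simple greedy packing. A sketch, with a tiny budget so the example stays readable:

```python
# Group per-episode transcripts into source files that each stay under a
# per-source character budget (NotebookLM's limit was quoted as 500,000
# characters; the budget is shrunk here to keep the example tiny).
def pack_episodes(episodes, budget):
    """episodes: list of (episode_id, text). Returns lists of episode ids."""
    groups, current, used = [], [], 0
    for ep_id, text in episodes:
        if current and used + len(text) > budget:
            groups.append(current)          # close the full group
            current, used = [], 0
        current.append(ep_id)
        used += len(text)
    if current:
        groups.append(current)
    return groups

# Seven fake 40-character "episodes" numbered 805..811, packed under 100 chars.
episodes = [(n, "x" * 40) for n in range(805, 812)]
groups = pack_episodes(episodes, budget=100)
```

With a real 500,000-character budget the same loop naturally yields the "2 to 3 episodes per markdown file" that Anthony observed.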
Leo Laporte [00:21:54]:
You've already cleaned these up?
Thomas Burnham (aramova) [00:21:55]:
Cleaned them all up, yes. Every episode of Intelligent Machines, Security Now, TWiG, and TWiT has been processed through here. I put the Go and the Python up on my GitHub. Happy to share.
Leo Laporte [00:22:09]:
Is your GitHub Aranova?
Thomas Burnham (aramova) [00:22:11]:
Aramova, yes.
Leo Laporte [00:22:12]:
Yes, Aramova.
Thomas Burnham (aramova) [00:22:13]:
A-R-A-M-O-V-A.
Leo Laporte [00:22:14]:
Okay.
Thomas Burnham (aramova) [00:22:16]:
Yeah. This actually works really well. You can see some of those screenshots. Yeah, you can see some of those that were generated by the—
Leo Laporte [00:22:26]:
Oh, this is great.
Thomas Burnham (aramova) [00:22:28]:
Yes.
Leo Laporte [00:22:29]:
That's where these came from.
Thomas Burnham (aramova) [00:22:32]:
Now, one of the downsides to the way that NotebookLM shares things is, I can share the chatbot, but I can't share the ability to generate things unless I add you to the permissions. So if someone wants permissions, let me know, and then you can generate these at will.
Leo Laporte [00:22:48]:
I had no idea this came from NotebookLM. That's hysterical.
Thomas Burnham (aramova) [00:22:52]:
Yes, we do still see some—
Leo Laporte [00:22:54]:
I can't believe it figures this out. Bland, bland, bland forever. Obviously there's a little error there, but: "one second per week because the server or malls contain enough metal to block the signal."
Larry Gold (LrAu) [00:23:09]:
I, I—
Leo Laporte [00:23:09]:
oh my God, the accordion segment. Now that's actually interesting that it mentions that. That's from the screensavers. That is, that is ancient history, but I must have talked about it.
Thomas Burnham (aramova) [00:23:20]:
Yes, so you can actually go back into it and say, hey, tell me about the accordion segment, and it will find it and cite which episode it was and where the timestamp was. It's just a matter of giving it good information, and that's where we run into a lot of problems with LLMs: garbage in, garbage out, right?
Leo Laporte [00:23:40]:
And then Jeff— so Jeff at some point mentioned that steaming-coffee trick; that's why that got in there. And this is really old; Jeff hasn't been in Vienna, Sydney, or Brussels in ages. It's funny. So you really did get all the episodes; I mean, I can tell you got all the episodes.
Thomas Burnham (aramova) [00:23:57]:
Yeah. And while you're at it: I actually made a Leo bot as well. I fed it everything that I could find, 38 million words of Leo Laporte, and set a master.
Leo Laporte [00:24:10]:
Leo's evolving, is that it? Which one is it?
Thomas Burnham (aramova) [00:24:12]:
Here, let me find you and I'll link the Leo bot so you can get answers back in the tone of Leo, which is interesting.
Leo Laporte [00:24:18]:
Oh, this is the one Paris wanted you to do, my evolving AI views.
Thomas Burnham (aramova) [00:24:23]:
Yes, I think that was one of them.
Leo Laporte [00:24:27]:
In 2014, I said AI is BS.
Larry Gold (LrAu) [00:24:34]:
Huh.
Leo Laporte [00:24:36]:
Actually, I think I was quoting Elon there, but that's okay. Interesting. Yeah, so I wouldn't call this gospel, but then, I know the actual truth. So it's interesting.
Thomas Burnham (aramova) [00:24:55]:
Wow.
Leo Laporte [00:24:56]:
Yeah, I'd like to— I'd love— so you just put a link in the Discord, did you?
Thomas Burnham (aramova) [00:24:59]:
Yeah, I just put it.
Leo Laporte [00:25:00]:
All right, let me find that. This is, this is going to be in my voice, huh? So can we, uh, is there some way we could put this in as AI Leo in the Discord, Anthony? No, probably not, huh?
Anthony Nielsen [00:25:15]:
Yeah, it's a little complicated.
Leo Laporte [00:25:16]:
Yeah.
Anthony Nielsen [00:25:17]:
Although, I mean, I've seen there's like a NotebookLM MCP. Like you could—
Leo Laporte [00:25:26]:
Oh really?
Anthony Nielsen [00:25:26]:
So you could tie—
Thomas Burnham (aramova) [00:25:28]:
You could like—
Larry Gold (LrAu) [00:25:28]:
Okay.
Anthony Nielsen [00:25:28]:
Some people use NotebookLM as like some like RAG system for their Claude code in that way.
Leo Laporte [00:25:35]:
Okay.
Thomas Burnham (aramova) [00:25:36]:
I've heard it's come across as a Vertex option as well that you can use it through an API now. I haven't tried that.
Anthony Nielsen [00:25:44]:
Oh really?
Thomas Burnham (aramova) [00:25:45]:
Yeah. So, um, ask it how Leo feels about, uh, AI-enabled smart gadgets.
Leo Laporte [00:25:59]:
I should just dictate this. I keep forgetting I have that capability. Reading through pages, digging into details, reading your inputs.
Thomas Burnham (aramova) [00:26:10]:
So this is why it would be a little difficult.
Leo Laporte [00:26:12]:
Yeah, it's too slow.
Thomas Burnham (aramova) [00:26:14]:
Yeah, yeah. Even with, uh, the faster Claude token.
Leo Laporte [00:26:18]:
This was Darren. Darren's been working on a bot that we could talk to in Intelligent Machines, and you got—
Thomas Burnham (aramova) [00:26:25]:
you got—
Leo Laporte [00:26:25]:
you said you got the latency down to a couple of milliseconds or something, right?
Darren Oakey [00:26:29]:
Well, yeah, I mean, it depends, but what it does is follow along behind without talking, and then it looks for its name and chimes in when you say it, and it does that pretty quickly. But there have been a few discoveries recently; people are talking about lower-latency stuff, so I've got to play with them.
Leo Laporte [00:26:54]:
Yeah, this is good. And it was nice, it has all the references, so it has the actual quotes.
Larry Gold (LrAu) [00:27:03]:
Yeah, that's one of the reasons I like NotebookLM when I'm doing any kind of writing or any kind of research, because you put all your data in there and you get everything, and then you can click on the links and really confirm that that's what it was there.
Thomas Burnham (aramova) [00:27:19]:
And so beyond it being fun for playing, you know, TWiT trivia on the weekends (I don't recommend using this, because it's completely cheating), realistically, something Paris had mentioned was interesting to me, and I had actually gone through this a couple of years ago when I moved to a new town. I'm in North Jersey and we have, you know, the local towns and stuff. And I had gone ahead—
Leo Laporte [00:27:51]:
like your tagline.
Thomas Burnham (aramova) [00:27:54]:
I still don't understand. So I went ahead and got 17 years of PDFs from the local town, all of their notes.
Leo Laporte [00:28:09]:
Oh wow.
Thomas Burnham (aramova) [00:28:10]:
They were all PDFs, but scanned, so you couldn't copy text out of them. So I used PaddleOCR, which is an open-source OCR that can scan the images off of PDFs, and set it up. It's essentially an AI, and it runs off of a 3090. So I let it run on 17 years of documents on a 3090 for about two months, and it converted all of those PDFs into Markdown files, which I fed into an LLM. So now I can search 17 years of my borough's notes and have conversations about all the city politics.
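The batch job he describes (walk a directory of scanned PDFs, OCR each one, emit a Markdown file per document) can be sketched like this. `ocr_pdf` here is a hypothetical stand-in for the real engine; wiring up PaddleOCR itself is omitted, since the model setup is the heavy part, and passing the OCR step in as a function keeps the sketch testable.

```python
from pathlib import Path

def convert_all(pdf_dir, out_dir, ocr_pdf):
    """OCR every *.pdf in pdf_dir into a Markdown file in out_dir.

    ocr_pdf: callable taking a Path and returning extracted text
    (in practice, a wrapper around an OCR engine such as PaddleOCR).
    """
    out_dir = Path(out_dir)
    out_dir.mkdir(parents=True, exist_ok=True)
    written = []
    for pdf in sorted(Path(pdf_dir).glob("*.pdf")):
        text = ocr_pdf(pdf)                       # the slow, GPU-bound step
        md_path = out_dir / (pdf.stem + ".md")
        md_path.write_text(f"# {pdf.stem}\n\n{text}\n", encoding="utf-8")
        written.append(md_path)
    return written
```

Once the text lives in Markdown, the LLM or RAG step never has to touch the scanned images again, which is exactly why the two-month one-time OCR run pays off.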
Leo Laporte [00:28:53]:
See, that's really useful. That's open government. That's really, that's really cool.
Thomas Burnham (aramova) [00:28:58]:
Yes. And for people like Paris, you can take PaddleOCR—
Leo Laporte [00:29:02]:
You've been in touch with her because she really needs to do this. She has all these PDFs she's trying to analyze.
Thomas Burnham (aramova) [00:29:08]:
I have not, but I'd be happy to walk her through the workflow. It's easy, but it's super useful, because so much information is locked in these PDFs, and it's getting lost: you can't query it, you can't search it, and you can't find a good way to feed it into an LLM without wasting huge amounts of resources trying to get the LLM to do the OCR itself. Offload that to some custom AIs that do the OCR-to-Markdown conversion; then you can clean up your Markdown and feed it into a RAG, plus a refinement pass if you can get one. That's how you can get massive amounts of information from memory in the personality style that you're looking for.
Leo Laporte [00:29:52]:
Oh, Darren's already been here. Darren gave you a star. I'll give you another star. I'm clicking the wrong thing. Here's the star.
Thomas Burnham (aramova) [00:30:03]:
Yeah. So I've done some tweaking. You've actually inspired me to start tinkering with Claude code and I am impressed. I was originally a Gemini fan.
Leo Laporte [00:30:17]:
So try it with Turbo Pascal. You might find some answers.
Thomas Burnham (aramova) [00:30:19]:
I might. I might actually do that.
Anthony Nielsen [00:30:23]:
I might have to spin up my—
Darren Oakey [00:30:23]:
I have a Turbo Pascal 1.0 for CP/M book somewhere.
Leo Laporte [00:30:29]:
Turbo Pascal was an amazing, amazing product.
Darren Oakey [00:30:31]:
I loved it.
Leo Laporte [00:30:32]:
Mind-boggling. I've interviewed Philippe Kahn many times, actually. In fact, we're due to get him back on. He doesn't code anymore, but he actually did an interesting thing when his kid was born many years ago. He wanted to tweet a picture of the kid. There was no tweeting at the time, but there was some sort of way of posting it. And so he really kind of invented this, uh, internet-connected camera and patented it.
Leo Laporte [00:30:54]:
And now, uh, I still believe he gets royalties for every, uh, cell phone camera sold. So he's done quite well for himself. And then he also, uh, has a product; I had it in my bed for a while, so I think Beautyrest or somebody sort of licensed it. There was a little paddle that goes under your bed and measures your sleep, using, I think, a related technology that he'd invented. But now mostly he sails on the bay in very fast little sailboats with a whole team of people cranking furiously. I don't want to usurp your time too much. I didn't realize we had Aramova on here, and we've kind of gone down a rabbit hole. But Larry, I know you wanted to do some more stuff.
Larry Gold (LrAu) [00:31:43]:
No, no, I'm enjoying this. You've got to understand, part of this is I like learning as much as I like, you know, speaking.
Leo Laporte [00:31:51]:
This is what I love this user group for: it is crazy out there, and it's really nice and reassuring to have some people you can sit down and talk to and say, yeah, what are you doing? What are you seeing? And to share the overwhelm a little bit, too.
Larry Gold (LrAu) [00:32:10]:
Well, as somebody said, you know, group think is wonderful, because you have enough of us around to play with different things, have different opinions about it, and show what we're doing with it. To me it spawns, you know, hundreds of ideas, right? Even when you interview, like after watching Guy's interview on Wednesday night, I was thinking, oh my God, that's a couple things I didn't think about. I'm still trying to convince my friends to use Signal. That's the, you know—
Leo Laporte [00:32:45]:
Yeah, good luck.
Larry Gold (LrAu) [00:32:46]:
The whole challenge about this stuff.
Leo Laporte [00:32:47]:
I will, Thomas, give you Paris's Signal. You probably already have it 'cause she gave it out.
Thomas Burnham (aramova) [00:32:53]:
Oh no, I have not seen that. I'd love to get that.
Leo Laporte [00:32:56]:
That's probably the best way to reach her.
Thomas Burnham (aramova) [00:32:59]:
I've been trying to get my entire family to switch over.
Leo Laporte [00:33:02]:
Haven't we all?
Thomas Burnham (aramova) [00:33:03]:
It's like Sisyphus, but it's worth the fight.
Leo Laporte [00:33:07]:
That damn stone keeps rolling back. So Larry, show it to us a little. Since Thomas mentioned OpenClaw, he's a little interested. Give us a little OpenClaw-fu here.
Larry Gold (LrAu) [00:33:19]:
Okay, so what you saw from OpenClaw, what I did before, was some of the summary stuff that it does. And this is really the chatbot interface that they have. And you can see this is the thing it dumps to me, the mail or the Telegram that it sends. But honestly, one of the things that's cool is what I tell it to do. Actually, I told it to create 3 agents.
Leo Laporte [00:33:43]:
Make the text a little bigger right now, if you will, so we can read it. Yeah, yeah, you could do that. Yeah, that's good.
Larry Gold (LrAu) [00:33:51]:
Yeah.
Leo Laporte [00:33:52]:
Now I can read it.
Larry Gold (LrAu) [00:33:52]:
I can read it. Yeah. But what you can do is you can tell it to create some agents or spawn some agents and connect the agents together. These are the three that I like to create when I'm doing something, which is a web search which says, hey, go out and search the web and do something, then do a deep think, and then write the code.
Leo Laporte [00:34:13]:
But they are serialized, they're not parallel.
Larry Gold (LrAu) [00:34:17]:
You could run different agents in parallel if you want to, depending on what you want to do. For me, most of the time I want to build something. I want to sit there and say, here, I have an idea: go search it, come back with stuff, deep think about it, and then code it. And I think very progressively. Remember we talked about this: I'm a big fan of spec kit. I'm a big fan of going in logically, and it definitely comes from years of writing code that way. Even in agile, you still have the idea, then you deep think about it, and then you finally code the smaller subset and piece of it, right? And you can get these things to do those pieces, right? And I really spend, you know... my whole thing is I'm trying to become a better person.
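Larry's three-agent chain, web search, then deep think, then code, each feeding the next, is just a serial pipeline. A minimal sketch, with stub functions standing in for the real agent calls:

```python
def web_search(idea):
    # Stub: a real agent would hit the web and return research notes.
    return f"notes on '{idea}'"

def deep_think(notes):
    # Stub: a real agent would reason over the notes and produce a plan.
    return f"plan from [{notes}]"

def write_code(plan):
    # Stub: a real agent would turn the plan into working code.
    return f"code implementing [{plan}]"

def run_pipeline(idea, stages=(web_search, deep_think, write_code)):
    """Run the agents serially; each stage consumes the previous output."""
    out = idea
    for stage in stages:
        out = stage(out)
    return out

result = run_pipeline("track my sleep in a dashboard")
```

Because the stages are just a tuple, parallelizing or reordering agents, as he mentions is possible, would mean changing the driver rather than the agents themselves.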
Larry Gold (LrAu) [00:35:01]:
I try to build these personal assistants to help me do better at everything, right? The notion of the news is, as you said, how do you keep up with this stuff? Well, I have to build the tools to do that. Um, I'm going to flip to my dashboard really quickly, because on my personal one it built me this personal dashboard. Let me blow this up a little more. And really what it does is track how I try to improve myself, whether it's physically, mentally, or spiritually, right? It tracks each day whether I've done a workout; my eating, whether I was healthy or unhealthy, and you can see the blanks where I'm unhealthy; a note on what I'm grateful for; my hours of sleep, and you can see a couple nights I didn't really sleep that well. It's supposed to track my random acts of kindness. I haven't done a lot of those.
Larry Gold (LrAu) [00:35:49]:
My values, I haven't done a lot with; my energy level. And I can actually see, believe it or not, my score. You can see how it's tracking what I do, and you can see when I have good days and bad days. And I completely interact with this via Telegram: I send it messages and tell it how I ate or didn't eat. I didn't write a line of this code. And in fact, the whole concept of this was me asking it: I want to be a better person. I want to be a better friend, a better parent, a better son, right? Better all around. And it said, well, let's track some things.
Larry Gold (LrAu) [00:36:27]:
And it came back with suggestions, and it started building these pieces to track it. And then after we got some of these pieces right, it was like, okay, if you want to eat healthier, what are your recipes? I asked it, what could I do? It said, well, how about we read Instagram recipes and make a recipe list? Basically, I can give it an Instagram link, and it built me a whole bunch of recipes that I could follow, and then I can add them to a shopping list. If I add multiple recipes to the shopping list, I can go in and look at it, and it aggregates everything I need to shop for.
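The aggregation step he describes, several recipes in, one merged shopping list out, reduces to summing quantities per ingredient. A minimal sketch, with the recipe data invented for illustration:

```python
from collections import defaultdict

def aggregate_shopping_list(recipes):
    """Merge ingredient quantities from several recipes into one list."""
    totals = defaultdict(float)
    for recipe in recipes:
        for item, qty in recipe.items():
            totals[item] += qty
    return dict(totals)

# Two recipes sharing an ingredient collapse into a single line item
pasta = {"tomatoes": 4, "garlic cloves": 2}
salad = {"tomatoes": 2, "cucumber": 1}
shopping = aggregate_shopping_list([pasta, salad])
```

The real system also has to parse free-form ingredient text out of Instagram posts, which is the part the LLM handles; this shows only the merge.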
Thomas Burnham (aramova) [00:37:04]:
You've never found glue on here, right?
Larry Gold (LrAu) [00:37:06]:
Huh?
Thomas Burnham (aramova) [00:37:07]:
You never found glue on here, right?
Leo Laporte [00:37:08]:
No Elmer's glue, no rocks. What model are you using? OpenRouter and a local model, right?
Larry Gold (LrAu) [00:37:17]:
So I'm using OpenRouter. I have a local model for general stuff. For the coding, I go to Opus 4.6.
Leo Laporte [00:37:23]:
Of course.
Larry Gold (LrAu) [00:37:26]:
Anytime I write code, I say go to Opus 4.6. For the deep thinking, I'm using Kimi 2.5 now.
Leo Laporte [00:37:30]:
Okay.
Larry Gold (LrAu) [00:37:31]:
And that's on OpenRouter. Only the general stuff does that.
Leo Laporte [00:37:35]:
Are you using Kimi locally or using their server?
Larry Gold (LrAu) [00:37:37]:
No, I'm using OpenRouter, because this way I can switch between Kimi and other models very quickly and very efficiently.
Leo Laporte [00:37:43]:
And what is the token cost on Kimi? It's pretty low, isn't it?
Larry Gold (LrAu) [00:37:47]:
So this is hysterical. My cost for the month right now is 57 cents. It tracks my spend.
Anthony Nielsen [00:37:55]:
It's early yet.
Leo Laporte [00:37:56]:
It's only—
Larry Gold (LrAu) [00:37:57]:
it's early.
Leo Laporte [00:37:57]:
And it's only the 13th.
Larry Gold (LrAu) [00:37:59]:
Well, the other thing, you know, it's going to be $1.50 before you're done, right? If you go back to the dashboard, you can see this dashboard has been running for only 30 days. So most of the code was written previously. I've not asked IQ-9 to build much lately. It was really that—
Leo Laporte [00:38:16]:
you're not worried about security because you're not really giving it access to anything?
Larry Gold (LrAu) [00:38:21]:
No. So there's no email, there's— it doesn't connect to anything. You can't—
Leo Laporte [00:38:26]:
it doesn't have access to your hard drive.
Larry Gold (LrAu) [00:38:28]:
Doesn't have access to the hard drive. There's no— there's no nothing funky in here that, that would do anything like that. And I, and I told you, even the skills, I asked it to build the skills and everything by itself. I don't ask it to, to do anything that would be, you know, I would say soft or hard getting into this information that I don't want.
Leo Laporte [00:38:47]:
If you look, the health information doesn't have my blood pressure or, you know, a whole bunch of stuff. Um, I actually just found a program that takes all of my Apple Health data for the last 4 or 5 years and makes it into Markdown files that I fed to Claude.
Larry Gold (LrAu) [00:39:04]:
That'd be cool because I have a—
Leo Laporte [00:39:06]:
Claude said you need better sleep. Claude said, okay, there's only one real problem I see here.
Thomas Burnham (aramova) [00:39:16]:
Yeah, Garmin has a similar thing. I have a Garmin watch; it downloads it nightly.
Leo Laporte [00:39:23]:
I've been feeding Apple Health everything for a long time, so it's a longitudinal kind of database.
Anthony Nielsen [00:39:30]:
To reiterate what Parker is asking in the chat room: you said the only skills it's using are ones that it wrote or you wrote together? You're not, like, pulling in outside ones?
Larry Gold (LrAu) [00:39:38]:
Yes, ones it wrote or we wrote together. I do not use anybody else's outside skills.
Leo Laporte [00:39:42]:
That's probably smart. That's where you get prompt injection.
Larry Gold (LrAu) [00:39:45]:
Yeah.
Leo Laporte [00:39:46]:
And other problems. And how often do you look at the skills and massage them?
Larry Gold (LrAu) [00:39:53]:
Once they're running and I think they're good enough, I don't go back and check them, you know, 'cause it seems that once I got each piece running, I'm like, okay, wow, this is good. Let's let it run for a while. And again, it's only been a month, right? Really, I can say OpenClaw is 50+ days old, right? And the first 20 days was me experimenting. And then on the 20th day is when I said, okay, I want to create something.
Leo Laporte [00:40:15]:
And you got a Mac mini for this?
Larry Gold (LrAu) [00:40:18]:
So this actually now is running on an old PC because I decided to repurpose the Mac Mini for something else.
Leo Laporte [00:40:27]:
And I don't know why people think they need Mac Minis for this stuff. I guess if you're running local models, it would be better to run it.
Larry Gold (LrAu) [00:40:32]:
Yeah, but I kind of like Macs for certain things, but this is running on WSL Linux.
Leo Laporte [00:40:38]:
Oh, really?
Larry Gold (LrAu) [00:40:40]:
Yeah.
Leo Laporte [00:40:40]:
Wow.
Larry Gold (LrAu) [00:40:40]:
So I'm running Linux inside of a Windows— an old Windows PC.
Leo Laporte [00:40:43]:
But that's good in a way because you're running in a VM, right?
Larry Gold (LrAu) [00:40:46]:
Running. Absolutely. That's exactly that.
Leo Laporte [00:40:48]:
Yeah.
Larry Gold (LrAu) [00:40:49]:
And this way I can remote into the machine. When I'm running it, you can see I'm remoting into the Google Chrome browser, and that Linux box does not have access to anything outside. Right.
Leo Laporte [00:41:00]:
What, Darren?
Darren Oakey [00:41:02]:
You could run it on Docker on anything, right? And it would still be just as isolated.
Leo Laporte [00:41:06]:
Docker is not completely impermeable, is it? I mean, is it pretty good? Pretty sandboxed? Okay.
Larry Gold (LrAu) [00:41:14]:
Yeah.
Darren Oakey [00:41:14]:
Yeah. And we use Docker for every real thing in production.
Leo Laporte [00:41:18]:
So, I mean, I don't know enough about it, but I keep hearing of Docker security issues.
Larry Gold (LrAu) [00:41:25]:
Yeah, I hear those too. But I think if you're running it locally at home, no one— and you've got no incoming—
Leo Laporte [00:41:30]:
nobody's—
Larry Gold (LrAu) [00:41:32]:
yeah, yeah.
Leo Laporte [00:41:33]:
I just set up Tailscale, and I'm kind of praying, because, just a short anecdote: when I was in Orlando, Lisa said, uh, why am I getting all these charges from Claude? I said, what? You shouldn't be getting any charges from Claude.
Larry Gold (LrAu) [00:41:49]:
What?
Leo Laporte [00:41:49]:
She says, $10, $10, $10. Claude spent $200 in tokens in 2 days, and it was here, running locally, and I didn't have Tailscale set up, so I had no way of figuring out what was going on. So I disabled the API token, and that stopped it cold. But I had no idea what was going on.
Darren Oakey [00:42:09]:
Well, this is why I told you guys that I've moved over to completely using either local models or in-subscription stuff, because I've had 2 or 3 errors, and they're all mistakes on my part. But each time it's like $200 here, $300 there.
Leo Laporte [00:42:26]:
That sucks.
Darren Oakey [00:42:28]:
Lose a lot of money.
Larry Gold (LrAu) [00:42:29]:
So one of the things I've done with OpenRouter is I don't auto-refill.
Leo Laporte [00:42:34]:
Yes, that's one of the things I did when I got back: I set a limit and I turned off auto-refill, and, you know, yeah, yeah.
Larry Gold (LrAu) [00:42:42]:
And going back to the other health stuff: I have a Whoop. Well, I have an Apple Watch too, but Whoop has an API, so it's on my to-do list to connect to that. But I stopped wearing it, because Whoop kept telling me I couldn't sleep very well.
Leo Laporte [00:43:00]:
Yes.
Larry Gold (LrAu) [00:43:01]:
And I was getting restless, not being able to sleep, because Whoop is telling me I can't sleep.
Leo Laporte [00:43:06]:
Yes, you know, that's the worst thing about sleep trackers: they make you sleep more poorly. So maybe I'm going to do this, because you've kind of inspired me. Part of the problem I've had is I don't know what I would do with it. I mean, in Claude Code, I sit down and I say, oh, here, let's write something. Or, as I said, just yesterday I said, let's set up Tailscale, and it was very trivially easy. I wish I'd thought to do that before I left town. But it's really, you know, I'm just kind of simple like that.
Leo Laporte [00:43:39]:
I don't know if I would say, hey, tell me what I'm doing wrong in my life. Not that I don't think Claude would have some good advice. The other day, we're in the middle of a lawsuit over the house, and I fed it a bunch of discovery documents, and it did a really, really good job of finding things, of summarizing. It's amazing.
Larry Gold (LrAu) [00:44:08]:
But see, that I put in NotebookLM, because to me I would trust NotebookLM over anything else.
Leo Laporte [00:44:14]:
I don't know, I maybe—
Larry Gold (LrAu) [00:44:15]:
I see Tom shaking his head yes.
Thomas Burnham (aramova) [00:44:19]:
For OpenClaw, a use case I would look at for you, Leo, would be, you know, give it some themes on news articles you're looking for, and have it talk to itself to refine whether those themes make sense.
Leo Laporte [00:44:32]:
Yeah, I mean, the dream would be, because what I've really done is I've created more work for myself: now I look at 200 different RSS feeds, and every day I'm going through thousands of articles. And so the dream would be, if I trusted it: hey, surface the 10 most important things each day so that I can add them to the rundowns for the show. Uh, but I'm afraid I'll miss things. I think I have a kind of personality in the stories I choose, and I just don't know if an AI would do that well. I feel like that's the—
Anthony Nielsen [00:45:04]:
Or maybe flip it, so you show it what you have currently and then ask, did I miss anything big?
Thomas Burnham (aramova) [00:45:12]:
Well, and also don't let it reverse Centaur you. Let it, let it keep control of it.
Leo Laporte [00:45:17]:
That's what I've done, by the way.
Thomas Burnham (aramova) [00:45:19]:
Yeah, but, um, yeah, what would be—
Leo Laporte [00:45:25]:
So, okay, I like what you did, Larry: help me be a better husband, father, friend. I like that. Help me be a better person. That's a good project for it. What are some other things that you found it useful for? I mean, for me, it keeps coming back to news stories and stuff, but maybe I'm not ready for that yet. What else? I like the recipe thing.
Larry Gold (LrAu) [00:45:50]:
What I want to show is something I built. You know what Empower is, right?
Leo Laporte [00:45:53]:
Yeah, yeah. We use Empower.
Larry Gold (LrAu) [00:45:55]:
I built a version of that that can actually look at my portfolio. I extract the portfolio and do a full analysis. And you can ask it to build a full Monte Carlo simulation, so it'll actually show you all that. I couldn't write one myself, but I could say, you know, Claude, go write this.
Thomas Burnham (aramova) [00:46:17]:
Wow.
Larry Gold (LrAu) [00:46:18]:
You know, and give me an API to it. And then I can tell OpenClaw to connect to it. So if you look, actually, there are other pieces I said I can't really show you, but it does have the retirement planner. The other thing, one neat thing I did that I can show: one of the articles was about what's called an oversold scanner, and basically Warren Buffett's calculator on how he picks stocks. So I actually took the article and I dumped it into OpenClaw. I said, write the Python to do this. So, you know, we'll type in Microsoft, because everyone owns that stock anyway.
Larry Gold (LrAu) [00:46:56]:
And it'll tell you basically what Buffett's article would have generated, you know. There's no opinion on this or whatever; it's just what would generally be picked up from it. And then it's got this Williams %R threshold, which is a calculation you can find on the internet, and it runs that on top of it too and gives you that technical analysis. It was neat because it all came from the article: you could just take the article and say, hey, go write this code and build this into my application. Now—
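Williams %R, the indicator Larry mentions, is a standard formula: over a lookback window (traditionally 14 bars), %R = (highest high - close) / (highest high - lowest low) * -100, giving a value between 0 and -100, where readings below -80 are conventionally treated as oversold. A minimal sketch, not the code his article produced:

```python
def williams_r(highs, lows, closes, period=14):
    """Williams %R over the trailing `period` bars.

    Ranges from 0 to -100; readings below -80 are conventionally read
    as oversold, and above -20 as overbought.
    """
    highest = max(highs[-period:])
    lowest = min(lows[-period:])
    if highest == lowest:  # flat window; avoid dividing by zero
        return 0.0
    return (highest - closes[-1]) / (highest - lowest) * -100
```

With a 110 high and 90 low across 14 bars and a 92 close, this gives about -90, which an oversold scanner would flag.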
Leo Laporte [00:47:30]:
Are you getting rich?
Larry Gold (LrAu) [00:47:31]:
No.
Leo Laporte [00:47:34]:
Is it beating the market?
Larry Gold (LrAu) [00:47:39]:
So can I show you something that was done by Claude Code but wasn't done by OpenClaw, because it's on here also? So a long time ago I started a comparison. I don't know if you can see this, but it's OpenAI versus Claude trading crypto and trading stocks, and it's a fake account, right? Um, and basically it's been trading away. And who's purple?
Leo Laporte [00:48:07]:
I'm not using purple.
Larry Gold (LrAu) [00:48:09]:
That is Claude doing crypto, and orange is, what do you call it, OpenAI doing... sorry, no, that's Claude doing stocks.
Leo Laporte [00:48:18]:
So who's green?
Larry Gold (LrAu) [00:48:20]:
Green, this is actually OpenAI doing stocks. And what I asked is I asked Claude—
Leo Laporte [00:48:26]:
That actually looks a lot like the S&P 500, but okay.
Larry Gold (LrAu) [00:48:28]:
Oh no, actually the blue line is the actual S&P 500.
Leo Laporte [00:48:31]:
Oh, all right, okay. So it's tracking pretty well.
Larry Gold (LrAu) [00:48:35]:
So it's actually been beating the S&P 500. The big wins here both were AMD. So this big jump here and this big jump here.
Leo Laporte [00:48:43]:
It said buy AMD.
Thomas Burnham (aramova) [00:48:44]:
It believed in itself.
Larry Gold (LrAu) [00:48:46]:
Yeah, it was pretty cool. And I forgot where the big loss was, but it was an interesting project, because I asked OpenAI, write your own prompt for picking stocks. And then I said the same thing to Claude, and the same thing for crypto. And I said, okay, now let's build an application to run it daily, get this information from Yahoo or yfinance, and just pretend you have an account, starting with $10,000. There's a whole bunch of people who do this on GitHub, so I kind of figured I'd do my own.
Anthony Nielsen [00:49:17]:
How often would it check and reevaluate?
Larry Gold (LrAu) [00:49:21]:
So it reevaluates every 2 hours. So it can't do any day trading or anything like that, or, you know, high-frequency trading.
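A paper-trading loop like the one Larry describes is easy to sketch. `pick_action` here is a made-up stub for the real decision step (his version prompts OpenAI or Claude and pulls prices from Yahoo Finance every 2 hours); the loop just walks a price series with a pretend $10,000 account.

```python
def pick_action(prices):
    # Hypothetical stand-in for the model's decision; the real version
    # sends recent prices to an LLM prompt and parses back buy/sell/hold.
    return "buy" if prices[-1] < prices[-2] else "hold"

def paper_trade(prices, cash=10_000.0):
    """Walk a price series, reevaluating at each step, tracking a
    pretend account instead of a real brokerage."""
    shares = 0.0
    for t in range(1, len(prices)):
        action = pick_action(prices[: t + 1])
        if action == "buy" and cash > 0:
            shares += cash / prices[t]
            cash = 0.0
        elif action == "sell" and shares > 0:
            cash += shares * prices[t]
            shares = 0.0
    return cash + shares * prices[-1]  # mark the account to market
```

Keeping the decision function separate is what lets the same harness race two different models (OpenAI versus Claude) against the S&P 500 line on the chart.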
Leo Laporte [00:49:32]:
Right.
Larry Gold (LrAu) [00:49:35]:
But again, it was one of the first things that I actually did in Claude Code, you know, because I told you I was using other tools before that. The other stuff has too much in there. But this is still actually my favorite: IQ-9. If you're a Space Cruiser Yamato fan or Star Blazers fan, that was the robot from there. And yes, I'm wearing my Space Cruiser Yamato Hawaiian shirt for the demo, just to be consistent. And then I do have one called IQ-10 that I'm working on now, which is on the new laptop, but it completely just killed itself. It was really bad. So I'll get that up some other time.
Leo Laporte [00:50:24]:
Nice.
Larry Gold (LrAu) [00:50:27]:
Is there anything... I wasn't checking the chat for questions. Anything on the OpenClaw stuff that I missed and should answer?
Leo Laporte [00:50:32]:
Trust No One says... oh, this is why I'm building a personal assistant based on Claude Code, says Peter Parker. Yeah, I feel like I can trust it. I would agree that OpenClaw is just too Wild West right now. There's always a risk. Anthropic's trying to build consumer-safe stuff, and I think we're going to see really interesting consumer-safe stuff soon. I mean, this is obviously a hot, hot area.
Larry Gold (LrAu) [00:50:55]:
Yeah, I think, you know, you already saw announcements of things like Perplexity's assistant or Cowork, which do the same thing.
Anthony Nielsen [00:51:06]:
How much have you spent? I'm curious, if you don't mind: how much have you spent on OpenClaw since, uh, you know, diving in?
Larry Gold (LrAu) [00:51:15]:
So February, I think, was the highest. That was like $47 or something like that. And March has been nothing, because it's just running, right? And then the models I'm using are so cheap, right? Because until you do coding, when I go to Opus 4.6, it's very inexpensive.
Leo Laporte [00:51:34]:
Good, good. I just have to think of something I need. I mean, honestly, Claude Code scratches the itch for me right now.
Larry Gold (LrAu) [00:51:44]:
But I would say, because you're a coder, yes. For other people, when the other tools become available that are more user-friendly and more secure, I would go in that direction. That's at least my thought.
Darren Oakey [00:52:00]:
What I'm using it for— well, I'm not using OpenClaw, obviously, but I've built similar things.
Leo Laporte [00:52:07]:
And, um, I—
Darren Oakey [00:52:10]:
It's... there's stuff that you want to do now, and there's stuff that you want to do ongoing. And so, like, you were saying about the news thing. I've made a news trainer that gives me news suggestions. It subscribes to, like, 200 different RSS sites, gives me a bunch of articles, and then classifies them into good, great, or other. And then if you go to my website, you can see the ones classified as ace, and the ones graded as good on there. And every now and then I go and retrain it if I see something I don't like. So you could do something like that. But then I thought, well, okay, I'll add a daily digest to that.
Darren Oakey [00:53:02]:
So it now has a little daily summary that is more like an article on those news things. If you jump out of news, there are a lot of things on your machine that I think are fun to do. Like, I've got one thing going through at the moment, and it's describing, using Moondream Studio, all my photos on my Google Photos backup. It's giving each a description so I can have a local search of them and things like that. But then I also found another model that has an aesthetic score. So it's actually giving all my images a score just in terms of how attractive they are. And so the combination of that is—
Leo Laporte [00:53:51]:
Is it good at that?
Darren Oakey [00:53:53]:
Well, I can show you. Let me show you. I think so. It's only done a certain percentage of them at the moment.
Larry Gold (LrAu) [00:54:08]:
I wonder if it's doing like a Bayesian analysis. Could you give it like a like or dislike, like spam?
Darren Oakey [00:54:15]:
For the email... I'm actually using a Bayesian filter for the news trainer. So this is the news trainer. You can see it's saying this is good, this is good, this is good. And I can say, like, for instance, this one, I don't care about security, so I just say other, right? And that just goes into my training. Every time I do this, it goes into my training. And so every now and then I can just look at this and go, oh yeah, that's right, I don't care about that.
Darren Oakey [00:54:48]:
And all of that. So I've labeled 1,900 things. And in terms of my curated news, you can see... great, yeah, I think that's pretty interesting. In fact, that is interesting, that they've just opened the 1 million context window for all of us. So, things like this.
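A "Bayesian filter" trained on labeled headlines, as Darren describes, is classically a naive Bayes classifier. A minimal stdlib sketch of the good/great/other idea; his actual trainer is its own system, and the headlines and labels below are invented:

```python
import math
import re
from collections import Counter, defaultdict

class NewsTrainer:
    """Tiny naive-Bayes headline classifier with add-one smoothing."""

    def __init__(self):
        self.word_counts = defaultdict(Counter)  # label -> word frequencies
        self.label_counts = Counter()            # label -> examples seen

    @staticmethod
    def _words(text):
        return re.findall(r"[a-z]+", text.lower())

    def train(self, headline, label):
        self.label_counts[label] += 1
        self.word_counts[label].update(self._words(headline))

    def classify(self, headline):
        words = self._words(headline)
        total = sum(self.label_counts.values())
        best, best_score = None, -math.inf
        for label in self.label_counts:
            # log prior + log likelihood of each word under this label
            score = math.log(self.label_counts[label] / total)
            denom = sum(self.word_counts[label].values()) + len(words) + 1
            for w in words:
                score += math.log((self.word_counts[label][w] + 1) / denom)
            if score > best_score:
                best, best_score = label, score
        return best

trainer = NewsTrainer()
trainer.train("new ai model released", "great")
trainer.train("ai context window expanded", "great")
trainer.train("celebrity gossip scandal", "other")
trainer.train("gossip about celebrity feud", "other")
```

Each "retrain" click in his UI corresponds to one more `train` call, which is why the filter keeps drifting toward his tastes as the labeled count (1,900 so far) grows.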
Leo Laporte [00:55:08]:
Has anybody used the million? Just out of curiosity.
Anthony Nielsen [00:55:14]:
Well, I'll be curious. I will for the TWiT best of episode.
Thomas Burnham (aramova) [00:55:18]:
Oh yeah.
Anthony Nielsen [00:55:18]:
I'll throw all the transcripts in there.
Thomas Burnham (aramova) [00:55:23]:
Yeah.
Larry Gold (LrAu) [00:55:23]:
So I'm— I like it. Go, Darren.
Darren Oakey [00:55:24]:
No, I think we're all using it, because, you know, you always get to that point where it says compact now, right? And a million at that point is way further away.
Leo Laporte [00:55:34]:
That's true. I haven't turned it on.
Thomas Burnham (aramova) [00:55:37]:
I'm—
Leo Laporte [00:55:37]:
I think I'm afraid of how much it's going to cost. I haven't had to turn it on, so I'm still using, at least in Claude Code, the 200,000.
Larry Gold (LrAu) [00:55:45]:
So my theory, and the way I work when I'm coding, is that 0 to 30% is the sweet spot. That's when you know you're going to get much better code out of something. 30 to 60 is okay, but from 60 to 100, when you're in that range of your context window, to me it's a failure. I spend a lot of my time trying to figure out how to keep that context window as low as possible and break things down into much smaller tasks. Again, I'm a huge fan of Spec Kit or, you know, GSD or any of these spec-driven approaches, because you want to break stuff down into very small tasks and then work in a window of that context. So if you have a million-token context window, it just gives me a little bit longer of that 30% to play with.
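Larry's 30/60 rule is easy to mechanize if you can estimate how full the window is. A small sketch; the 4-characters-per-token figure is only a rough heuristic for English prose, and real counts come from the model's own tokenizer:

```python
def context_zone(used_tokens, window=200_000):
    """Larry's rule of thumb: 0-30% of the window is the sweet spot,
    30-60% is okay, and past 60% output quality falls off."""
    frac = used_tokens / window
    if frac <= 0.30:
        return "sweet spot"
    if frac <= 0.60:
        return "okay"
    return "degraded"

def estimate_tokens(text):
    # Very rough heuristic (~4 characters per token for English prose).
    return len(text) // 4
```

Raising `window` to 1_000_000 shows his point about the million-token window: the same transcript that was "degraded" at 200K sits comfortably inside the sweet spot.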
Thomas Burnham (aramova) [00:56:34]:
Well, when I'm coding, I see the same thing. And the first step is always to have it draw up a detailed Markdown file, as though it's a project manager, to direct it later to do the programming. That produces a better code outcome for a couple of reasons. One, you can have it refresh midway through, when you get to 45-50% and you don't want to lose your prior context from the last half hour or so. You can just say, hey, reread this Markdown file of the plan and make sure you're up to date. Who's this joining us?
Larry Gold (LrAu) [00:57:12]:
My dog decided to check in.
Leo Laporte [00:57:17]:
Cute.
Darren Oakey [00:57:20]:
So yeah, it's just finishing the Daily Digest. It's just a summary of any new great or good things that have come in in the last day; it gives a little article of them. But look at what I was talking about with the other things I'm doing. This latest one is just looking at and describing every image. I've got hundreds of thousands, I've got 3 terabytes of Google Photos images, and it's just going through and describing every single one of them so they're available for search. But these are from my photo album. These are the highest scored by aesthetics.
Anthony Nielsen [00:58:05]:
Looks like it's working.
Thomas Burnham (aramova) [00:58:06]:
Yeah. So yeah.
Darren Oakey [00:58:08]:
And these are just things I've taken, like images from my dog walk or the Cook Islands or whatever. But you can see some of them.
Leo Laporte [00:58:20]:
The Cook Islands stuff looks gorgeous, I have to say.
Darren Oakey [00:58:24]:
And so that comes from... this is sort of my version of something like that. I've got a different model, but you can see all of that is just coming from this graph: it's going through and modeling everything and describing all the images. So it's described 140,000 so far, and it's given 33,000 an aesthetic score, and things like that. So these are just ongoing things, and the news piece is just classifying all the news articles and so on.
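Once every photo has a model-written description and an aesthetic score, local search is just an inverted index over the description words. A minimal sketch, with the paths, descriptions, and scores invented for illustration:

```python
import re
from collections import defaultdict

class PhotoIndex:
    """Inverted index over model-written photo descriptions, so a
    backed-up photo library becomes searchable locally."""

    def __init__(self):
        self.postings = defaultdict(set)  # word -> paths containing it
        self.meta = {}                    # path -> (description, score)

    def add(self, path, description, aesthetic_score):
        self.meta[path] = (description, aesthetic_score)
        for word in re.findall(r"[a-z]+", description.lower()):
            self.postings[word].add(path)

    def search(self, query):
        """Return paths matching every query word, best score first."""
        words = re.findall(r"[a-z]+", query.lower())
        hits = set.intersection(*(self.postings[w] for w in words)) if words else set()
        return sorted(hits, key=lambda p: self.meta[p][1], reverse=True)

index = PhotoIndex()
index.add("a.jpg", "Dog on the beach at sunset", 0.9)
index.add("b.jpg", "Beach umbrella in the rain", 0.4)
index.add("c.jpg", "City street at night", 0.7)
```

Sorting hits by the aesthetic score is what turns a plain search into the "highest scored by aesthetics" view Darren was showing.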
Thomas Burnham (aramova) [00:59:02]:
Is this working off of local tokens or are you routing this through something?
Darren Oakey [00:59:07]:
Everything I do is free or inside my subscription. I've got the Daz Agent SDK, which is a wrapper around Claude Code and Codex. Everything works inside the subscriptions: they use the Claude Agent SDK, and Codex actually has a Codex agent SDK. And I work with Gemini by putting it in a tmux-type thing. So that does that. And yeah, I can see everything on my machine.
Anthony Nielsen [00:59:46]:
So, you're running all your images... going back to that image thing where it's bringing out the description and also scoring it: what model? Do you have a computer running, like a server running, like a—
Darren Oakey [01:00:04]:
Yes, there's a server on here. Like I've shown my auto thing before, which manages lots of processes. It's like a launch daemon type thing. And I'm running Moondream Studio, which is a local model thing for doing the descriptions of the images. And I can't even remember what model.
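The describe-and-index loop Darren describes can be sketched as follows. This is a minimal illustration, not his actual code: the `describe` callable stands in for whatever wraps the local vision model (Moondream here, but any local endpoint would do), and the SQLite schema is a hypothetical one chosen for the example.

```python
import json
import sqlite3
from pathlib import Path

def describe_pipeline(photo_dir, db_path, describe):
    """Walk photo_dir and store a JSON description for every new image,
    so the whole library becomes searchable.

    `describe` is any callable taking a file path and returning a dict,
    e.g. a wrapper around a local vision model's HTTP endpoint.
    """
    con = sqlite3.connect(db_path)
    con.execute(
        "CREATE TABLE IF NOT EXISTS images ("
        " path TEXT PRIMARY KEY, description TEXT, aesthetic REAL)"
    )
    # Skip anything already described on an earlier pass, so the job
    # can run continuously and only pay for new photos.
    done = {row[0] for row in con.execute("SELECT path FROM images")}
    new = 0
    for img in sorted(Path(photo_dir).rglob("*.jpg")):
        if str(img) in done:
            continue
        info = describe(img)  # e.g. {"description": ..., "aesthetic": ...}
        con.execute(
            "INSERT INTO images VALUES (?, ?, ?)",
            (str(img), json.dumps(info), info.get("aesthetic")),
        )
        new += 1
    con.commit()
    con.close()
    return new
```

Because the described/not-described state lives in the database, the same script can be re-run as a long-lived background job and it only processes the backlog once.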
Anthony Nielsen [01:00:24]:
So it's not just like running all the time, like the— it's managing like what to spin up and like—
Darren Oakey [01:00:29]:
no, so those are just running as servers all the time.
Thomas Burnham (aramova) [01:00:33]:
Okay.
Darren Oakey [01:00:33]:
And then this, this is, um, in the, um, Stateflow thing. This, if you go into a cell, so images describe, where is it? I don't know how to use my own program.
Larry Gold (LrAu) [01:00:53]:
You need a UX expert.
Darren Oakey [01:00:55]:
Yeah, exactly. If you look at events, for instance, you can see this is every event that's happened, so image save. I'm looking for an image describe event. Oh, news label event. Sorry.
Thomas Burnham (aramova) [01:01:23]:
If you have a local model processing these images, then that's going to save you on the tokens.
Anthony Nielsen [01:01:28]:
Yeah. I guess my next question is: how many apps and computers do you have running in your home lab then?
Darren Oakey [01:01:37]:
I've only got 3 computers running. Well, and NASes and stuff. And yeah, various things are running on various computers, but I've got a lot of stuff running in Docker on the NASes. So a whole bunch of things running in Docker. Yeah, so you can see this has just made a call. So this is how the cell is described, and the source code for each cell. Then the output of that from the service is just the JSON description of that particular image. But because it's all an event store and everything, I can see every single event that's ever happened, and I can trace through it and do all sorts of things like that.
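The event-store idea Darren is relying on can be shown in a few lines. This is a generic sketch, not his system: events are appended and never updated, so any result (an image description, a news label) can be traced back through everything that ever happened to it. Table and event names here are made up for the example.

```python
import json
import sqlite3
import time

class EventStore:
    """Append-only event log: every action records an event, nothing is
    mutated in place, so full history is always available for tracing."""

    def __init__(self, path=":memory:"):
        self.con = sqlite3.connect(path)
        self.con.execute(
            "CREATE TABLE IF NOT EXISTS events ("
            " id INTEGER PRIMARY KEY AUTOINCREMENT,"
            " ts REAL, kind TEXT, subject TEXT, payload TEXT)"
        )

    def append(self, kind, subject, payload):
        # Inserts only; the log is the source of truth.
        self.con.execute(
            "INSERT INTO events (ts, kind, subject, payload)"
            " VALUES (?, ?, ?, ?)",
            (time.time(), kind, subject, json.dumps(payload)),
        )
        self.con.commit()

    def trace(self, subject):
        """Every event ever recorded for one subject, oldest first."""
        rows = self.con.execute(
            "SELECT kind, payload FROM events WHERE subject = ? ORDER BY id",
            (subject,),
        )
        return [(k, json.loads(p)) for k, p in rows]
```

For example, tracing one photo would replay its save event and then its describe event, in order, which is exactly the "I can see every single event that's ever happened" property.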
Darren Oakey [01:02:27]:
And I've got a similar thing to the news thing that allows me to like compare songs and play them, you know, work out my perfect playlists and generate playlists for the thing and so on. So of course it's obviously not working at the moment. So that's why you never demo things that you haven't tested.
Leo Laporte [01:02:49]:
Anyway.
Thomas Burnham (aramova) [01:02:50]:
Awesome.
Darren Oakey [01:02:54]:
But it's just to give ideas.
Leo Laporte [01:02:56]:
That's cool. Yeah, I like that.
Darren Oakey [01:02:58]:
In terms of ongoing stuff, there's a lot of stuff on your machine which you might want to index or track. We've all got stuff that is currently not useful because it's just in a big pile somewhere, and it's about finding ways of making it useful.
Thomas Burnham (aramova) [01:03:12]:
Yeah, the thought that I've had coming out of this is, my inspiration with OpenClaw: I'm going to feed it my OPNsense read-only and see if it can track trends in the firewall or any kind of oddities, because I'm trying to troubleshoot a Matter issue with my lights and it could be helpful with that.
Leo Laporte [01:03:34]:
Yeah, I'm really looking forward to doing Home Assistant, coding up some Home Assistant front ends. I've got the Home Assistant Green server, but I need to really— it's not that useful. Their software is kind of ugly. I would like to— I feel like this is a natural.
Thomas Burnham (aramova) [01:03:51]:
Um, Claude Code is excellent with Home Assistant.
Leo Laporte [01:03:53]:
Yeah, I figured.
Thomas Burnham (aramova) [01:03:54]:
I integrated some lights that don't officially support Home Assistant, and I managed to vibe code an integration, uh, having it use Nmap to find these Tuya lights on my local VLAN.
Leo Laporte [01:04:08]:
Nice.
Thomas Burnham (aramova) [01:04:09]:
Found the port, found it responds to mDNS, hijacked that, and now I'm running my own vibe-coded Python service with Home Assistant to, uh, run the lights.
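The discovery step Thomas describes is essentially: scan the VLAN with Nmap, then pick out the hosts with the light's control port open. A minimal sketch, assuming you feed it Nmap's grepable output (`nmap -oG -`); the port number in the usage below is a hypothetical example, not necessarily what his lights use.

```python
import re

def hosts_with_open_port(nmap_grepable, port):
    """Parse `nmap -oG -` (grepable) output and return the IPs that have
    the given TCP port open, e.g. to find smart lights on a VLAN."""
    hosts = []
    for line in nmap_grepable.splitlines():
        if not line.startswith("Host:"):
            continue
        m = re.match(r"Host:\s+(\S+)", line)
        ports = re.search(r"Ports:\s+(.*)", line)
        if not (m and ports):
            continue  # host line with no port scan results
        # Each entry looks like "6668/open/tcp//service///"
        for entry in ports.group(1).split(","):
            fields = entry.strip().split("/")
            if fields[0] == str(port) and fields[1] == "open":
                hosts.append(m.group(1))
    return hosts
```

In practice you would pipe in something like `nmap -p 6668 -oG - 192.168.50.0/24` and hand the surviving IPs to the Python service; mDNS confirmation would be a separate step.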
Leo Laporte [01:04:22]:
That's a very good use. I like that. Awesome.
Anthony Nielsen [01:04:30]:
Well, uh, we're about an hour in now if we want to—
Leo Laporte [01:04:34]:
Is it time to call it a day?
Anthony Nielsen [01:04:36]:
Unless anyone has anything else they want to talk about or show off.
Larry Gold (LrAu) [01:04:40]:
The one thing that's funny that I didn't show you guys is the same thing that Darren has. I have a service manager also, because I was thinking the same thing: how do I look at all the services that are there? These are all the services and when they run. I can check the logs. I can run something now if I want. I can see the history of when it was run and whether it succeeded or failed. Without a doubt, it's something that I wanted to make sure we have. There's a lot of stuff, as I said, all these check-ins; you can see that Nate's at work or home if I want a live work check, everything. I wanted the same thing with Schedule Manager, to figure out where everything is running.
Larry Gold (LrAu) [01:05:22]:
I said, hey, build me a UI so I can look at it. Let me see the UI, let me see the logs, let me see the history. It's neat to have something like that.
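The service manager Larry describes (run history, success or failure, logs) reduces to a small run registry. A minimal sketch under assumed names; his real version has a UI and a scheduler on top, neither of which is shown here.

```python
import time

class ServiceManager:
    """Minimal run registry for scheduled services: record each run with
    its outcome and log, then query history per service."""

    def __init__(self):
        # Append-only list of (service, timestamp, ok, log) tuples.
        self.runs = []

    def record_run(self, service, ok, log=""):
        self.runs.append((service, time.time(), ok, log))

    def history(self, service):
        """All recorded runs for one service, oldest first."""
        return [r for r in self.runs if r[0] == service]

    def last_status(self, service):
        """'success', 'failure', or None if the service never ran."""
        h = self.history(service)
        if not h:
            return None
        return "success" if h[-1][2] else "failure"
```

A UI like the one Larry asked for is then just a view over `history()`: one row per service, colored by `last_status()`, with the stored log text behind a click.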
Leo Laporte [01:05:31]:
You've put me to shame. Mine's just a TUI. That's much prettier. I mean, I do everything as a TUI. I don't want to use the mouse.
Larry Gold (LrAu) [01:05:44]:
Well, you have to make, you know, Alan Cooper proud.
Leo Laporte [01:05:46]:
Yes, yes.
Larry Gold (LrAu) [01:05:49]:
Show it to him.
Leo Laporte [01:05:50]:
I need widgets. Yes, I agree, I agree.
Larry Gold (LrAu) [01:05:53]:
He would be upset that I took all his classes and I didn't learn anything.
Leo Laporte [01:05:58]:
I love it. Well, hey guys, really a pleasure. Nice to meet you, Thomas. Thank you for joining us and thank you for that. I wanna play with that NotebookLM. It's amazing what you've managed to do.
Thomas Burnham (aramova) [01:06:11]:
Yeah, absolutely. I'll share all the links in the chat so everybody can take a look at it.
Darren Oakey [01:06:17]:
Okay, great, great.
Leo Laporte [01:06:17]:
Larry, thank you for the OpenClaw inspiration. I guess I'll install it.
Thomas Burnham (aramova) [01:06:23]:
Oh.
Leo Laporte [01:06:23]:
Darren, do you find it's a— is the Synology NAS fast enough to run these Docker containers?
Darren Oakey [01:06:29]:
Ah, I'm not running that sort of thing.
Thomas Burnham (aramova) [01:06:33]:
Oh, okay.
Darren Oakey [01:06:33]:
Like most of the things I'm running, for instance, they're all very lightweight things. Like one of them is a thing called Chatterbox or something. It's a third party; they're all third-party things. But one of them watches websites and gives you an event when a website has changed. So it's a lightweight poller that runs every now and then. There's another thing like this for various media things and things like that. So they're all just services that I need to run all the time.
Darren Oakey [01:07:13]:
I don't want to think about them. Also, my NAS is on ExpressVPN and everything. Things I just want to always run behind the VPN and things like that.
Leo Laporte [01:07:24]:
Sure.
Larry Gold (LrAu) [01:07:24]:
Yeah. Mine is running on a laptop that's got to be 7 or 8 years old right now. It's not a fast laptop. Again, because the models and everything are outside, and that's really where the processing is. The home server that has the LLM locally, I mean, that's an NVIDIA 5070 GPU card. So that's fast. And then I think I posted in the AI user group, I'm looking at, they sell V100 chips on PCIe cards.
Leo Laporte [01:08:00]:
Really?
Larry Gold (LrAu) [01:08:01]:
Yeah. So if you go on eBay and you search—
Darren Oakey [01:08:03]:
My worry with that is the memory. They don't have much memory on them.
Larry Gold (LrAu) [01:08:08]:
Well, you can get the 32 gig, which again, for the models that you want to run locally, should be fast enough. So if you just go on eBay and search Tesla V100 PCIe card, you should be able to see some for sale. They have 16 gig and 32 gig ones. I've seen them for between $200 and $250 for the 16. Um, that's cheaper than a 5070 card right now.
Leo Laporte [01:08:36]:
Yeah.
Larry Gold (LrAu) [01:08:38]:
Um, and remember, there's no video out, so you're just putting it in there for compute. You just need a machine that has enough power to supply to it. So that was something I saw over the weekend. I'm like, okay, this is actually reasonable to buy and to put into a machine.
Leo Laporte [01:09:01]:
Thank you, everybody. It's a pleasure doing this with you. I wish we were doing it every week, but you've given me something to work on. I think you've given us all something to work on. Uh, Larry, Darren, Thomas, and of course Anthony, thank you for being here. Thank you to all the people in the chat. Thanks to our club members who make all this possible. Without you, there would be no stream.
Leo Laporte [01:09:24]:
The stream would be dead, sad to say. Thank you everybody for being here. We'll see you next month, usually the first Friday of the month. I was out of town this time. I think we will be able to do it the first Friday of next month. So only 3 weeks.
Larry Gold (LrAu) [01:09:38]:
That's, uh, that's Good Friday, so we gotta—
Leo Laporte [01:09:40]:
oh no, we'll figure something out.
Darren Oakey [01:09:43]:
Yeah, man.
Larry Gold (LrAu) [01:09:44]:
Oh, I mean, I'm around because I don't celebrate, but I just know it's Good Friday because the day before is— we do a charity event. So—
Leo Laporte [01:09:52]:
Oh nice, I see you do a charity for, uh, Morgan Stanley.
Larry Gold (LrAu) [01:09:57]:
Yeah, that's cool. Morgan Stanley Children's Hospital. Yeah, I was a former patient there, so I spent a lot of time there.
Leo Laporte [01:10:02]:
Oh, nice, giving back.
Larry Gold (LrAu) [01:10:04]:
Absolutely.
Leo Laporte [01:10:05]:
Thank you, everybody. Have a wonderful day.