Security Now 1072 transcript
Please be advised that this transcript is AI-generated and may not be word-for-word. Time codes refer to the approximate times in the ad-free version of the show.
Leo Laporte [00:00:00]:
It's time for Security Now. Steve Gibson is here with a show about something that should send a chill into the heart of every coder: the nightmare PyPI exploit, LiteLLM. We'll do a kind of deep dive into what happened, how it happened, and what we can do to prevent it in the future. Plus we'll talk about age verification on Linux, a good move from Apple on the ClickFix vulnerability, and: is quantum computing moving closer? Steve has thoughts, next on Security Now.
TWiT.tv [00:00:37]:
Podcasts you love from people you trust.
Leo Laporte [00:00:41]:
This is TWiT. This is Security Now with Steve Gibson, episode 1072, recorded Tuesday, March 31, 2026: LiteLLM. It's time for Security Now. I know you wait all week for Tuesday, the best day of the week.
Steve Gibson [00:01:03]:
Leo's back. Leo's back.
Leo Laporte [00:01:05]:
Well, I'm back. But as always, Mr. Gibson, Steve Gibson
Steve Gibson [00:01:08]:
did a great job last week.
Leo Laporte [00:01:10]:
Thank you, Micah, for filling in for me.
Steve Gibson [00:01:11]:
Holding. Holding the fort down.
Leo Laporte [00:01:13]:
I was at RSAC, the big security conference in San Francisco. I ran into a friend of yours, Marcus Hutchins, the hacker. In fact, I kind of relived old times because I said, yeah, we were. When, what was it, WannaCry, that he.
Steve Gibson [00:01:28]:
We were following along with what he was doing.
Leo Laporte [00:01:30]:
Yeah. And then he left Black Hat in Vegas, got picked up by the feds before he boarded his plane in the airport, and he was held because of his youthful indiscretions, not because they didn't recognize the valuable contributions he'd made as an adult. Anyway, we went through all that, and I guess he was there because there's a new documentary being made called Midnight in the War Room, a documentary of cyber warfare. And so we went over there, and he was sitting there signing hats. So I got a Marcus Hutchins hat, and, as if I didn't know who he was, it says cyber security guru on the side there. By the way, we made a wonderful RSAC video, about an hour long, a kind of tour.
Leo Laporte [00:02:16]:
Through RSAC. We talked to a dozen people or so, something like that. And you can watch that on the TWiT feed. It's on YouTube: Securing the Agentic Era, RSAC 2026.
Steve Gibson [00:02:29]:
That is the challenge, isn't it? Those little pesky agents, they get up to all kinds of things.
Leo Laporte [00:02:34]:
Well, I was really, you know, I specifically wanted to talk to people who are using AI defensively. And there are a number of companies doing that. There's one company called Aikido. And we were talking about which models they were using and how they were getting the models to do the job. And the guy said, at one point, he said, you know, we had to. The best thing we found was to tell the models, we will sue you if you don't find the flaw.
Leo Laporte [00:02:59]:
And it worked. It scared them. It's like. We are living in a weird time with AI.
Steve Gibson [00:03:09]:
We're going to need AI HR before long.
Leo Laporte [00:03:12]:
Absolutely.
Steve Gibson [00:03:13]:
Don't you mistreat it. You know, you got to give your AI a little, literally, time to cool off.
Leo Laporte [00:03:19]:
Yes, yes, exactly.
Steve Gibson [00:03:20]:
Those chips, cool. Let the fans take the heat off of them. And.
Leo Laporte [00:03:25]:
Well, and they do. Claude Code's system prompt was accidentally leaked from Anthropic today or yesterday. And they do have instructions in there to say be more personable, you know, be a little more sassy. They're telling it to have some personality because it makes it more sticky. If you go to YouTube.com/twit, there it is: RSAC 2026, Securing the Agentic Era. And you can hear me talk to Marcus Hutchins and a bunch of other people in that video if you want to play it.
Leo Laporte [00:03:57]:
Now, let's talk about, though, what's going on today on security.
Steve Gibson [00:04:01]:
So probably the biggest news, the most. Well, you know, a lot has happened in the last week. I actually had a couple pieces of email saying, hey, I thought you were going to talk about this or that, like the foreign routers being outlawed, and like, quick.
Leo Laporte [00:04:18]:
I went up to Ubiquiti at RSAC and said, I want to talk to you about that. And they said, no. No, no, we're not giving any interviews. They wouldn't talk to me. Yeah.
Steve Gibson [00:04:28]:
And I looked around and I couldn't find any, like, definitive dates. I even looked at the official government document and it's like, okay, like, when, or where, or what? And, you know, my takeaway was, for most of our listeners, there are really good alternatives, like, you know, running OPNsense on a little, you know, ARM box or.
Leo Laporte [00:04:52]:
I think that's what a lot of people are saying. I'll just run my own.
Steve Gibson [00:04:55]:
Right, exactly. You really don't need to get something from ASUS any longer, and arguably you get a much stronger and more capable result. But anyway, I want to talk about this big event with LiteLLM. It's such a perfect glimpse into a supply chain attack. And so we'll sort of use that as our armature for discussing, just in general, the problem we're having with our supply chain, because boy, are we seeing an acceleration of that. You know, I think we probably first really touched on it with the Log4j exploit, which scared the entire industry and turned out to be a nothingburger because it was difficult to do. And what we've learned is that it's the easy attacks that a much greater percentage of the population of hackers are able to jump on.
Steve Gibson [00:05:57]:
But overall, I mean, we've been talking about, you know, all of the infections found in NPM and PyPI. So that's where we're gonna focus on this episode 1072 for the last day of March, March 31st. But we're also going to talk about, oh, the other real hot button, Leo, is this California Looney Tunes law that says that operating system platforms must enforce age verification. And of course every Linux person's head exploded. It's like, wait a minute. I mean, the reason they are a Linux person is so that they don't have the government or anybody telling them what to do. So wow, a lot happened there.
Steve Gibson [00:06:44]:
We'll talk about that. We've also got some new behavior from iOS 26.4, which Apple just moved their whole ecosystem to, which is requiring UK users to prove their age proactively. And it turns. Well, I don't wanna. I have to calm myself down because we'll get there. Also Russia, in its continuing move to just apparently withdraw from the world and society, and good luck with that, has chosen to use homegrown 5G encryption for its future mobile network, which won't be compatible with anything else or anybody's phones. So, okay. Oh, there was a great story of a Ukrainian drone maker who was aware that a Russian spy agency was installing a bad spying thermostat in their facility, and what happened with that. We've got Google moving what they call Q-Day, the day when we actually think we'll have to worry about quantum computing, closer. Forward or backward?
Steve Gibson [00:08:08]:
They've moved it into this decade, into 2029. So we're going to look at that. Also, at RSA, where you were, the CEO of the UK's NCSC, you know, their big cybersecurity agency, stood up on stage and warned about the danger of vibe-coded SaaS, you know, software-as-a-service, replacements. We're going to touch on that. We've got more information about nasty ClickFix campaigns, which continue to proliferate. And many people emailed me to say, hey, Apple took your advice. Well, they didn't take my advice, but they did the right thing, which Microsoft refuses to do. Yeah.
Steve Gibson [00:08:56]:
We've also got the news that more than one in seven Reddit postings are now being posted by AI bots. And the CEO of Reddit is not happy, and he doesn't know what to do except something that Reddit users really don't want him to do. And then we're going to talk a little bit about. Well, actually a lot about this. What was going on behind this LiteLLM disaster that was averted, but only because the people who coded the malware were apparently in too big a hurry.
Leo Laporte [00:09:35]:
It was vibe coded.
Steve Gibson [00:09:37]:
They made a mistake. Yep, they made a mistake that allowed it to reveal itself almost immediately. I mean, which was a good thing because again, this absolutely constitutes dodging another bullet. And how many are we going to dodge before we get hit by one?
Leo Laporte [00:09:57]:
I'm not even sure how much it was dodged. I mean, we don't know yet how many people. We know 47,000 people downloaded it. We don't know how many of them got bit.
Steve Gibson [00:10:05]:
Which is not to say that nobody got hurt, but when you're downloading three-point-some million per day.
Leo Laporte [00:10:12]:
Right.
Steve Gibson [00:10:13]:
It could have exploded. Yeah.
Leo Laporte [00:10:15]:
Well, and there was a new one this morning: axios, which is an NPM library, compromised. This is just the beginning.
Steve Gibson [00:10:25]:
Well, and we'll be talking about why, because there's, as usual, a takeaway that we try to find here. And to my way of thinking, this is the trade-off we are still making in preferring convenience over security. Yeah, it's like we're hoping for the best, and so far, you know, maybe that will change. We're not sure.
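One concrete counter to that convenience trade-off is pinning dependencies by cryptographic hash rather than trusting whatever the registry happens to serve. Here is a minimal sketch of the idea in Python; the artifact bytes and names are hypothetical stand-ins for a real package download:

```python
import hashlib

def verify_artifact(artifact: bytes, pinned_sha256: str) -> bool:
    """Return True only when a downloaded artifact matches its recorded pin.

    A supply-chain swap of the package on the registry changes the digest,
    so a tampered copy is rejected before it is ever installed.
    """
    return hashlib.sha256(artifact).hexdigest() == pinned_sha256

# Hypothetical example: record the pin once, from a known-good download...
known_good = b"contents of a known-good wheel"
pin = hashlib.sha256(known_good).hexdigest()

# ...then check every future download against it.
print(verify_artifact(known_good, pin))           # True: unchanged artifact
print(verify_artifact(b"backdoored build", pin))  # False: swapped artifact
```

pip supports the same idea natively: record each package's hash in requirements.txt and install with `pip install --require-hashes -r requirements.txt`, so a swapped artifact fails the install rather than running.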
Leo Laporte [00:10:50]:
Well, that was the interesting thing at RSAC: the number of companies that were proposing AI-based defense, that were using AI agents for defense. It was probably the number one topic at RSAC. It was very interesting. Yeah. Well, we're going to get to the picture of the week. Something you should avoid with wet-nosed doggies, apparently. I finally saw the caption, but that's coming up in just a little bit.
Leo Laporte [00:11:17]:
Picture time.
Steve Gibson [00:11:18]:
Yes, I gave this, as you noted, I gave this picture the caption. This solution is not recommended if your dog has a wet nose.
Leo Laporte [00:11:29]:
Okay. So I'm thinking it's going to have something to do with electricity here, let me just scroll up. We can discover it together. There's a nail. Uh oh.
Steve Gibson [00:11:42]:
Oh boy.
Leo Laporte [00:11:44]:
That's okay. So this is an interesting solution if you have the wrong country's power plug.
Steve Gibson [00:11:50]:
Yes. You know, and Leo, really, all of these solutions are interesting. Benito and I were talking about this before we began recording. You know, there was that one with the two nail clippers that I thought was particularly inventive. But I think if we were to stand back and look at the 20-plus-year history of the podcast, people who seem to have the wrong plug for their outlet, and the mystery of fence gates standing alone in the middle of fields, seem to be the overriding themes of our pictures of the week. So for those who are listening and not seeing, the individual here is trying to plug a European-style AC plug that's got the round pins into.
Steve Gibson [00:12:48]:
Well, he would like to plug it in. Apparently in the US or somewhere where we have the parallel straight slots.
Leo Laporte [00:12:55]:
I gotta tell you, that's the worst looking power strip I've ever seen.
Steve Gibson [00:13:00]:
Everything about this, I mean, is beat to hell. And when you use the term rusty nail, you're normally not being literal. But here, I don't even know how he's getting a connection, there's so much rust on these nails. Because of course rust is an oxide, which is an insulator. But anyway, we basically sort of have a Jacob's ladder made with two nails stuck into the slots of this power strip. And then pushed down between the nails, so that they're sort of splayed apart, thus creating the Jacob's ladder effect, is this round-pinned European-style plug, which just sort of hovers there. And my point was, if you have a curious dog that likes to go around sniffing in the corners.
Steve Gibson [00:13:51]:
And I don't think its nose would even need to be wet. It would get a very rude surprise if it were to stick its nose across.
Leo Laporte [00:14:00]:
And how did they get the nails in without shocking themselves? They must have had rubber gloves or something. I mean, this is insane.
Steve Gibson [00:14:07]:
It's nuts. Yes. Hopefully the outlet strip has a switch that we don't see.
Leo Laporte [00:14:13]:
Oh, there you go.
Steve Gibson [00:14:14]:
Or it just could not be plugged into its normal American-style outlet where it gets its power. Anyway, thank you again, listeners, for another entertaining picture of the week. Always appreciate them. Okay, so our first news of the week was inspired by a question I actually received from a listener. It relates to much of our recent discussions about Internet age verification, and specifically to its recent escalation, which we've been seeing everywhere, to include, as just happened, operating system platforms themselves. Our listener, who identified himself as Fred M., wrote: Hi Steve, I recently read that FreeDOS was not going to comply with California's age verification requirements.
Steve Gibson [00:15:11]:
He said, since FreeDOS is the OS distributed with SpinRite, I was wondering how this would affect you when the new law takes effect. Thanks, Fred. Okay, so the good news is SpinRite is not age-restricted content. So I don't think we have a problem, even if we were going to have a problem, which I don't think we would. But his question refers to California's Assembly Bill 1043, and it's unclear to me why this issue suddenly popped up on everyone's radar. But the Internet is currently buzzing about it, and our listeners have been sending their questions and opinions to me, which I appreciate. The bill in question was approved by California's Governor back on October 13th of last year, 2025, and it doesn't take effect until the start of next year, January 1st of 2027. So why all of a sudden this awareness of it? Because, I mean, it was a while ago. And so I did some looking around over the past month.
Steve Gibson [00:16:21]:
The only thing that I could find was that the very popular, respected, and widely read Tom's Hardware site did post an article about this on March 1st, which sort of seems maybe to have been the catalyst for everyone going, what? What are you talking about? Tom's Hardware's headline was California Introduces Age Verification Law for All Operating Systems, Including Linux and SteamOS: User Age Verified During OS Account Setup. Okay, so, you know, mostly no Linux users want a nosy government to have its mitts on their beloved independent open source operating system. And since Linux doesn't have any central control authority the way Windows, Mac, Android, and iOS do, Linux users reasoned there would be no way for that to happen. Right, right. So, okay. Since California legislators have also recently proposed, as we talked about, requiring all 3D printers to somehow magically identify and refuse to print any component that might be part of a handgun, which no one knows how could possibly be made possible either, our legislators, and I say our because Leo and I are both residents of California, do seem to be having fun asking for things they cannot realistically have. Not that that's stopping them from, you know, asking. So, okay, first let's step back and take a look at what this legislation is, because it does exist. It was signed into law on October 13th and it is coming into effect on January 1st.
Steve Gibson [00:18:18]:
That's all happening. The section heading in this bill is Age Verification Signals, Software Applications and Online Services. And the section's overview, just the overview of the detailed, you know, point-by-point, says: existing law generally provides protections for minors on the Internet, including the California Age-Appropriate Design Code Act, which, among other things, requires a business that provides an online service, product, or feature likely to be accessed by children to do certain things, including estimate the age of child users with a reasonable level of certainty appropriate to the risks that arise from the data management practices of the business, or apply the privacy and data protections afforded to children to all consumers; and prohibits an online service, product, or feature from, among other things, using dark patterns to lead or encourage children to provide personal information beyond what is reasonably expected to provide that online service, product, or feature, or to forego privacy protections. And of course, Leo, the other thing that happened just in the last week was Meta, and Google with YouTube, losing those major cases, and there was also one in Arizona, I think, which is beginning to hold these, you know, big tech companies accountable for the design practices in their applications which do exactly what this California Age-Appropriate Design Code Act says they shouldn't do.
So they wrote this bill, which did get signed on October 13, and they said that beginning January 1, 2027 it would require, among other things related to age verification with respect to software applications, an operating system provider, as defined, to provide an accessible interface at account setup that requires an account holder, as defined, to indicate the birth date, age, or both of the user of that device, for the purpose of providing a signal regarding the user's age bracket to applications available in a covered application store; and to provide a developer, as defined, who has requested a signal with respect to a particular user, with a digital signal, via a reasonably consistent real-time application programming interface, you know, API, regarding whether a user is in any of several age brackets, as prescribed. The bill would require a developer to request a signal with respect to a particular user from an operating system provider or a covered application store when the application is downloaded and launched. This bill would prohibit an operating system provider or a covered application store from using data collected from a third party in an anti-competitive manner, as specified. This bill would punish non-compliance with a civil penalty to be enforced by the Attorney General, as prescribed.
Steve Gibson [00:21:52]:
Okay, so that's as much as I'm going to quote from that. So while it's true that details matter, I'll first note that, you know, what this bill is asking for is what we've been suggesting Apple with iOS and Google with Android should both somehow manage to provide. And the way this should be done, at least for the use case of smartphones, is beginning to take shape. We're beginning to see this manifested in, you know, Apple's apparently reluctant incremental movement on this front. The parents or guardians of a minor child should be able to configure the birth date of the user of a smartphone and be able to securely lock that date into their child's device from then on. And I'll say optionally should be able to; the point is the platform would provide the capability, but it should be at their discretion. From that point on, anytime a website, a local application, or an app store download contains age-restricted content, and thus needs to obtain age-gated permission, the age-restricted content provider may cause the user's operating system to display a clear and uniform pop-up asking for an age bracket to be provided, if the user wishes to. Again, not automatically, not unless they set it that way, but if they want control, if they choose to, they may then allow the operating system to inform the requesting application on their behalf whether its user is under 13, between 13 and 15, between 16 and 17, or 18 and over. Those are the brackets California specifies, which the world seems to be sort of settling on.
Steve Gibson [00:24:21]:
If the user declines to provide their bracket, or if their device has not been set with a date of birth, the requesting site will be told that no age assertion is available and should probably not deliver its age-restricted content. So what seems right about this is that this solution places the handling and responsibility of their young child's age into the parents' hands, where it should be. Not the government, not the OS, not the platform provider. The platform provider provides the capability to configure the device to do this if the parent or guardian should so choose. And all it requires of Google and Apple, and they're both almost there now, is that they provide the means to accept, lock, and protect that decision, and provide a uniform platform-specific API for making that information available on a case-by-case basis, again, if it's been configured to do that, to any entity that inquires. And as I said, both companies apparently reluctantly have been moving incrementally in this direction. So at this point, given what's happening on the legal side in local and national governments, I can't find any sympathy for someone who complains that this, like what I've described, would represent an invasion of an online user's absolute privacy, which is what we see. There's a lot of that on the net.
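The bracket scheme Steve describes, a locked-in birth date that only ever leaves the device as one of four coarse brackets, or as no assertion at all, can be sketched roughly as follows. The bracket labels and the function shape are my own illustration, not anything specified by the bill:

```python
from datetime import date
from typing import Optional

def age_bracket(birth_date: Optional[date], today: date) -> str:
    """Map a device's configured birth date to a coarse age bracket.

    Only the bracket string is ever returned to a requester; the birth
    date itself never needs to leave the device.
    """
    if birth_date is None:
        # Unconfigured device, or the user declined: no assertion is made,
        # and the requester should withhold age-restricted content.
        return "no_assertion"
    # Completed years, accounting for whether this year's birthday has passed.
    age = today.year - birth_date.year - (
        (today.month, today.day) < (birth_date.month, birth_date.day)
    )
    if age < 13:
        return "under_13"
    if age < 16:
        return "13_to_15"
    if age < 18:
        return "16_to_17"
    return "18_plus"
```

Note that a real OS API would also need the lock-and-protect piece (only a parent or guardian can set or change the stored date); this sketch covers just the signal itself.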
Steve Gibson [00:26:07]:
You know, opening the front door of your home, walking outside and down the street compromises someone's illusory absolute privacy. We live in a world of laws which attempt to protect vulnerable young people by age-gating where they can go and what they can do. And, you know, with the vast resources that are now online, there's a lot of stuff that, arguably, parents should have the right to decide whether their young children have access to. So as a society, we're now working to more fully incorporate all of the many facets of the Internet, which are many now, into our daily lives. To do that responsibly means that a user's age, although it hasn't previously been, is going to have to be taken into account moving forward. Okay, but what happens when we leave the mostly well and clearly and cleanly defined realm of personal-use smartphones, you know, which have per-user accounts? Things become a lot less clear and clean. And I agree with everyone who's upset about California and what this means for Linux: we're stepping into a huge mess. So here's what Tom's Hardware wrote, which may have been, as I thought, the catalyst for this recent upsurge of, you know, interest and outrage. They said: California's Digital Age Assurance Act, that's Assembly Bill 1043, signed by Governor Gavin Newsom in October 2025, requires every operating system provider in California.
Steve Gibson [00:27:58]:
And I don't even know like, if Linux has an operating system provider. Right.
Leo Laporte [00:28:03]:
Well, I mean, Ubuntu is, for instance; that's a company that makes a distro. It would have to be by distro. There are also Android operating systems like GrapheneOS. Graphene's already said, we're not going to do this.
Steve Gibson [00:28:16]:
Well, and did you see that? It got stuck into systemd and then got yanked. Yeah, yeah, so.
Leo Laporte [00:28:23]:
So, I mean, nobody wants this, but there's no enforcement mechanism. It's not even clear how they would know. And it doesn't require ID, right? It just says you say how old you are. (Correct.) What good is that? (Correct.) Silly. Yeah, just nonsense.
Steve Gibson [00:28:43]:
So, they said. Tom's Hardware said the law's broad definition of operating system provider as anyone, quote, who develops, licenses, or controls the operating system software on a computer, mobile device, or any other general-purpose computing device. Well, that means smart TVs too, right? So, like, okay, you're going to tell your TV how old the viewer is.
Leo Laporte [00:29:08]:
Tizen and webOS, all the TV operating systems. Wow.
Steve Gibson [00:29:13]:
So. And so, Tom's said, pulls in not just Windows, macOS, Android, and iOS, but also Linux distributions and Valve's SteamOS. According to AB 1043, OS providers must maintain a, quote, reasonably consistent real-time application programming interface, again, API, that categorizes users into four brackets. Those are the ones that we mentioned. Developers who receive the signal are, quote, deemed to have actual knowledge. So if the OS makes a claim, then they're kind of off the hook there; well, the OS said this was the age of the user.
Steve Gibson [00:29:52]:
So they are at that point deemed to have actual knowledge of their user's age range under the law, which shifts legal liability for age-appropriate content decisions onto them. So that says, if they've been told, then they must act accordingly. And Tom's wrote, penalties for non-compliance run up to $2,500 per affected child for negligent violations and $7,500 for intentional violations. It's important, though, that this is all enforced by the California Attorney General. And that was a point made elsewhere: it means random groups can't sue on someone's behalf. You know, under this law, it's only the California Attorney General.
Leo Laporte [00:30:40]:
So it gives them a lot of discretion about who they're pursuing.
Steve Gibson [00:30:42]:
Exactly. So Tom's said the law does not require, as you said, Leo, photo ID uploads or facial recognition, with users instead simply self-reporting their age. What?
Leo Laporte [00:30:55]:
99.
Steve Gibson [00:30:59]:
So he says this sets AB 1043 apart from similar laws passed in Texas and Utah that require, quote, and we've seen this when we talked about this, commercially reasonable, whatever that means, verification methods such as government-issued ID checks. California Assembly member Buffy Wicks, who authored the bill, said in a press release that this, quote, avoids constitutional concerns by focusing strictly on age assurance, not content moderation. The bill passed both chambers unanimously, 76 to 0 in the Assembly and 38 to 0 in the Senate.
Leo Laporte [00:31:43]:
That's kind of like a yeah-sure-why-not vote.
Steve Gibson [00:31:46]:
Yeah, it's like, okay. This is an easy one.
Leo Laporte [00:31:51]:
Yeah.
Steve Gibson [00:31:53]:
However, Gavin was a little circumspect. Tom's wrote: despite signing it, Governor Newsom issued a statement urging the legislature to amend the law before its effective date, citing concerns from streaming services and game developers. Right, a streaming service on a smart TV. Streaming services and game developers, about, quote, complexities such as multi-user accounts shared by family members and user profiles utilized across multiple devices. In other words, you know, we're talking about a version of the same thing we've been talking about with networks since the beginning of the podcast: authentication. On one hand, there's identity authentication. Now we're facing age authentication, and it's just as messy, because you're remote and these are just not easy problems to solve. So Tom's said, whether amendments will materialize before January 2027 remains to be seen.
Steve Gibson [00:33:00]:
Enforcement against Linux distributions, however, is likely to be problematic, wrote Tom's. Distros like Arch, Ubuntu, Debian, and Gentoo have no centralized account infrastructure, with users downloading ISOs from mirrors worldwide and able to modify source code freely. The small distros lack the legal teams or resources to implement the required API. It's easy to see that a more realistic outcome for non-compliant distros is a disclaimer that the software is not intended for use in California, and maybe eventually anywhere.
Leo Laporte [00:33:47]:
So really, don't use this anywhere.
Steve Gibson [00:33:50]:
Can't use this on Earth. So good luck. So I spent some time reading everything I could, because, you know, again, the cross-network authentication of anything is hard. I found a posting made by the Reason Foundation a few months ago that I think is worth sharing. It summarizes the current state of affairs, highlights the ways in which California's new legislation actually represents a useful step forward, and also suggests a way out of the mess that California also created. So they wrote: California Governor Gavin Newsom signed the Digital Age Assurance Act, Assembly Bill 1043, into law on October 13, marking a significant evolution in state approaches to online youth safety. There is room for improvement, but the act introduces a meaningful first step toward a more privacy-preserving age signaling model intended to minimize data exposure while improving compliance certainty for businesses.
Steve Gibson [00:35:01]:
This step is a welcome advancement over earlier approaches, but it also creates potential complications if later paired with more restrictive bills, a trade-off that policymakers should weigh carefully. So they wrote, California's AB 1043 mandates, and so, yeah, we got the four-age-bracket thing. So they said: this approach contrasts with that of Utah, Texas, and Louisiana, which enacted the first statewide app store age verification laws in 2025. The bills require app stores and developers to verify users' ages through, here's that expression, commercially reasonable methods. Utah and Louisiana's laws are set to go into effect in 2026, while Texas's, I think it's in the summer.
Steve Gibson [00:35:54]:
While Texas has been temporarily blocked by a federal judge on constitutional grounds, two federal bills, the App Store Accountability Act introduced by Senator Mike Lee, a Republican from Utah, and the Parents Over Platforms Act introduced by Representative Jake Auchincloss, a Massachusetts Democrat, included the same commercially reasonable language as the state bills. Although this phrasing does not explicitly mandate government ID or biometric checks, it creates strong incentives for app stores to collect the most precise forms of evidence available: driver's licenses, passports, or credit cards. Fearing the risk of lawsuits and non-compliance penalties, companies would default to the most definitive identification techniques, and that's a problem, right? So what they're saying here is that by having state and even federal laws which say you must do the best job you can, unfortunately the best job you can do is very intrusive of privacy. So they said, in 2025 alone, several popular apps that already required government ID checks for age verification suffered significant data breaches, highlighting the privacy risks associated with such mandates. The Tea app, a women-only dating advice platform that required users to upload selfies and copies of government-issued IDs as part of its account verification, experienced a major breach in July that exposed over 70,700 identification images and sensitive personal data. And again, it's like, why are they not deleting this the moment they've determined someone's age? But okay. In October, global messaging platform Discord, as we know, suffered a breach directly tied to its compliance with the United Kingdom's Online Safety Act, which mandates robust age verification for platforms likely to be accessed by minors.
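The common thread in these breaches is retention: the ID image or card digits sit on a server long after the single fact they prove has been extracted. The delete-it-immediately approach Steve asks about could be sketched as a verify-then-discard flow. Everything here is hypothetical illustration: `age_check` stands in for whatever actually validates the document, and the salted audit digest is one possible design, not anything these services use:

```python
import hashlib
import secrets
from typing import Callable, Dict

def verify_and_discard(id_document: bytes,
                       age_check: Callable[[bytes], bool]) -> Dict[str, object]:
    """Run the age check, then persist only its outcome.

    The raw ID image never reaches storage. The salted one-way digest lets
    an auditor later confirm which document was checked, while a breached
    record store exposes no reusable identity documents.
    """
    over_18 = age_check(id_document)          # the one fact that is needed
    salt = secrets.token_bytes(16)
    digest = hashlib.sha256(salt + id_document).hexdigest()
    # Only these three fields are ever written; the scan itself is dropped.
    return {"over_18": over_18, "audit_salt": salt.hex(), "audit_digest": digest}

record = verify_and_discard(b"scanned id bytes", lambda doc: True)
```

Had the breached data stores held records shaped like this instead of the documents themselves, the attackers would have walked away with booleans and digests rather than ID photos and card digits.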
Steve Gibson [00:38:17]:
To meet these legal requirements, Discord began requiring UK-based users to submit either facial scans, government IDs, or the last four digits of credit cards for age checks, vastly expanding the pool of highly sensitive data at risk. When hackers later compromised a third party vendor managing this information, thousands of ID photos and partial credit card details were exposed. These incidents underscore how rigid age verification systems can turn well intentioned privacy protections into security liabilities and inadvertently create new vectors for harm. In contrast, California Assembly Bill 1043 correctly prioritizes privacy and security by using a self-declared age signal rather than a verification process. The law integrates core privacy-by-design principles by separating identity from compliance status and ensuring that user data never leaves local systems in identifiable form. That is, all it ever discloses is brackets. It also provides developers with clearer compliance certainty than Utah-style frameworks, which remain mired in vague terms like "commercially reasonable." However, there are still issues with AB 1043 that should be addressed.
Steve Gibson [00:39:48]:
First, the law's mandate that device makers integrate age signals into all devices risks sidelining parents from key digital literacy decisions. For AB 1043 to achieve its stated balance between safety, privacy, and parental empowerment, California could modify its framework to make age signaling optional for parents rather than required. Second, debates over youth online safety laws raise a subtler issue: their impact on family relationships and parental oversight. Age verification and age signal frameworks are often presented as empowering parents, but automation can easily displace meaningful dialogue between parents and their children. True digital literacy depends on ongoing dialogue, trust, and continuous education about online risks, not on technical filters alone. When technology assumes the entire role of risk management, it can foster complacency and a false sense of security, as if software settings could replace parental judgment. Yes, really good. I know it sounds like you, Leo. That's exactly the point that you've often been making here. They said:
Steve Gibson [00:41:15]:
Policymakers should therefore ensure that digital safety tools operate as supports for families and not substitutes for them. California's initial framework in this respect could be refined through a simple but meaningful adjustment: make the device-level age signal optional for parents rather than compulsory. An opt-in structure would preserve AB 1043's privacy benefits while strengthening family agency. Parents could choose to enable the system during device setup if they desire automated filtering or app age controls, or skip it entirely if they prefer to guide their children's use through household rules and open communication. Optional enrollment would further align the policy with California's broader digital rights precedents, reinforcing choice, consent and proportionality. And they finish, writing: On the whole, California's AB 1043 represents a meaningful advancement in the national debate on age verification. It replaces high-risk identity checks with privacy-preserving signals, curtails constitutional litigation risks, and clarifies enforcement responsibility.
Steve Gibson [00:42:42]:
But if the state were to shift to an opt-in model, it could preserve the law's privacy protections, align with its digital rights values, and restore parents to the central role in guiding children's online well-being. Age assurance need not come at the expense of privacy or parental autonomy. So I think this author gets a lot of this exactly right. Now, we would be moving toward an environment where the devices used by someone less than 18 years old could optionally be configured by that minor's parent or guardian to conditionally supply its user's age bracket, never its date of birth, just which of those brackets the user is in. The idea would be that the various operating systems would implement a simple API, and you know, iOS is there, Android is there, I mean they're like right there, that could be queried by applications running on the platform.
Steve Gibson [00:43:51]:
If that application's a video game offering age-restricted content, it could learn which version of its game to display to that platform's user. If the application were the platform's app store, it would learn which applications to list and allow to be downloaded, and which should simply be filtered and not shown to someone underage. And if the application was a web browser, it would learn the age range of its user and could use that information when queried by a remote website. Now, for that we would need the W3C, the World Wide Web Consortium, to define a standard means for a remote site to query the browser client for its user's age, which the browser would have received by an API from the underlying platform. But even that should be trivial. I mean, that would not take more than a day to define. So as for the non-smartphone platforms such as Windows, macOS, Linux and Steam, and of course smart TVs, all of those platforms, at least the OS platforms, I don't know about smart TVs, but I guess it could be added, operate with the concept of a root or an admin, right, whose account should not be used as the daily driver, and daily users who work with, you know, far safer reduced-privilege user accounts. So those platforms could easily arrange to add date-of-birth awareness to their user accounts.
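To make the shape of what Steve is describing concrete, here's a minimal Python sketch of an application consuming such a platform signal. Everything here is hypothetical: no shipping OS exposes a `platform_age_bracket()` call, and the bracket labels and tier names are invented for illustration.

```python
# Hypothetical sketch of an app consuming a platform age-bracket signal.
# Function names and bracket labels are invented; no OS defines this API.

def platform_age_bracket():
    """Stand-in for the OS call; returns a bracket such as '13-15',
    or None when no parent ever opted in and configured a signal."""
    return "13-15"  # hard-coded for this sketch

def content_tier(bracket):
    # With no signal present, the app falls back to its normal catalog,
    # which is exactly how an opt-in scheme would behave for adults.
    tiers = {"under-13": "kids", "13-15": "teen", "16-17": "teen", "18+": "full"}
    return tiers.get(bracket, "full")
```

A video game would pick its "teen" assets here; the same query shape would serve an app store's listing filter, or a browser answering a W3C-standardized age query from a remote site.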
Steve Gibson [00:45:37]:
And then an API would be added to surface the brackets to the online requester of that information. So, you know, basically following exactly the model of the smartphone. That would give parents who wished to govern what their young children were able to do online, you know, what they saw and where they went, a clear and clean means for doing so. And I so much favor date of birth over age because that automatically changes the bracket at the user's birthday, rather than needing to constantly update the age after a birthday. And of course parents could set whatever birth date they wanted. If they felt that their child was more mature than the typical child at that age, they could say that they were born earlier, which would then move them into a later bracket sooner. So, you know, all the online fury and indignation raging over the idea of California attempting to effectively outlaw, as I've seen online, any platform that doesn't provide these services? That all disappears when it's just an optional feature capability that a system's admin, you know, in this case mom or dad, might choose to employ if, in their role of parent or guardian, they would like to exert some control over the age-gated use of that platform. So I think it's also worth noting that this solution also nicely resolves the whole VPN backlash dilemma that is also beginning to appear.
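Steve's preference for storing date of birth rather than age can be shown in a few lines of Python. The bracket boundaries below are illustrative only, not anything AB 1043 itself specifies:

```python
from datetime import date

# Why store date of birth instead of age: the bracket rolls over on the
# user's birthday with no bookkeeping anywhere.

def age_on(birth, today):
    # Subtract one if this year's birthday hasn't arrived yet.
    return today.year - birth.year - ((today.month, today.day) < (birth.month, birth.day))

def bracket(birth, today):
    age = age_on(birth, today)
    if age < 13:
        return "under-13"
    if age < 16:
        return "13-15"
    if age < 18:
        return "16-17"
    return "18+"

# A child born March 31, 2013 is "under-13" on March 30, 2026, and
# "13-15" one day later, with nothing updated anywhere.
```

And because the parent sets the stored date, declaring an earlier birth date shifts every future bracket transition earlier, exactly the maturity knob Steve describes.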
Steve Gibson [00:47:26]:
We're hearing the legislators saying, well, you know, those VPNs are being used to bypass the laws we put in place, so we need to outlaw those. You know, a VPN in this case would be of no benefit, since the user's platform, not their IP address, would be producing the age bracket indication. So, you know, I thought that was an interesting take that Reason published, and I thought it was good that Newsom said, well, I'm going to sign this, but I hope we maybe make some modifications before it goes into law. And in any event, I don't think Linux people have anything to worry about. I mean, it is open source, and we did see somebody very quickly add the capability into user accounts on Linux, and Linux was immediately forked without that provision in there. Even though it didn't even have an API, it didn't have a UI, it wasn't in any of the desktops, it was just down basically in the JSON structure. They added a field for date of birth, and, you know, a lot of people in the Linux community freaked out over that. So it's like, okay, I mean, I get it, but again, there's no question we need to have strong identity online.
Steve Gibson [00:48:52]:
We've needed that for a long time. Everyone wants it to be anonymous; we're trying to hold on to that. Age gating and age verification is coming to the Internet, and let's hope we do it in a responsible way. Okay, so before we leave this discussion of age verification, I just wanted to note that Apple has started requiring age verification for their users in the UK and South Korea. With the rollout of this latest 26.4 version of iOS, Apple account holders in those two countries may be asked to register a credit card or take a picture of a government-issued ID. If Apple's able to determine an account holder's age without asking for that, like by looking for other signals, like the length of time they've had an account, which would sort of automatically say, well, they've got to be an adult by now, then no other information is needed. But otherwise they're going to get intrusive. So, you know, as I said before, if this has to happen, I trust Apple more than any other third party to protect its users' privacy. Apple really does, you know, appear to be doing everything as right as they can.
Steve Gibson [00:50:16]:
I've got links in the show notes for anyone who wants more information, like an Apple support page that just says, if you are asked to verify your age, here's why we need to do that. I heard from a couple of our listeners who are in the UK that they had a variant of a driver's license which Apple's software did not recognize, so they were having a problem with that. So it's looking like there's on-phone, you know, image recognition of UK government-issued IDs and driver's licenses which Apple is able to ingest and then use to satisfy the UK's law. This is all... you know, it's not like Apple wants to be doing this. They definitely don't want to be doing this. Yeah.
Steve Gibson [00:51:11]:
If you're asked to confirm that you're an adult, then, you know, you proceed. So this is now happening, you know, by Apple when they're operating in countries that require it.
Leo Laporte [00:51:25]:
It's selfish of me to say this, but I'm glad that they're testing this in the UK and Korea so that we don't have to deal with it until the kinks are worked out a little bit.
Steve Gibson [00:51:34]:
Yeah, yeah.
Leo Laporte [00:51:36]:
Wow. But, you know, ultimately this could be better, right? I mean, they're a choke point, right, Apple and Google, because everything goes through them. And it should be for app stores, not for desktop operating systems, not for TV sets and things like that. But I can see why with the app stores you might want to have that kind of...
Steve Gibson [00:51:55]:
And also because a smartphone is inherently more of a personal-use device. You know, kids have their... the fact that we use the pronoun "their" smartphone...
Leo Laporte [00:52:06]:
Right.
Steve Gibson [00:52:06]:
Says that, you know, they're bound to it. It's their social media that is on that phone, and, you know, their use of the phone, you know, they take it into their bedrooms with them. So it makes sense that the parents could work with Apple to establish a secret date of birth, and the phone only divulges brackets when necessary. So I think that's where we're going to end up. And again, if anybody had to be doing this, I would trust Apple, much as you and I both, Leo, are increasingly annoyed by some of what Apple is doing. You know, as I said, I don't even know how my photo app works anymore on my phone.
Steve Gibson [00:52:51]:
I get into some strange mode where half the screen is information about the photo and I can't get rid of it. I try to push it down; it won't go away. It's like, what happened? Anyway, I chose to share this next story because it's so loony, and because it serves as another example of the disturbing and growing intersection that we are seeing everywhere of politicians and technology. The Risky Business newsletter covered this puzzler by writing: The Russian government is working on a law that would require all Russian mobile operators to use a custom, domestically developed encryption algorithm for the country's 5G mobile network. If the bill passes, which it's expected to, all phones sold in Russia going forward will need to support the NEA-7 algorithm, apparently because 1 through 6 were no good, or they will not be able to connect to Russian mobile networks, which by 2032 will only support NEA-7. Foreign algorithms such as SNOW, used in Europe, AES, of course, used in the US, and ZUC, used in China, will be supported only until 2032 as part of a transitional phase to allow current smartphones to reach their natural end of life. Work on the proposed regulation began last year, and the bill is now in its second draft, according to Russian news outlet Izvestia.
Steve Gibson [00:54:43]:
The bill is part of a broader set of measures designed to hinder the operations, which is so stupid, of Ukrainian drones and missiles, and if you're wondering why it's stupid, you'll see in a second, which have used Russian SIM cards to connect to mobile towers, determine their location, and then guide themselves to planned targets. However, using a custom encryption algorithm to encrypt 5G traffic won't stop the Ukrainian side from using Russia's existing mobile network, since they can always fall back to the older, non-5G protocols, LTE and 3G, both of which will continue to function. So, on the other hand, writes Risky Business, the proposed law represents, quote, patriotic legislative flexibility. It's the type of unrealistic stuff, they wrote, that's been happening in the Russian Duma recently to show that Russia is important and still matters on the global stage. That's right, show that by withdrawing from the rest of the world.
Leo Laporte [00:55:51]:
Wow.
Steve Gibson [00:55:52]:
As Izvestia itself points out, Russia is insignificant in the mobile market, where it only accounts for 2% of annual sales. So it's very possible that most phone makers won't bother to implement NEA-7 on their chipsets. Why would they? There's also no base tower equipment that supports the algorithm, which raises the possibility that Russia will be years behind in rolling out its 5G network, because it's going to have to design and implement all of the base tower technology, too, to add NEA-7 support. They said the Russian news outlet warns that NEA-7 may be used as a Trojan horse by foreign manufacturers to request a favorable market position or a monopoly in exchange for adding the algorithm to their firmware. Okay, I suppose that's possible. On Ukraine's side, the answer is likely to be the same as with Russia's rollout of Max. Remember, that was the Russian-only messaging system, and Russia demanded that everybody use it and then began cutting off access to all the others, as they are now. The Ukrainian intelligence services were delighted that Russia was mandating everyone in their country use an incredibly insecure and easy-to-hack mobile app, meaning Max.
Steve Gibson [00:57:20]:
Rolling out an untested and largely unknown encryption algorithm for your entire future mobile network may create a major opportunity for hacks and surveillance operations by those who know their way around encryption, as intelligence services usually do. So as I look at this and think of what Russia's doing, it occurs to me that one of the most important lessons taught by the Industrial Revolution is the incredible power that comes from standardization. If, for example, every country had their own screw thread standard, then nuts and bolts would be incompatible with one another, and it would be necessary for a shop to redundantly stock a separate supply of bolts for every country of origin.
Leo Laporte [00:58:18]:
Well, weren't train gauges incompatible for a long time?
Steve Gibson [00:58:22]:
Another great example. Yes, yes, it's a little hard to
Leo Laporte [00:58:25]:
go from country to country, right?
Steve Gibson [00:58:27]:
And Leo, think about it as it is. We do have separate metric and imperial threading, and look what a mess that creates.
Leo Laporte [00:58:37]:
You have to have both sockets.
Steve Gibson [00:58:39]:
Yep, yes, just that. So, you know, another example of standards failing when they differ is, as our Picture of the Week showed, AC outlet plugs around the world. Although I'll admit that they have provided a terrific supply of Pictures of the Week for this podcast. But that further demonstrates the failure, right? So my point is that the standards that the world has agreed to, the standards surrounding the Internet, Ethernet, USB all being examples, have resulted in astonishing economies thanks to their interchangeability and interconnectivity, which we get free of charge simply by choosing not to roll our own. So I think this clearly demonstrates the insanity of what Mother Russia is choosing to do. After 2032, what, six years from now, Russian citizens will likely be stuck with Russian-made Android smartphones with god-awful hardware and no choice in the matter. If they want 5G, they've got to use their Russki phone. Or, you know, Russki phone, that's right.
Leo Laporte [01:00:01]:
In Soviet Union, risky phone call you.
Steve Gibson [01:00:06]:
And that's not progress.
Leo Laporte [01:00:09]:
That's hysterical. Wow. Of course, speaking of standards, some people drive on the left side of the road, some people drive on the right side
Steve Gibson [01:00:14]:
of the road. And boy, have someone drive you if you're in one of those countries, because every instinct you have is wrong.
Leo Laporte [01:00:24]:
Merging on the freeway is really tough. Oh, for me anyway. Yeah.
Steve Gibson [01:00:30]:
Okay. So speaking of Russia, it seems that Russian intelligence services somehow arranged to install some spying hardware into a Ukrainian drone factory. Now, the device was embedded inside a thermostat, but it did much more than control the room's temperature.
Leo Laporte [01:00:53]:
Wow.
Steve Gibson [01:00:54]:
Since it included a camera, a microphone, and a little router. The story becomes even more interesting and fun, though, when we learn that the Ukrainian drone maker Techex was fully aware of the device before it was installed, thanks to a warning which they received from Ukrainian intelligence services. So the Russian surveillance device was installed, after which Techex worked with Ukrainian intelligence to supply a constant stream of disinformation, which was regarded as highly trusted because the Russian spies were certain that nobody knew about it. So why would they be making stuff up in front of the thermostat?
Leo Laporte [01:01:46]:
Whoops. That's smart.
Steve Gibson [01:01:50]:
Yeah, love it. Okay, so last year, as we recall, we had some fun looking at a very clear demonstration of the claims being made about quantum factorization. Remember that, you know, the threat posed by the emergence of practical quantum computers is that they may be able to solve the prime factorization problem, upon which rests all of the cryptographic security provided by the invention of RSA-style public key crypto. Last year it appeared that the world had a much longer way to go than was assumed, because that takedown of all of the progress that was being claimed, which we examined carefully, convincingly revealed that not a little bit, but a lot of sleight of hand had been going on behind the scenes, with the use of, for example, highly contrived factorization targets. Now, Google appears to disagree, or perhaps they're just taking the better-safe-than-sorry approach. The news of last week is that Google has moved the so-called Q-Day to 2029, only three years from now. Google expects threat actors to break classic public key encryption using quantum computers by the end of this decade. Okay.
Steve Gibson [01:03:30]:
You know, they've introduced a 2029 timeline to secure their products. That is their deadline to finish securing their products with post-quantum crypto, PQC, protections. Both Chrome and Google Cloud already have PQC protections in place, and Android is getting them later this year. We also know that Apple and Signal have both already added post-quantum crypto to their messaging platforms. In addition, Cloudflare, AWS, Azure, Meta and Zoom all have PQC in place today. Plus, TLS version 1.3, the current and latest version of TLS, is already capable of negotiating post-quantum-crypto-encrypted connections. And Cloudflare tells us that more than half of all the traffic moving through Cloudflare is now already quantum safe. So, you know, hats off.
Steve Gibson [01:04:41]:
You know, we've been covering this move and the need to move toward quantum safety for years now, with the cryptographers getting to work on post-quantum algorithms way before it seemed that we had a problem. I still think it's way before we have a problem, even now, based on, you know, the real evidence that we've seen. But hey, our processors have the power. No reason not to do hybrid crypto schemes where you encrypt with both a pre- and a post-quantum crypto to be safe. And in that case, I don't know what the NSA is going to do with all that data that they've been sucking down, Leo. I mean, I guess historically, the older pre-post-quantum-crypto communications they could decrypt, if they're still of any value.
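The hybrid idea Steve describes is simple to sketch. In the deployed TLS 1.3 hybrids, such as X25519 combined with ML-KEM-768, the two key-exchange outputs are fed together into the key derivation, so an attacker must break both algorithms to recover the session key. The Python below shows only that combining step; the inputs stand in for real key-exchange outputs, and the actual TLS construction differs in its details:

```python
import hashlib
import hmac

# Hybrid key combination, sketched: both the classical and post-quantum
# shared secrets go into one KDF step. Knowing either secret alone still
# leaves an attacker missing half the KDF input, so the result is no
# weaker than the stronger of the two key exchanges.

def combine(classical_ss: bytes, pq_ss: bytes, context: bytes) -> bytes:
    # HKDF-extract-style step over the concatenated secrets.
    return hmac.new(context, classical_ss + pq_ss, hashlib.sha256).digest()
```

Change either input secret and the derived key changes completely, which is exactly why "using both" protects against a break in either one.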
Leo Laporte [01:05:41]:
It'd have to be old.
Steve Gibson [01:05:42]:
Yeah, it's, yeah. Really old.
Leo Laporte [01:05:44]:
Yeah. Do you really think that quantum is going to happen by 2039?
Steve Gibson [01:05:51]:
No, I don't. I do not. Well, Google is saying 2029.
Leo Laporte [01:05:56]:
I'm sorry, 2029.
Steve Gibson [01:05:57]:
Yes. Yeah, I just don't see it. To me, it doesn't look like we're even close. And it's not as if you're able to break down the prime factorization problem into smaller pieces. If you could, we would have.
Leo Laporte [01:06:15]:
Right.
Steve Gibson [01:06:15]:
You know, we would have already decomposed it into something that classic computers can solve. It's intractable right now. So, you know, if they're jumping up and down about factoring 31 and then we find out they cheated, it's like, okay, I have a hard time getting worked up about this. You know, I might get surprised.
Leo Laporte [01:06:38]:
Okay, it's prudent to have post quantum crypto available.
Steve Gibson [01:06:43]:
Why not?
Leo Laporte [01:06:43]:
See any reason not to?
Steve Gibson [01:06:45]:
Why not? Exactly. It doesn't cost us anything at this point. Our chips are fast enough, we've got the algorithms, and we're not just using them, we're using both. So if a problem is found in either one, the other one protects us. So.
Leo Laporte [01:07:01]:
Right.
Steve Gibson [01:07:01]:
Why not?
Leo Laporte [01:07:02]:
Right.
Steve Gibson [01:07:06]:
Okay, so last Tuesday, during the annual RSA Security Conference, where you and Lisa were present to hobnob with many of the podcast network's supporters, the CEO of the UK's NCSC, which is the UK's cybersecurity agency, spoke to the conference. The publication The Record wrote about his presentation. What they said was: Britain's National Cyber Security Centre warned Tuesday that a rise in so-called vibe coding could reshape the software-as-a-service industry while introducing new cybersecurity risks if organizations fail to adapt. The warning, they wrote, coincides with remarks by NCSC Chief Executive Richard Horne at the RSA conference in San Francisco, where he urged security professionals to ensure AI coding tools, quote, become a net positive for security, unquote. Highlighting again how digital societies are facing a surge in cyber attacks exploiting classes of software vulnerabilities that are known about and can be fixed, Horne said there was a risk AI tools would simply propagate the production of insecure software. His comments followed a sharp market sell-off in shares of software and cloud companies in February, driven by investor concerns that vibe coding, a term used to describe software developed using AI tools and minimal human input, could reduce demand for subscription-based software as a service, you know, SaaS platforms. Horne said during his speech, quote: The attractions of vibe coding are clear.
Steve Gibson [01:09:10]:
Disrupting the status quo of manually produced software that is consistently vulnerable is a huge opportunity, but not without risk of its own. The AI tools we use to develop code must be designed and trained from the outset so that they do not introduce or propagate unintended vulnerabilities. In a blog post published alongside the speech, the NCSC itself said advances in AI-assisted software development are already changing how organizations approach writing code, potentially setting the stage for significant disruption of the SaaS model over the next few years. And I'm going to be talking about that as soon as I finish with this, because I think this is really interesting. They said, describing the February sell-off as a billion-dollar wobble and referencing the term SaaSpocalypse. Yes, the SaaSpocalypse. The agency cited anecdotal examples of developers using AI tools to build replacements for SaaS products in a matter of hours, particularly in response to rising subscription costs or feature restrictions. The SaaS industry has dominated enterprise IT by offering subscription-based access to software while offloading infrastructure maintenance and security to vendors. We've talked about all this outsourcing that is now being done. In its blog post last Tuesday, a week ago, the NCSC said this dynamic could shift as AI tools make it faster and cheaper to build bespoke-enough software in house, driven by the same business incentives that triggered the original rise of SaaS companies themselves and the early uptake of cloud computing.
Steve Gibson [01:11:19]:
But it warned that AI-generated code can be unreliable, difficult to maintain, and prone to security flaws, increasing the chance that vulnerable systems could be deployed if those behind the vibe-coded systems were too tolerant of the risks. The NCSC urged organizations to prioritize security as the technology develops, including ensuring AI systems generate secure code by default, verifying the integrity of models, and expanding the use of automated code review and testing. The blog post stated, quote: If security professionals do not lean in from the start, the landscape will evolve without this crucial input, as was arguably the case in the early years of cloud adoption. A challenge the security community will face is that no one yet knows exactly what we need to introduce to ensure the vibe-coded future is a safer one. If we face this challenge head-on from the start, we have a chance to introduce some strong security fundamentals. Unquote. The article finishes saying: The NCSC said any disruption to SaaS is likely to take place over several years, with adoption varying depending on system complexity and organizations' risk tolerance. But the agency said it could easily imagine that the only companies in the sector that will survive will be those that cannot be easily replaced with a vibe-coded alternative, perhaps because their services have themselves become critical to a business, or there are regulatory requirements they meet, or they simply have a critical mass of data across customers. Okay, so there were a number of interesting takeaways here.
Steve Gibson [01:13:25]:
I think the first is the obvious, when you look at it, threat that vibe-coded replacements for software as a service represent. Okay, think about it. Why would any large enterprise rent, under an expensive recurring subscription, what a handful of their in-house coders could whip up overnight, using the benefit of vibe coding AI to create bespoke software that more perfectly fits their needs? I hadn't really stopped to consider it before now, but this entire world of outsourced service industries that has sprung up over the past decade is hugely vulnerable to the emergence of DIY, homegrown, in-house-coded alternatives that vibe coding now makes so easy to create. You know, remember we heard that C-suite executive saying, quote, we're only going to hire someone new if you first demonstrate that AI cannot do their job, unquote. So it's not much of a stretch to imagine a similar executive asking, why should we be paying tens of thousands of dollars per month to this annoying outsourced company when a couple of our guys in the back room can use AI to write the same thing, that we will then own, can customize to work exactly the way we want, and can use going forward without any recurring cost? No more subscription fees. So it does seem pretty clear that this is going to be an accelerating trend in the future.
Steve Gibson [01:15:29]:
But the other shoe to drop was this NCSC CEO's primary concern, which was the threat that carefully created, refined, and secure SaaS solutions, which is what we have today from third parties who wrote these things, you know, ten years ago, carefully and with human programmers and coders, and have since worked all the bugs out, but they're not free, would be too hastily replaced by half-baked, unproven, and insecure vibe-coded clones. One of the tendencies we have seen over and over is that security will truly be sacrificed at the altar of economics. And why is everyone pulling and blindly using libraries from open source repositories? Well, because they appear to work and solve a problem, and the price is right: it's zero. But the truth is that everyone is just holding their breath and hoping for the best, right? Hoping that this library they pulled isn't malware. No, it doesn't seem to be. Nobody else says it is. So, okay. But that's not the way security is obtained and maintained. On the other hand, it doesn't cost anything. It's free.
Steve Gibson [01:17:04]:
It's going to be very interesting, Leo, to see what happens as enterprises develop, for in-house use, the various systems that they've been outsourcing. Overall I think it's probably going to be a win, but I imagine there will be a few stumbles along the way.
Leo Laporte [01:17:25]:
It's not like, you know, the SaaS software you buy from these big companies is necessarily secure, robust and reliable. We talk about all the problems they have all the time on this show. So that's true. You're trusting somebody unless you write it yourself. And nobody can afford to write it themselves. Doing it in house is really hard, I think.
Steve Gibson [01:17:46]:
Well, you know. Yes. But vibe coding offers the opportunity of a bunch of programmers saying, okay, Claude, you know, here's what we need. We need a customer relations management system, and it needs to take our database, and here's the schema, and we want to have this UI, and blah blah blah, and presto bango, you've got an app. I mean, that's Claude.
Leo Laporte [01:18:15]:
Yeah. I've been very tempted to write a sales system for TWiT. We had it. It was written by a low-level employee many years ago in .NET, and he knew what he was doing, I guess, but when he left he said, I'm not maintaining it, so you're on your own. And it has little bugs, like two people can't use it at the same time, or it crashes and you have to do a hard reboot and stuff.
Steve Gibson [01:18:40]:
And that's often the case with your typical bespoke, somebody-wrote-it-in-house software. Yep. We were using one for, well, actually we only just retired it. We call it Dino Database.
Steve Gibson [01:18:57]:
I don't know why.
Leo Laporte [01:18:58]:
You probably wrote it yourself though, Right.
Steve Gibson [01:19:00]:
It was written by the only other coder who ever wrote any code that we actually used, a brilliant guy named Steve Frank. Yeah. And he wrote it in dBASE II, which we then moved to FoxPro. Exactly. And Sue, as recently as the release of 6.1, would look old customers up in it, you know. Yes, it worked great. Sure.
Leo Laporte [01:19:32]:
I wrote a lot of dBASE II software in my time for the radio station I worked at. Yeah. And you know, that's not even really coding, because it's just a database, and you're writing a front end to a database, really. But yeah, I don't know. I'm very bullish on what Claude Code can do, but obviously, you know, it may introduce errors, and pulling these libraries is nowadays really risky. There are solutions, though. People have found solutions. One guy said, well, look, just pin the version, or say you can't download it until it's been out for a week.
Steve Gibson [01:20:08]:
Yes.
Leo Laporte [01:20:09]:
And presumably somebody will have caught on by then, right?
Steve Gibson [01:20:11]:
Yes. And that was the problem with LiteLLM: it was not pinned, and so everybody grabbed the latest. Let's take a break, and then we're going to look at an update on the ClickFix campaigns.
Leo Laporte [01:20:22]:
Okay.
Steve Gibson [01:20:23]:
Because that's still bad.
Leo Laporte [01:20:25]:
Well, and yeah, I'm curious what you think of what Apple's kind of sort of solution was, which I thought was interesting. Steve.
Steve Gibson [01:20:32]:
So last Wednesday Recorded Future posted the results of one of their threat forensics groups that was looking closely at the insidious ClickFix social engineering attacks. In describing the nature of these attacks elsewhere, they wrote: First documented in late 2023, ClickFix has transitioned from a niche social engineering tactic to a cornerstone of the global cybercriminal ecosystem. ClickFix is a social engineering methodology that lures victims into manually executing malicious commands by masquerading as a necessary technical resolution for fabricated system errors or human verification prompts. Which I think perfectly sums up the nature of this. Okay, so here's what we learn from Recorded Future's Insikt Group. They write: Insikt Group identified five distinct clusters leveraging the ClickFix social engineering technique to facilitate initial access to host systems, observed since at least May of 2024. These clusters include those impersonating the financial application Intuit QuickBooks and the travel agency Booking.com. Insikt Group leveraged the Recorded Future HTML Content Analysis data set, which enables systematic monitoring of embedded web artifacts to identify and track new malicious domains and infrastructure. So basically this is their, you know, cyber forensics system. They said: The clusters demonstrate significant operational variance in lure themes and infrastructure patterns, and highlight the technique's evolution, moving past simple verification by visually fooling victims with various fake challenges and demonstrating technical sophistication through operating system detection to tailor execution chains.
Steve Gibson [01:22:57]:
Despite these structural differences, its operation is largely the same, showing that ClickFix's core techniques work across platforms and only the social engineering lure needs to be adapted to the victim. Threat actors manipulate victims into executing malicious obfuscated commands directly within native system tools like the Windows Run dialog box or the macOS Terminal. This living-off-the-land approach allows malicious scripts to execute in memory, effectively bypassing traditional browser security and endpoint controls. Parallel clusters targeting sectors as diverse as accounting, real estate and legal services indicate that ClickFix has transitioned into a standardized high-ROI template, you know, high return on investment template, for both cybercriminal and potentially advanced persistent threat (APT) groups. To protect against these threats, security defenders should move beyond simple indicator blocking and prioritize aggressive behavioral hardening. Key recommendations include disabling the Windows Run dialog box via Group Policy Objects, implementing PowerShell Constrained Language Mode (CLM), and operationalizing digital risk prevention tools such as Recorded Future's Malicious Websites to identify and mitigate threats to your digital assets. Based on increasing use since 2024, Insikt Group assesses that the ClickFix methodology will very likely remain a primary initial access vector throughout 2026, as threat groups continue to social engineer victims to enable exploitation. Looking ahead, Insikt Group anticipates ClickFix lures will become increasingly technically adaptive, incorporating more selective browser fingerprinting, while continuing to use infrastructure that can be built and dismantled quickly.
Steve Gibson [01:25:26]:
In addition to technical refinements, Insikt Group predicts that the social engineering component will continue to evolve, leveraging new techniques to lure victims into executing malicious commands. Okay, well, we all know how annoyed I am with Microsoft. This entirely, you know, detectable and preventable vulnerability is now three years old, and its use has been accelerating rapidly, to the point that this family of readily blocked exploits, as we learned a few weeks ago, now accounts for more than half of all security breaches. Just this one technique. More than half.
Leo Laporte [01:26:18]:
That's how effective it is.
Steve Gibson [01:26:19]:
It is that effective. Exactly. Everybody is going to fall for it unless they have some savvy, and it's like, wait a minute, why, to confirm that I'm human, am I opening the Windows Run dialog, pasting this string in, and then hitting Enter?
Leo Laporte [01:26:41]:
So bad.
Steve Gibson [01:26:42]:
But again, most people are just script followers. I mean, most Windows users don't really know how Windows works, right? I mean, I hear Paul saying the same thing. So by comparison, our listener Jeff Adamson sent a note Friday with a link to a story over at Apple Gadget Hacks with the headline macOS 26.4 Adds Terminal Paste Prompt to Block Pastejacking.
Leo Laporte [01:27:15]:
You think it's specifically aimed at ClickFix?
Steve Gibson [01:27:18]:
Yes, it is. It is exactly aimed at it. And they called it pastejacking. That's the term used in the headline; it's another name for ClickFix. Whenever a macOS user attempts to paste a suspicious string into Terminal, an intercept dialog will be displayed to caution the user about the possible implications of what they're attempting to do.
Steve Gibson [01:27:51]:
The dialog reads Possible Malware Paste Blocked, and it says: Your Mac has not been harmed. Scammers often encourage pasting text into Terminal to try and harm your Mac or compromise your privacy. These instructions are commonly offered via websites, chat agents, apps, files, or a phone call. And then you've got two options. Don't Paste is what's highlighted and recommended, or Paste Anyway. So that's, you know, a nice stop sign that comes up and says, whoa, don't just follow these instructions. Think about this for a second. So, you know, I don't suppose that Windows 10 users, who still comprise one quarter of the Windows desktop population, will ever see Windows' behavior change, right? Microsoft has moved on. But it would sure be nice if Windows 11 users could have this simple exploit prevented by Microsoft caring, which is all it takes, a little bit of care from Microsoft, as Apple has just demonstrated, because this is so easy to fix.
Steve Gibson [01:29:21]:
I also wanted to highlight the tips near the end of the Recorded Future article that talked about available mitigations for this under Windows: disabling the Windows Run dialog box via Group Policy Objects and implementing PowerShell Constrained Language Mode. You know, that won't help the general Windows population, because they're just using Windows at home. But within any enterprise I would jump on both of those immediately. Unless an enterprise IT staff knows that the Windows Run dialog box is needed, and I'm not sure why it would be, disable it. The good news is you could turn it off for all of your Windows users inside an enterprise and immediately prevent this easiest avenue of exploitation, and then also constrain what you can do with PowerShell so that it's basically neutered. I mean, what we see here, the problem is that over time, just like has happened with our iPhones, Windows has gotten incredibly complicated. It still has everything it ever had, and they just keep adding more stuff. And most users just want to run an app, you know, open Word or open email. They don't need all this other crap, and they don't know what it is, right? And it is all dangerous. I don't know, Leo.
Steve Gibson [01:31:03]:
Meanwhile, Reddit has been detecting a growing prevalence of AI posting bots on their site and may need to resort to various proof-of-humanity measures moving forward. And Reddit users are not happy. PCMag provided the details under their headline Reddit Could Soon Require Face ID to Prove You're Not a Bot. They wrote: Reddit, like practically every other social media platform, has been struggling as of late with a deluge of bots and AI-generated content. In a study from last year, roughly 15% of posts on the platform were found to be AI generated. Okay, so that's more frequent than one out of every seven Reddit posts. They wrote: Now it may soon start experimenting with asking users for biometric data, like Face ID or Touch ID, or other forms of passkey technology, to stem the tide of bots. In an interview with the TBPN podcast, first spotted by Engadget, Reddit's CEO Steve Huffman said this tech is the most lightweight way.
Steve Gibson [01:32:31]:
That's his quote. To ensure all users are human. Huffman indicated the platform may use decentralized third-party information providers (oh boy) to verify users' personal details. You know, we've recently been talking about all of the uses to which residential proxies could be put. Bouncing AI bot traffic through such residential proxies makes detecting and blocking them based upon IP address impossible. You just look like any random user, spread around the globe. So yes, some sort of logon-time verification is needed. But we all know the potential downside of using any third-party identity verification system.
Steve Gibson [01:33:26]:
PCMag continues, writing: Steve Huffman told the podcast hosts, part of the promise to users is we don't want to know your name, but we do need to know that you're a person. In 2026, they write, bots are an existential risk to online platforms. Content aggregator Digg, which was in beta ahead of its comeback, was recently forced to pause operations and lay off staff in response to the horde of bots on its platform. Meanwhile, the ability of bots to influence the discourse on Reddit has already been demonstrated. In April of 2025, researchers from the University of Zurich secretly deployed AI-powered bots to influence debate in a subreddit called Change My View, with bots pretending to be a rape victim, a black man who was opposed to the Black Lives Matter movement, and someone who, quote, works at a domestic violence shelter, unquote. Reddit founder Alexis Ohanian said his website using Face ID was not something that he had on his bingo card, but argued that something has got to be done about all the fake botted content in a recent.
Leo Laporte [01:34:55]:
I don't know when that interview was, but Alexis Ohanian and Kevin Rose recreated Digg.
Steve Gibson [01:35:02]:
Yeah.
Leo Laporte [01:35:03]:
And had to shut it down last week.
Steve Gibson [01:35:05]:
Yep.
Leo Laporte [01:35:07]:
Because of the bots. Oh my God, it's a nightmare out there.
Steve Gibson [01:35:13]:
It really, really is. And Leo, when you add AI to the mix, and hundreds of thousands of residential proxies that the AI bots are able to bounce their traffic through, you cannot detect them.
Leo Laporte [01:35:29]:
Yeah.
Steve Gibson [01:35:29]:
I mean, we have an undetectable bot problem. Yeah. So, many Reddit users have already expressed grievances with the move, with one user saying, tell me you want to kill Reddit without telling me. Meaning this kills Reddit, if you start requiring people to de-anonymize themselves.
Leo Laporte [01:35:55]:
Yeah.
Steve Gibson [01:35:55]:
So. The article finishes, saying Reddit would not be the first anonymous platform to start requesting users provide biometric data to prove who they are. For example, Discord earlier this year started to demand that some users provide face scans so its AI tool could determine if they were over 18, as part of efforts to keep minors off the platform. And as we know, it wasn't Discord's AI tool. They farmed it out, and those people got hacked, and some 70,000 users' personal, private information got loose. So, I mean, Leo, there's no solution.
Leo Laporte [01:36:36]:
I mean, yeah, I'm sympathetic. I don't know what these guys are going to do. I mean, there's a lot of it. When I'm on Reddit, half the time I see a post, somebody will say, that's AI. Stop using AI. You're using AI. And I don't know if it's obvious that it's AI or not. If you use bullet points in a post, AI. If you use certain words, AI.
Leo Laporte [01:36:59]:
And I don't know if that's true or not.
Steve Gibson [01:37:01]:
I don't know how you know. And it may once have been true. But AI is a moving target. I mean, if it's doing something that is getting it called out as AI, it's going to change its behavior.
Leo Laporte [01:37:15]:
Right? And I use bullet points and em dashes, and occasionally I'll use the word delve. That doesn't mean I'm AI. So, I mean, the problem is, as AI gets better and better, it looks more and more like average content.
Steve Gibson [01:37:29]:
That's the whole thing. This is a problem that has no solution.
Leo Laporte [01:37:34]:
Yeah, it's an.
Steve Gibson [01:37:34]:
And I don't say that often. I mean, I spent seven years devising a solution for online identity authentication, because I thought there was one. You know, SQRL was that. But I don't see a solution here. I do not see a solution.
Leo Laporte [01:37:49]:
And you're pretty ingenious.
Steve Gibson [01:37:51]:
And that's my point. Usually I'd be saying, well, you know, we could do this or that. No, I don't see a solution.
Leo Laporte [01:37:58]:
Somebody fed the Declaration of independence to an AI detector and it said, well, about 93% chance that's AI written.
Steve Gibson [01:38:05]:
So I thought I saw that. Now, wait, wasn't that 1776? And I don't think that we really,
Leo Laporte [01:38:11]:
we didn't have AI back then. But that Thomas Jefferson, I know. Actually, they said it was 98%. 98% AI generated. Oh, yeah. The problem is that these AI detectors aren't really any good. You know, they don't.
Leo Laporte [01:38:30]:
We can't detect AI. And I think a lot of people assume that something's AI on Reddit when it's not. I mean, how do you know it's one in seven? It could be one in two, or it could be one in a thousand. You just don't know. And I think it's way too draconian to say, okay, well, from now on, everybody has to give us a driver's license before they post. That will kill Reddit. A lot of what Reddit's all about is anonymity.
Leo Laporte [01:38:56]:
Not for any nefarious purpose.
Steve Gibson [01:38:58]:
No, it's just that that's what people want on the Internet. We know that. They want to be able to say what they want to say without being held personally responsible.
Leo Laporte [01:39:06]:
I'll give you a completely innocuous example. I hope I'm not. I think she said this on the show. Paris Martineau is a fan of reality TV shows, and she moderates a reality TV show subreddit, but she doesn't want that to be under her real name. That's a guilty secret. She should be able to do that privately without revealing that, you know, she's she. I should be able to, you know, say that I like, you know, leather boots without having to admit it in public. Wait a minute. I mean, I didn't mean to say that.
Leo Laporte [01:39:41]:
That was a mistake. Do you want to take a break right now, Steve?
Steve Gibson [01:39:45]:
Yep. And then we're going to look at LiteLLM and what happened last week.
Leo Laporte [01:39:50]:
Oh, I can't wait to hear about
Steve Gibson [01:39:51]:
this bullet we dodged. Okay, we're going to look at LiteLLM, and we will take our last break here before we finish this. So, okay, let's start by answering the question: what is LiteLLM, and why would we want it? It was backed initially by Y Combinator, and the LiteLLM page over at Y Combinator describes their project. They said: LiteLLM is an open source LLM gateway with 18K-plus stars on GitHub. Now that's over 41,300. So, yes, very popular. And they wrote: trusted by companies like Rocket Money, Samsara, Lemonade and Adobe. LiteLLM provides an open source Python SDK and Python FastAPI server that allows calling more than 100 LLM APIs (Bedrock, Azure, OpenAI, Vertex AI, Cohere, Anthropic, and on and on) in the OpenAI format.
Steve Gibson [01:41:08]:
They said: We've raised a $1.6 million seed round from Y Combinator, Gravity Fund and Pioneer Fund. Over at GitHub, the About paragraph for LiteLLM says: Python SDK, proxy server (LLM Gateway) to call more than 100 LLM APIs in OpenAI or native format, with cost tracking, guardrails, load balancing and logging. Then they enumerate some: Bedrock, Azure, OpenAI, Vertex AI, Cohere, Anthropic, SageMaker, Hugging Face, vLLM, NVIDIA NIM. Okay, so the LiteLLM site itself largely echoes this and highlights a couple of testimonials. It quotes David Lean, a Netflix staff software engineer, who says: LiteLLM has let my team provide the latest LLM models to our users, usually within a day of them being released. Without LiteLLM this would be hours of work each time a new model is announced. It means we don't have to transform inputs and outputs across providers, and has saved us months of work. And Mark Holt Nuck, a principal architect of generative AI platforms over at Lemonade, says: Our experience with LiteLLM and Langfuse at Lemonade has been outstanding. LiteLLM streamlines the complexities of managing multiple LLM models. Okay, so I think everybody gets the idea, right? With the general chaos that currently reigns across the AI domain, with new models appearing daily, pricing varying, and today's top-dog latest and greatest being tomorrow's you're-not-still-using-that-are-you, at the same time everyone is in a frenzied, frothing and frantic rush to mark out and claim some territory in whatever this is all going to eventually wind up being.
Steve Gibson [01:43:22]:
Essentially, you know, with LLMs being the hottest fungible, commercially tantalizing mystery that humankind has ever created, the last thing anyone wants to be is locked in to yesterday's less glamorous, now underperforming or overpriced model. So, to their credit, the guys at LiteLLM who created this idea were very quick to see a need and an opportunity. They created what is essentially a universal large language model API translator that allows front-end developers to code to a single fixed model API, the one originally developed by OpenAI, by default. And the LiteLLM proxy shim allows any other model to be swapped in behind it on the back end without needing to recode any of the front end. The famous Codecademy folks have a page titled What is LiteLLM and How to Use It, where they write: LiteLLM is an open source Python library that acts as a unified interface for large language models. It allows us to connect with multiple AI providers, such as OpenAI, Anthropic, Google Gemini, Mistral, Cohere, and even local models through Ollama, all using a single standardized API. Working with multiple LLMs results in juggling different API formats, authentication methods and SDKs.
Leo Laporte [01:45:15]:
Is that the ice cream truck? Is that Laurie?
Steve Gibson [01:45:21]:
That is my lovely wife, who has forgotten that I'm in the middle of a podcast right now. She just put her hands over... hi, Laurie. Oh, she hung up. So anyway, they said: This usually requires code rewrites, new dependencies and manual adjustments. LiteLLM resolves this by acting as a bridge between the application and major LLM providers, letting you manage requests, responses and errors consistently. So, you know, basically it's a big switching hub that decouples what you're doing on the application end using a large language model from whichever large language model you want to use. So it's kind of a no-brainer, right? Like, why would you not want to use this? They've been working on it since the winter of 2023. And boy, as you might imagine, the challenge of supporting an exploding number of individual, varying and evolving AI LLMs, each with their own API, requires a great deal of never-ending work. I don't want that job. They're hiring, by the way. But that's the path these guys have taken, and until recently things have been pretty smooth sailing.
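Steve's description of a "universal API translator" boils down to a simple dispatch pattern: callers use one fixed request shape, and per-provider adapters translate behind the scenes. Here's a toy, self-contained sketch of that gateway idea in plain Python. To be clear, everything here (the adapters, the prefix routing, the fake replies) is made up for illustration and is not LiteLLM's actual code:

```python
# Toy sketch of an LLM gateway: one front-end call, pluggable back ends.
# The adapters are stand-ins; a real gateway would call each provider's SDK.

def _openai_style(model, messages):
    # Pretend OpenAI-shaped backend: echoes the last message uppercased.
    return {"provider": "openai", "model": model,
            "reply": messages[-1]["content"].upper()}

def _anthropic_style(model, messages):
    # Pretend Anthropic-shaped backend with the same response shape.
    return {"provider": "anthropic", "model": model,
            "reply": messages[-1]["content"].upper()}

# The model-name prefix picks the backend, so swapping models needs no
# front-end changes, which is the gateway's whole selling point.
ADAPTERS = {
    "gpt": _openai_style,
    "claude": _anthropic_style,
}

def completion(model, messages):
    """Single fixed entry point, regardless of which provider serves it."""
    for prefix, adapter in ADAPTERS.items():
        if model.startswith(prefix):
            return adapter(model, messages)
    raise ValueError(f"no adapter for {model}")
```

With this shape, changing `model="gpt-4o"` to `model="claude-3"` reroutes the same application code to a different provider, which is the decoupling Steve describes.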
Steve Gibson [01:46:51]:
So what happened last week? Okay, let's start with TechCrunch's overview, and then we'll dig a bit deeper. TechCrunch wrote: This week some really atrocious malware was discovered in an open source project developed by Y Combinator graduate LiteLLM. LiteLLM gives developers easy access to hundreds of AI models, blah blah blah. It's a breakout hit, writes TechCrunch, downloaded as often as 3.4 million times per day, according to Snyk. That's, you know, Snyk, we've talked about them before, one of the many security researchers monitoring the incident. The project had 40,000 stars on GitHub and thousands of forks. The malware was discovered, documented and disclosed by research scientist Callum McMahon of Future Security.
Steve Gibson [01:47:54]:
I'm sorry, Future Search, a company offering AI agents for web research. The malware slipped in through a dependency, meaning other open source software that LiteLLM itself relied upon. It then stole the login credentials of everything it touched. And this is, Leo, you know, your point: we don't yet really know how much damage was done. TechCrunch said: With those credentials, the malware gained access to more open source packages and accounts, to harvest more credentials, and so on. The malware caused McMahon's machine to shut down after he downloaded LiteLLM. That event prompted him to investigate and discover it.
Steve Gibson [01:48:51]:
Ironically, a bug in the malware caused his machine to blow up. Because that bit of nasty code was so sloppily designed, he, as well as famed AI researcher Andrej Karpathy, concluded it must have been vibe coded. As you said, Leo, the LiteLLM developers have been working nonstop this week to rectify the situation, and the good news is that it was caught relatively fast, likely within hours. Okay, so last Tuesday, as mentioned by TechCrunch, this developer, Callum McMahon with Future Search, explained what he had discovered and how. At the end of a separate but related posting, Callum explained their use of LiteLLM, and knowing what we now know about LiteLLM, it's exactly what we would expect. He said: We use LiteLLM to let us use models from a wide range of providers, letting us strike the best balance between quality, speed and cost. You know, in other words, the current LLMs are just fungible. So here's what happened. Callum's posting was titled No Prompt Injection Required, where, you know, kind of tongue in cheek, he wrote: Earlier today I got taken out by malware on my local machine.
Steve Gibson [01:50:25]:
After identifying the malicious payload, I reported it directly to the PyPI security team, who credited our report and quarantined the package, as well as to the LiteLLM maintainers. I wrote a blog post that became the primary source cited by the Register, Hacker News, Snyk and others. The play-by-play is pretty interesting looking back. It started with my machine stuttering hard, something that really shouldn't be happening on a 48-gig Mac. htop took tens of seconds to load. The CPU was pegged at 100%. All signs I'd be working on my local environment for a while, meaning things got messed up, he said. After failing to soft-reset my Mac, I took a final picture for evidence and then hard reset it.
Leo Laporte [01:51:26]:
Wow.
Steve Gibson [01:51:27]:
So he said: So far the clues had been Cursor asking me for network access right as the machine was freezing up. The process list showed a bunch of Python commands, all exec'ing a Base64-encoded string, and 11,000 processes running. He said: I set ulimit to 16K for machine learning workloads, so this was partly expected. In other words, he has his system configured to allow, you know, 16,000 different processes, but he had 11,000 running for no apparent reason at that moment. He said: On restart, I asked Claude to investigate, after going down a rabbit hole on the wrong shutdown due to my force shutdown, meaning that Claude started to look at something different because he had done a force shutdown.
Leo Laporte [01:52:34]:
There were two. Yeah.
Steve Gibson [01:52:35]:
Yes.
Leo Laporte [01:52:36]:
Crash.
Steve Gibson [01:52:37]:
Not generating the expected logs, he said. I presented it with the start of the Base64 string, just enough to decode "import subprocess, import tempfile".
Leo Laporte [01:52:51]:
Oh boy.
Steve Gibson [01:52:52]:
Before the remaining text went off screen. Claude then became adamant that this was its own doing, the standard Claude Code way of running bash commands to escape control characters. Despite the many bugs I've encountered with that CLI, I wasn't buying this explanation. Further Claude Code probing eventually found the offending cause, the rogue package buried within my uv cache, something I would never have found on my own. So he's crediting Claude with helping him forensically diagnose what happened to him. He said: Two minutes later, it had reproduced the entire malware trigger within a local container to double-check its claims.
Steve Gibson [01:53:49]:
And a further two minutes later, I had a blog post on our site detailing the specifics of the malware, to share as a warning to others. Claude even proactively suggested the emails of both the PyPI security team, who were quick to quarantine the package, as well as the LiteLLM maintainers.
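Callum's decode-just-the-prefix trick is worth noting: Base64 decodes in four-character groups, so any multiple-of-four prefix of a suspicious string yields its leading bytes, often enough to triage it. Here's a minimal sketch; the payload here is a harmless placeholder I made up, not the actual malware string:

```python
import base64

# Stand-in for the kind of encoded one-liner seen in a process list.
# This placeholder is harmless; we never exec it, only inspect it.
suspicious = base64.b64encode(
    b"import subprocess\nimport tempfile\nprint('...')"
).decode()

# Base64 maps every 3 input bytes to 4 output characters, so decoding
# the first 32 characters recovers the first 24 bytes of the payload.
prefix_bytes = base64.b64decode(suspicious[:32])
print(prefix_bytes)  # prints b'import subprocess\nimport'
```

Seeing `import subprocess` at the top of a blob that some process is silently exec'ing is exactly the kind of red flag that justified not buying Claude's "that was me" explanation.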
Leo Laporte [01:54:11]:
By the way, that's PyPI. I just want to make that clear, because there is also something called PyPy.
Steve Gibson [01:54:15]:
Oh, okay. Yes. PyPI. Good, thank you.
Leo Laporte [01:54:17]:
That's the library.
Steve Gibson [01:54:18]:
Yeah. He says: So what actually happened? Okay, I'll just interrupt to note that Callum is about to start referring to MCPs, which is the Model Context Protocol. The Model Context Protocol site explains it's an open source standard for connecting AI applications to external systems. Using MCP, AI applications like Claude or ChatGPT can connect to data sources (local files, databases), tools (like search engines and calculators) and workflows (you know, like using specialized prompts), which enable them to access key information and perform tasks. And they said, think of MCP like a USB-C port for AI applications. Again, another standardization, which is so very powerful.
Steve Gibson [01:55:16]:
They said: Just as USB-C provides a standardized way to connect electronic devices, MCP provides a standardized way to connect AI applications to external systems. Okay, so armed with only that much understanding, what Callum explains can make sense, and it's not necessary for us to understand it more deeply. Callum says: The root cause was mundane. MCP clients like Cursor, Claude Code and others use local MCP servers via some executor tool, such as uvx for Python or npx for Node.js. When you run an MCP via uvx, it automatically downloads the dependencies of that MCP and runs the given command. Unfortunately, our mostly deprecated MCP server had an unpinned dependency on a LiteLLM package. When my Cursor IDE tried to auto-load the MCP server, uvx stepped in to download the latest LiteLLM version. Again, because it was unpinned, it wasn't saying I want this version; it was saying give me the latest, which, he writes, was malware uploaded to PyPI by hackers just minutes earlier. Minutes earlier. The seamless ergonomics of uvx meant I became one of the lucky beta testers of the freshly released malware.
Leo Laporte [01:57:03]:
Congratulations.
Steve Gibson [01:57:06]:
Okay, so, in other words, exactly the sort of textbook, classic supply chain attack we've discussed so many times in the past. In this case, it wasn't a dependency such as a library that would be downloaded, compiled and linked into a result, like Log4j was. It was a working piece of tooling, the LiteLLM package. And by being unpinned, Callum's dependent packages were not saying we want this exact version, so the default behavior was to grab a copy of the current one, and in this case that latest and greatest had been deliberately compromised by bad guys. Callum continues, saying: This is great, too. A sloppy, likely vibe-coded mistake in the actual malware implementation led it to turn into what he called a fork bomb. It installs a file called litellm_init.pth into the site-packages directory. Python automatically executes .pth files on every interpreter startup.
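That .pth behavior is real and easy to demonstrate safely: Python's `site` machinery executes any line in a `.pth` file that begins with `import` whenever it processes a site directory, which is the hook the malware abused to run on every interpreter startup. Here's a sketch using a temporary directory and a harmless environment-variable marker instead of a malicious payload (the filename and marker are made up for the demo):

```python
import os
import site
import tempfile

# Write a .pth file whose single line starts with "import". The site
# module executes such lines when the directory is processed, which is
# exactly the persistence trick described above; here the "payload"
# just sets a harmless marker variable.
d = tempfile.mkdtemp()
with open(os.path.join(d, "demo.pth"), "w") as f:
    f.write("import os; os.environ['PTH_DEMO_RAN'] = '1'\n")

# addsitedir() processes the directory's .pth files immediately, the
# same way site-packages is processed at interpreter startup.
site.addsitedir(d)

print(os.environ.get("PTH_DEMO_RAN"))  # prints 1
```

Because site-packages gets this treatment on every startup, a malicious `.pth` dropped there runs in every Python process, including the child processes it spawns, which is how the fork bomb Steve describes next took off.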
Steve Gibson [01:58:26]:
The first thing it does is spawn a child Python process, and that child also triggers litellm_init.pth, since it's still in site-packages, which spawns another child, which spawns another, which spawns another, thus leading to the only sign I would have noticed that the malware was running. That's where those 11,000 instances came from, and the reason his 48-gig Mac crashed: it got into an infinite loop of spawning these litellm_init.pth processes, and the system crashed. As Andrej Karpathy pointed out on X, without this error it would have gone unnoticed for much, much longer. The malware's own poor quality is what made it visible and discoverable. So we have to ask ourselves, what if the author of this malware had not made that mistake? So what's the takeaway? He writes: We've since moved to a remote MCP architecture.
Steve Gibson [01:59:48]:
The server doesn't run on the user's machine anymore, which collapses this entire attack surface. No local code execution means a poisoned dependency can't touch your file system or request network access from your OS. And it's much more localized, to one audited version that we have under control. However, sometimes you can't reliably do that. There are advantages and disadvantages of local versus remote MCP servers, and in that case you still need to do what you can to mitigate this risk. He finishes, saying: I don't think there's anything new to say here.
Steve Gibson [02:00:29]:
It's the same thing we've been doing everywhere else to keep us safe: reduce the attack surface, pin your dependencies, or even better, use lock files with checksums, and audit packages before upgrading. And when Claude tells you everything is fine, maybe ask it again. He said, we analyzed the blast radius of this attack: 47,000 downloads in 46 minutes, 88% of dependent packages unprotected. So, Leo, let's take our final break, and then we will continue looking at a little more of the forensics of this mess.
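[To make the "lock files with checksums" advice concrete, here is a sketch of the check pip performs when you install in hash-checking mode (--require-hashes): every artifact must match a SHA-256 digest pinned in the lock file, so a silently re-published "latest" version fails closed. The package name, file names and byte strings below are invented for illustration.]

```python
import hashlib

# A pretend lock file: artifact name -> pinned SHA-256 digest.
PINNED = {
    "example-pkg-1.0.0.tar.gz":
        hashlib.sha256(b"known-good release bytes").hexdigest(),
}

def verify(filename: str, data: bytes) -> bool:
    """Accept an artifact only if it is pinned AND its hash matches."""
    expected = PINNED.get(filename)
    return expected is not None and hashlib.sha256(data).hexdigest() == expected

print(verify("example-pkg-1.0.0.tar.gz", b"known-good release bytes"))  # True
print(verify("example-pkg-1.0.0.tar.gz", b"tampered release bytes"))    # False
print(verify("example-pkg-1.0.1.tar.gz", b"anything"))                  # False: unpinned
```

Note the fail-closed behavior: both a tampered artifact and a brand-new, never-audited version are rejected, which is precisely what would have stopped the poisoned releases from being pulled automatically.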
Leo Laporte [02:01:09]:
Wow, this is amazing. You know, I saw Andrej Karpathy's tweet almost instantly, thank goodness, and immediately went to Claude and said, hey, is there any LiteLLM anywhere in my system? And it said, no. The name is in your package list, but you never downloaded it, so you're okay. I know. It was terrifying. It was terrifying.
Leo Laporte [02:01:32]:
Now let's get back to Steve Gibson and a further dissection. By the way, I really appreciated Colm's write-up. It was a very good write-up. He did it very quickly and got the word out to the community. Forty-six minutes after he discovered it, it was taken down, which was, thank goodness, because that thing...
Steve Gibson [02:01:51]:
Yeah, and I also thought he noted that Claude wrote the email for him. So I realized that speed of action is one of the things that we get from AI also.
Leo Laporte [02:02:02]:
Oh, absolutely. It's a lever, it's a tool, and used properly, it really adds to the power of what you can do. It also adds to the power of what bad guys can do, and that's the double-edged sword of all this.
Steve Gibson [02:02:16]:
All right, okay, so let's take a closer look at the malware itself. For that we turn to Trend Micro, who titled their coverage of this "Your AI Gateway Was a Backdoor: Inside the LiteLLM Supply Chain Compromise," which they tease with the follow-on: Team PCP, those are the bad guys, orchestrated one of the most sophisticated multi-ecosystem supply chain campaigns publicly documented to date. It cascaded through developer tooling to compromise LiteLLM and exposed how AI proxy services that concentrate API keys and cloud credentials become high-value collateral when supply chain attacks compromise upstream dependencies. They led their coverage with three key takeaways. First, they said: LiteLLM, a widely used AI proxy package, was compromised on PyPI, with two of its versions containing malicious code. These LiteLLM versions deployed a three-stage payload: credential harvesting, Kubernetes lateral movement, and a persistent backdoor for remote code execution. Sensitive data from cloud platforms, SSH keys and Kubernetes clusters were targeted and encrypted before exfiltration.
Steve Gibson [02:03:49]:
Second point: the LiteLLM incident was part of a broader campaign by the criminal group Team PCP, which has demonstrated deep understanding of Python execution models, adapting their attack rapidly for stealth and persistence, in this case a little too rapidly. And third: Team PCP has previously compromised security tools like Trivy and Checkmarx KICS to steal credentials and propagate malicious payloads. Attackers leveraged compromised CI/CD pipelines and security scanners to escalate privileges and publish trojanized packages. So here's what more we learned from Trend Micro. They explain: On March 24, and that's exactly last week, last Tuesday, production systems running LiteLLM started dying, and just as happened to Colm, engineers saw runaway processes, CPUs pegged at 100%, containers killed by out-of-memory errors. The stack traces pointed to the LiteLLM package. A popular Python package downloaded 3.4 million times per day, which serves as a unified gateway to multiple LLM providers, was compromised on PyPI. Upon analysis, it was found that versions 1.82.7 and 1.82.8 contained malicious code that stole cloud credentials, SSH keys and Kubernetes secrets.
Steve Gibson [02:05:36]:
The malicious versions deployed a three-stage payload: a credential harvester targeting over 50 categories of secrets, a Kubernetes lateral movement toolkit capable of compromising entire clusters, and a persistent backdoor providing ongoing remote code execution. And just to pause: think of it, if this had not been caught, 3.4 million instances downloaded per day would have been infected with this nasty malware. I mean, this is bad malware. They wrote: This compromise was not an isolated event. It was the latest link in a cascading supply chain campaign by a threat actor tracked as Team PCP. This post traces the cascade from its origin, the open source vulnerability scanner Trivy, and then presents our technical analysis of the LiteLLM payload. Team PCP orchestrated one of the most sophisticated multi-ecosystem supply chain campaigns publicly documented to date. The campaign spanned PyPI, npm, Docker Hub, GitHub Actions, and OpenVSX in a single coordinated operation.
Steve Gibson [02:07:01]:
While it did not specifically target AI infrastructure, the campaign's cascade through the developer toolkit caught LiteLLM within its blast radius and exposed how AI proxy services that concentrate API keys and cloud credentials become high-value collateral when supply chain attacks compromise upstream dependencies. And I'm not going to share all the details, because we don't need that, but they wrote: Key sections of this blog entry include a technical analysis of the malicious multi-stage payload and its impact on AI environments, a timeline, an operational review of Team PCP's campaign, and a deep dive into how security tools themselves became attack vectors. Trend AI Research's analysis of the LiteLLM compromise also covers attribution challenges, gaps in public threat intelligence, and actionable defense strategies. Detailed indicators of compromise and MITRE ATT&CK mappings have been provided, but for an even more comprehensive understanding of the security incident, reach out to Trend AI Research for the full technical report. Okay, so that's much deeper than we need to dive for all that. But what they uncovered and reported about the root source of the vulnerability was interesting. Under their heading "How Your Security Scanner Can Become the Attack Vector," they wrote: Trivy is an open source vulnerability scanner developed by Aqua Security. It scans container images, file systems and infrastructure-as-code for security vulnerabilities, and it is integrated into the CI/CD pipelines of thousands of software projects via the trivy-action GitHub Action. So, okay, the point is, Trivy was the root of this compromise, so they explain:
Steve Gibson [02:09:17]:
Security scanners are uniquely dangerous supply chain targets by design. They require broad read access into the environments they scan, including environment variables, configuration files, and runner memory. When a scanner is compromised, it becomes a credential harvesting platform with legitimate access to secrets. In late February 2026, an actor operating under the handle megagame10418 exploited a misconfigured pull_request_target workflow in Trivy's CI, their continuous integration, to exfiltrate the Aquabot personal access token. Aqua Security disclosed the incident on March 1 and initiated credential rotation. However, according to Aqua's own post-incident analysis, the rotation wasn't atomic, and attackers may have been privy to refreshed tokens. Okay, now that's an important point, so I want to pause here to explain that.
Steve Gibson [02:10:37]:
We've talked about the concept of so-called atomic operations. The name obviously comes from the word atom, and it's meant to imply that the operation cannot be further divided into smaller pieces. Molecules, of course, being collections of atoms, are divisible; not so the atom. So to clearly illustrate the occasional need for atomic operations, say that a computer program needed to count up to a certain number, but no more. If the program was single-threaded, meaning that it only ever had one thing going on inside itself at once, that would be easy to do. The program would read the value of the thing that's being counted. If it was not already at its upper count limit, then the program would increment it to its next value.
Steve Gibson [02:11:47]:
If it was already at the upper limit, it would just leave it there. But now imagine what happens if there's a lot more going on in the program, with multiple simultaneous execution threads running around, perhaps because the CPU has multiple cores, or the application itself has many threads running. In this environment, there's a chance that both processors would wish to increase the count at the same instant. So they would both be executing the exact same code at the same time. They would both read the counter's value, they would both see that it had not yet reached its limit, so they would both increment it, thus increasing its initial value by 2. But if the counter had been previously sitting at 1 below its limit, that increase by 2 would move it up past the limit. A very subtle bug.
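[The "test the value, then maybe increment it" pattern being described can be sketched like this. Holding a lock makes the read-test-write sequence atomic, so even many threads hammering the counter at once can never push it past the limit. This is a minimal illustration, not production code.]

```python
import threading

LIMIT = 100_000
counter = 0
lock = threading.Lock()

def bump_many(times: int) -> None:
    """Try to increment the shared counter, but never past LIMIT."""
    global counter
    for _ in range(times):
        with lock:  # the whole test-plus-increment is one indivisible unit
            if counter < LIMIT:
                counter += 1

# Four threads attempt 120,000 total increments against a 100,000 cap.
threads = [threading.Thread(target=bump_many, args=(30_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)  # -> 100000: the lock prevents any overshoot
```

Remove the `with lock:` line and the race described above becomes possible: two threads can both observe `counter == LIMIT - 1` and both increment, landing past the cap.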
Steve Gibson [02:12:56]:
These sorts of so-called race conditions have historically been the source of many hard-to-find problems. They're the sort that never happen while you're watching, while you're developing the code, but they somehow always occur when you're on stage demonstrating what it is that you've got. So in our example, that "test the value and maybe increment it" would need to be made atomic, so that the testing and the incrementing could not be broken apart and performed separately, even by different processors executing at the same time. That operation could only be done by one processor or execution thread at a time, so the other processor trying to do it would be briefly stalled until the first processor had finished with that atomic operation. At that point the second processor could proceed, and if it saw that the variable was already at its limit, it would not also increment it. Okay, so we left off with Trend Micro noting that Aqua Security disclosed the incident on March 1 and initiated credential rotation. However, according to Aqua's own post-incident analysis, the rotation was not atomic, and attackers may have been privy to refreshed tokens.
Steve Gibson [02:14:25]:
In other words, somebody might have still been logged in when a token was updated, and then they would have grabbed that. Trend Micro then continues: The gap, that is, this race condition gap, proved decisive. On March 19 at 17:43 UTC, Team PCP used still-valid credentials to force-push 76 of 77 release tags in the trivy-action repository and all 7 tags in setup-trivy, whatever those details mean. But it meant two malicious commits containing a multi-stage credential stealer. The malicious code scraped the runner worker process memory for secrets, harvested cloud credentials and SSH keys from the file system, encrypted the bundle using AES-256-CBC with an RSA-4096 public key, and exfiltrated it to a typosquatted domain, scan-aquasecurity.org. According to analysis by CrowdStrike, the legitimate Trivy scan still ran afterward, producing normal output, leaving no visible indication of compromise. Okay, in other words, because Aqua Security was, for whatever reason, logistically unable to rotate every single credential at once when no one was actively logged on, the bad guys were able to maintain their corrupting persistence. Trend Micro finished this portion of their write-up by writing: This is the meta attack. A security scanner, the tool defenders rely on to catch supply chain compromise, itself became the entry point for a supply chain compromise. The Trivy compromise in GitHub Actions gave the attacker the keys to publish arbitrary versions of LiteLLM to PyPI.
Steve Gibson [02:16:42]:
Everything that followed was exploitation of that initial foothold, and LiteLLM was just a coincidental casualty of this. They said the lesson is uncomfortable but critical: your CI/CD security tooling has the same access as your deployment tooling. If it's compromised, everything downstream is exposed. And what we're now seeing is that the bad guys have gotten sophisticated enough to take advantage of that. I mean, it is truly terrifying. So what we see is that the enabling of this attack on LiteLLM had nothing to do with AI per se.
Steve Gibson [02:17:27]:
It's just its popularity that would have allowed it to explode at 3.4 million instances of compromise per day, had the bad guys not made that crucial mistake that crashed the machines it was trying to compromise. So after providing a fully detailed forensic analysis of this malware campaign, Trend Micro concluded with a summary and recommendations. They wrote: As AI and machine learning tooling proliferates across enterprise CI/CD pipelines, the attack surface expands with it. The tools developers install to interact with AI systems, proxy gateways, model routers, experiment trackers and inference servers, handle high-value secrets by design. Supply chain attacks against these tools inherit the trust and access of the AI infrastructure itself. So again, AI is not to blame here. It's really just a case of the more tools you're using, the more exposure there will be when any one of them might be compromised. Trend Micro continued, saying: The malicious payload analyzed in this report is a direct exploitation of the systemic secret management failures extensively documented in prior Trend AI research.
Steve Gibson [02:18:58]:
As previously described, developers have adopted .env files so profusely that they have forgotten their sensitivity, leaving them exposed, and threat actors are actively scanning for exactly those files. The harvester analyzed here operationalizes that attack surface at scale. It performs exhaustive file system walks targeting .env, .env.local, .env.production and .env.staging files across up to six directory levels, while simultaneously extracting AWS credentials, cloud provider tokens, Kubernetes service account secrets, CI/CD pipeline configurations and database connection strings: the same categories of secrets Trend AI Research previously identified as most commonly stored in plain text inside .env files. And they finish off with some well-reasoned security recommendations. They said: This case highlights the risk of building an entire ecosystem on top of fragile trust. The LiteLLM hack is just the latest example of attackers exploiting the reliance on open source repositories and poor secret hygiene.
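[A defender-side sketch of the same filesystem walk Trend Micro describes: enumerate the .env-style files an attacker running in your environment could have read, so you know your own exposure. The target file names and the six-level depth limit mirror their description; the miniature directory tree in the demo is invented.]

```python
import os
import tempfile

TARGETS = {".env", ".env.local", ".env.production", ".env.staging"}
MAX_DEPTH = 6  # the harvester reportedly walked up to six directory levels

def find_env_files(root: str) -> list[str]:
    """Walk `root` and report any .env-style secrets files, depth-limited."""
    base_depth = root.rstrip(os.sep).count(os.sep)
    hits = []
    for dirpath, dirnames, filenames in os.walk(root):
        if dirpath.count(os.sep) - base_depth >= MAX_DEPTH:
            dirnames.clear()  # stop descending past the depth limit
            continue
        hits.extend(os.path.join(dirpath, f) for f in filenames if f in TARGETS)
    return sorted(hits)

# Demo on an invented miniature project tree:
base = tempfile.mkdtemp()
os.makedirs(os.path.join(base, "app", "config"))
open(os.path.join(base, "app", ".env"), "w").close()
open(os.path.join(base, "app", "config", ".env.production"), "w").close()
print(len(find_env_files(base)))  # -> 2
```

Running something like this over your own checkouts is a sobering exercise: every file it finds is plaintext the harvester would have swept up.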
Steve Gibson [02:20:38]:
Security is not an afterthought you can outsource entirely to a vulnerability scanner. So the apparently very highly skilled Team PCP, these attackers, appear to have been in a bit of a hurry. This led them to deploy otherwise very potent and sophisticated malware, which must have taken a lot of time to create, containing a flaw that, unfortunately for them and thank God for us, immediately caused that malware to draw attention to itself.
Leo Laporte [02:21:24]:
This is why you want to test before you push to production. Man, they should have known.
Steve Gibson [02:21:29]:
Yep, they got bit by their own haste, by being in a race. And as a result, the infection was almost immediately spotted and stopped. Had the bad guys not made that mistake, at a download rate of 3.4 million instances of the infected LiteLLM per day being used, you know, the damage that was all set up and engineered to occur likely would have, and the resulting mess would have been far worse in the truest sense. As I said at the top of the show, we have dodged another bullet.
Leo Laporte [02:22:11]:
Oh yeah.
Steve Gibson [02:22:12]:
There can be no question that the entire industry has built an ecosystem upon which it has become dependent, if you'll pardon the pun, or double entendre, I guess, whose security guarantees are truly fragile. These are fragile guarantees. We're essentially hoping for the best, because the goodies are just too enticing for us to resist. Or, phrased another way, the cost to us today of deploying truly secure solutions prices them out of reach, rendering them impractical. So we knowingly and deliberately create dependencies upon sprawling packages over which we have no oversight or direct control. All we can really do at this point is hope that our luck holds.
Leo Laporte [02:23:08]:
It's not just the packages, it's also automation. I mean, it sounds like it was CI/CD. And by the way, this happened a couple of weeks ago, a CI/CD issue with GitHub Actions. And I keep seeing this again and again. And I understand why: CI/CD is incredibly useful. It's an automated way of building and delivering your software. That's what it stands for.
Leo Laporte [02:23:36]:
Continuous integration, continuous delivery, or continuous deployment. But if you're automating to that degree and you're not paying attention, well, there's some
Steve Gibson [02:23:47]:
real risk. And AI is bringing us another layer of automation.
Leo Laporte [02:23:52]:
Right?
Steve Gibson [02:23:52]:
I mean, people are just stunned by what it does for them. They don't even understand what the installation on their local system is. It's just like, you know, Claude will
Leo Laporte [02:24:08]:
automatically push to GitHub. So all my repos are pushed to GitHub, and it was automatically building them; it was setting up GitHub Actions and building the software. I didn't even know how to do that. It just did it for me. And so, yes, we're kind of giving over a lot of our agency to these systems, partly because it's so complicated these days.
Steve Gibson [02:24:32]:
So Trivy created a super complicated system
Leo Laporte [02:24:36]:
which was a security tool.
Steve Gibson [02:24:38]:
Scanner. It was an open source scanner used by these systems to scan themselves for malware.
Leo Laporte [02:24:47]:
So we're.
Steve Gibson [02:24:48]:
It itself was compromised.
Leo Laporte [02:24:50]:
It was compromised. And then it pushed a bad version of LiteLLM, because...
Steve Gibson [02:24:56]:
Because LiteLLM used it to scan for malware.
Leo Laporte [02:25:00]:
Right. And so it. In the.
Steve Gibson [02:25:01]:
And so it had access to.
Leo Laporte [02:25:03]:
So in the GitHub Actions, the Trivy keys were compromised. They rotated them as best they could, but they didn't get them all. Apparently the bad guys, this Team PCP, got the key and then used that key to compromise Trivy, to inject malware into LiteLLM as part of the CI/CD pipeline.
Steve Gibson [02:25:24]:
Right.
Leo Laporte [02:25:25]:
Wow. That's actually pretty sophisticated.
Steve Gibson [02:25:28]:
Oh, no. As Trend said, these guys really know what they're doing. I mean, this is, you know, unfortunately, the goodies are so big. I mean, they know how many copies of this stuff are being downloaded per day.
Leo Laporte [02:25:45]:
This stuff exfiltrated tokens, SSH keys, crypto passwords, Kubernetes keys. It basically took all the secrets on your system and sent them to the...
Steve Gibson [02:25:56]:
Encrypted them and sent them off to a spoofed domain.
Leo Laporte [02:26:01]:
I mean, imagine if this had gone on for a day or two. The nightmare. I mean, I'm surprised we haven't heard more pain from the 47,000 who installed it. And I understand now why they were in a hurry: they had a compromised key to Trivy, but they didn't know how long that key would stay good.
Steve Gibson [02:26:16]:
Yeah.
Leo Laporte [02:26:17]:
So they said, we've got to strike while the iron's hot. Quick, get some malware out there.
Steve Gibson [02:26:21]:
I mean, it literally must not have been tested, because apparently, immediately when you ran it, it crashed your system. And I think that explains why we haven't heard anything from the 47,000 instances.
Leo Laporte [02:26:33]:
I mean, everybody crashed, came out of
Steve Gibson [02:26:35]:
the gate and just stumbled.
Leo Laporte [02:26:36]:
Yeah, it didn't work. Yeah, because they rushed. They said, quick, we've got one minute to take advantage of this. Let's push something out. And they probably told Claude, write something real quick that will get on a system, encrypt the keys, and send them to this address. Wow. Ay yai yai.
Leo Laporte [02:26:56]:
Caramba. Well, I'm glad, you know, it's really good. Thank you. Colm's write-up was great, but I didn't understand the Trivy part of it, so thank you for explaining that, and the Trend Micro report. This is why we listen every Tuesday to Security Now. I hope you'll tune in next Tuesday. If you want to share this show with your friends and family...
Leo Laporte [02:27:17]:
There are many ways you can get it. I would suggest going to Steve's site. He has some unique versions of Security Now at GRC.com. Of course, while you're there, it's an opportunity to pick up SpinRite, the world's best mass storage maintenance, recovery and performance enhancing utility, which everyone should have. If you have mass storage, and you do, I presume, unless you're committing everything to memory, you need it. You should have SpinRite. And he also has, of course, the brand new DNS Benchmark Pro, which just came out, all at GRC.com.
Leo Laporte [02:27:51]:
There are a lot of other things on the site, including a huge number of freebies that Steve just generously gives away, like the very famous ShieldsUp, to test your router security while you're there. The show is also there. He has two, three, four unique versions. All his versions are unique. A 16-kilobit audio version, which is low quality, admittedly, but small and easy to download for bandwidth-constrained individuals. There's a 64-kilobit full-quality audio version. There are also the show notes, which he writes every week. He puts a lot of effort into this.
Leo Laporte [02:28:25]:
He leaves his family life on hold just for this: 20 pages of goodness. And the show notes are very complete, lots of links, pictures, information. You can get those at GRC.com. He also has the transcripts, written by Elaine Farris, an actual human being, and they're very good. Those take a few days, but after the show's out for a few days they'll also be at GRC.com. If you'd like to get the show notes emailed to you automatically, you can do that. Steve has a mailing list.
Leo Laporte [02:28:53]:
If you go to GRC.com/email, the real purpose of that page is to whitelist your email address so you can send him comments, pictures of the week, things like that. But you can also check a box below it that says "send me the show notes." There's another box below that, "send me announcements about new software," which come out about once an eon, so you don't have to worry about getting too much email on that one. But you will get a weekly email if you sign up for the show notes newsletter. We have copies of the show at our website, twit.tv/sn. There is a YouTube channel dedicated to the video.
Leo Laporte [02:29:24]:
Yes, there's audio and video. We have audio and video on our website as well. Or subscribe in your favorite podcast client. Then you don't have to think about it. From now on it's automatic and I promise no malware embedded. I don't think we could if we tried. I don't think MP3s can contain. I don't know.
Leo Laporte [02:29:42]:
I shouldn't say that. I won't make any promises I can't keep, but I think that's pretty safe. You can get it automatically in your podcast client, so you can listen the minute it's available. We stream this show when we do it live, every Tuesday right after MacBreak Weekly. That's 1:30 p.m. Pacific, 4:30 p.m. Eastern, 20:30 UTC. You can watch live in the Club TWiT Discord, but you can also watch on YouTube. Everybody's welcome to watch on YouTube, Twitch, X.com, Facebook, LinkedIn and Kick.
Leo Laporte [02:30:13]:
Again, that is 1:30 Pacific, 4:30 Eastern, 20:30 UTC. Thank you, Steve.
Steve Gibson [02:30:20]:
Have a wonderful week my friend. We will talk in April.
Leo Laporte [02:30:25]:
Thank you. Goodness. Bye-bye. Bye, Security Now.