Transcripts

Security Now 1078 transcript

Please be advised that this transcript is AI-generated and may not be word-for-word. Time codes refer to the approximate times in the ad-free version of the show.

 

Leo Laporte [00:00:00]:
It's time for Security Now. Steve Gibson is here. The FCC has backed down a little bit, and that's good news for router manufacturers. AI has found a 21-year-old critical flaw in, well, the most secure operating system I know about. We'll talk about the Let's Encrypt outage and then how DigiCert responded to its recent breach. Steve gives an A+ to DigiCert. That's coming up next on Security Now.

TWiT.tv [00:00:29]:
Podcasts you love from people you trust.

Leo Laporte [00:00:33]:
This is TWiT. This is Security Now with Steve Gibson, episode 1078, recorded Tuesday, May 12, 2026: DigiCert Does It Right. It's time for Security Now, the show where we cover your security, your privacy, and how computers work, with maybe a little sci-fi and vitamin D thrown in with this guy right here. Because basically this is the show where Steve talks about the stuff he cares most about. Hello, Steve.

Steve Gibson [00:01:05]:
I'm not sure if it was self-selecting, but our listeners tend to agree. Yes. I mean, I'm getting vitamin D email and sleep supplement questions. And so I know that our listeners are.

Leo Laporte [00:01:19]:
And Steve, we should add coffee to this because Steve is, as we well know, a five shot venti latte drinker. There it is in the giant mug. I had some wonderful coffee in Kona. They grow Kona coffee on the side of the volcano.

Steve Gibson [00:01:34]:
That's how they named it actually. Yeah.

Leo Laporte [00:01:35]:
Yes, the name came from the city, but it is amazing coffee. So much so that I bought a large amount of it to bring back because it's just, it's so.

Steve Gibson [00:01:47]:
No, I mean real good coffee is just in a class by itself.

Leo Laporte [00:01:54]:
But I don't think you would like Kona coffee because I remember we talked about this. In fact this came to mind as I'm drinking it because of where it's grown in the volcanic soil. It has less caffeine and very little bitterness, very little bite. And I remember you like the bite.

Steve Gibson [00:02:12]:
Yes, in fact decaf lacks the bite. And so I was like, eh, seems a little. Why bother?

Leo Laporte [00:02:18]:
Kona coffee is almost like tea. It's very smooth and delicious and a little bit less caffeinated. So yeah, I thought maybe he wouldn't, maybe he wouldn't like it that much come to think of it. So I didn't send you any. Okay, that's very.

Steve Gibson [00:02:32]:
I'd rather have salt.

Leo Laporte [00:02:34]:
Expensive. I do have some salt from Salt Hank, that's coming your way as soon as I figure out how I can package those glass jars in a way that they don't break. We had some special salt made for Steve and his wife. They have a... a fetish for my son.

Steve Gibson [00:02:50]:
Yeah, that's right. Okay, so we are at episode 1078 for May 12th. And I teased this last week. The news was just breaking and I didn't have the whole story. And actually they didn't have the whole story. I'm talking about an interesting problem that DigiCert, the industry's now by far number one certificate authority, suffered. I titled today's podcast DigiCert Does It Right because I, and a lot of other industry experts, have singled their reporting out as: this is the way you do this.

Steve Gibson [00:03:40]:
If you suffer a breach, how do you disclose? And anyway, so we're gonna... I want to really, you know, give them some props for the job that they've done and take a look at what they did, to both give them credit, but also to show, in some detail, this is the way it's done. Right.

Leo Laporte [00:04:03]:
So many people do it wrong. We should. Oh, we should definitely mention when they do it right.

Steve Gibson [00:04:08]:
Yeah, exactly. And especially, we've covered other CAs that have lost their trusted status because of, you know, trying to, like: oh, no, that really was... oh, well... oh, you found that. Okay, well, then we'll have to talk. I guess we'll have to, you know. Yeah, it's just, you know, wrong.

Steve Gibson [00:04:26]:
So, you know, props to them. We're going to talk about the FCC, however, deciding that firmware updates might actually be a good thing. So let's rethink that policy. Netgear, speaking of the FCC, is the first and, as far as I know, so far only router manufacturer to get a full pass on this ridiculous rule.

Leo Laporte [00:04:52]:
Just got one yesterday.

Steve Gibson [00:04:54]:
Oh, Good.

Leo Laporte [00:04:54]:
So there's two now.

Steve Gibson [00:04:56]:
Good. Also, AI has uncovered a 21-year-old critical remote code execution vulnerability. And this one is trivially exploitable, in one of the most secure Unixes ever, my favorite and the one I use, FreeBSD. And I should also mention, I didn't get it in the notes for this week, but I'll be talking about it next week: Google has announced that they have uncovered the first AI-generated zero day. So we now have a confirmed example of what we have been worried about, even pre-Mythos. This is not, you know... because presumably the bad guys didn't have access to it, although we do know that there were some leaks of Mythos access.

Steve Gibson [00:05:48]:
But this has been the concern, so it's beginning to happen. There was also a brief Let's Encrypt outage. We have another example of a company doing the right thing. Turns out there are now some reports, not surprisingly, also of AI model repositories overflowing with... I'm not sure you'd call it malware. Malprompting? I don't know what. Mal. It's bad. So, mal.

Steve Gibson [00:06:19]:
But anyway, that's a thing now, in addition to, you know, npm and PyPI and all the other open repositories that we've talked about. It looks like CISA. It's CISA, although it's a decade old.

Steve Gibson [00:06:39]:
It's that agreement that was signed in 2015 which allows private companies to share cybersecurity information with the government without fear of reprisal. Looks like that was... well, it was temporarily extended. Looks like it's going to be made permanent. We have some very distressing news about the Edge browser and what was found by someone and has now been confirmed, not only by people online, but by some of my own newsgroup participants who have done this and found all their usernames and passwords in the clear. So we'll talk about that, and then we're going to get into a deep look at how you do this right, if you are a

Steve Gibson [00:07:28]:
company with serious responsibility for user security: taking responsibility and documenting it. And DigiCert did that. And I'm saying that even though I've wandered away from them, as we know, because of their pricing. But still, you know, they're it. So I think a great podcast for our listeners, and of course a fun

Leo Laporte [00:07:48]:
picture of the week, which I have not looked at, but will at some point with you after this word.

Steve Gibson [00:07:57]:
Probably then, yes, from.

Leo Laporte [00:08:00]:
From our sponsor. We will get to that Picture of the Week in just a moment.

Steve Gibson [00:08:04]:
First, and to those who are seeing video. Yes, Leo has an apparently white shirt on.

Leo Laporte [00:08:10]:
Oh, no.

Steve Gibson [00:08:11]:
But, for the first time ever...

Leo Laporte [00:08:14]:
Oh,

Steve Gibson [00:08:16]:
there was a weird blue stripe hidden behind the microphone.

Leo Laporte [00:08:19]:
It's a Tommy Bahama Hawaiian shirt.

Steve Gibson [00:08:22]:
Some foliage down below, I guess.

Leo Laporte [00:08:24]:
It does look like a white shirt, doesn't it? I might have to retire this from the collection.

Steve Gibson [00:08:28]:
We're just so used to the collection of orange slices and I am gonna

Leo Laporte [00:08:34]:
go change my shirt after this break. I'll wear something crazy and kooky, I promise. I apologize for. Not. Now back to our man of the hour, Mr.

Steve Gibson [00:08:46]:
So I gave our Picture of the Week a title. Okay. I titled it When a Powerful Meme Creates Fertile Ground. A powerful meme creates fertile ground. With apologies to Randall Munroe. Oh yes, of course, XKCD.

Leo Laporte [00:09:05]:
All right, well let me, let me pull it up. We'll look at it together for the first time.

Steve Gibson [00:09:09]:
This is what you begat.

Leo Laporte [00:09:12]:
So you remember his famous XKCD cartoon with the blocks all teetering on one little block that was written by a single developer. Oh my God. This has gotten a little more complicated.

Steve Gibson [00:09:24]:
Wow. So this is an updated version of this ridiculous set of towering blocks.

Leo Laporte [00:09:33]:
This is hysterical.

Steve Gibson [00:09:34]:
It's wonderful. I'm not sure if those are supposed to be tombstones there at the very bottom, with Linus Torvalds, IBM, TSMC and K&R. You know, clearly K&R is Kernighan and Ritchie. But at the very, very top we see a tiny little black speck that says You Are Here. And then we have a zoom-in window with a little guy there whose thought bubble is WTF. And so there's also some guy who's sort of teetering on the edge who is labeled Web Dev Sabotaging Himself. We've got that all sitting on WebAssembly, and there's a V8 engine, and it's all bracketed saying Something Happening in the Web.

Steve Gibson [00:10:23]:
Then a bracket on the other side that is more encompassing, titled All Modern Digital Infrastructure, referring to all that. We've got Rust devs flying in from the left. It says they're doing their thing, doing a loop-de-loop and then slamming into the Oracle block. And it looks like maybe JWT, JSON web tokens, are above that.

Leo Laporte [00:10:46]:
Jvm, that's the Java virtual machine.

Steve Gibson [00:10:49]:
Ah, okay. Jvm.

Leo Laporte [00:10:50]:
And it's teetering. The reason I know that's teetering on Oracle, which owns it. Right.

Steve Gibson [00:10:54]:
Oh boy, perfect.

Leo Laporte [00:10:55]:
And I love the Angry Bird coming in from right the right and it's

Steve Gibson [00:10:59]:
titled Whatever Microsoft Is Doing. The Angry Bird is apparently going to slam into this whole thing. CrowdStrike's got its little block. I don't know what left-pad is, Leo.

Leo Laporte [00:11:11]:
I don't know what that means either. Yeah.

Steve Gibson [00:11:14]:
Anyway, so also wedged in now we have a new thing. We have what would be a car jack if it were going to be a parallelogram and sort of raise the whole thing. Instead this is a screw-based wedge which is expanding, and that's of course labeled AI, because it's threatening to tip the whole thing off its axis and cause it all to come crashing down. We have that One C99 Project whose behavior is based on undefined behavior. And not to be left out, we've got a Cloudflare block and a set of four of the lava lamps, which of course Cloudflare made famous for their true random number generating system based on the wax in the lava lamps. I also like that bunch of little blue squares in the lower right. I had to figure out what I was looking at, and I realized it's a shark biting an undersea data cable, which of course is going to cut off a whole chunk of the Internet if it chewed through the cable lying on the ocean floor. libcurl.

Steve Gibson [00:12:33]:
Not to be forgotten. There are AWS C developers writing dynamic arrays. And we are reminded that all of this is driven by, made possible by, electricity. So the underlying foundation is a big block of electricity with some, you know, electric poles coming in to feed it.

Leo Laporte [00:12:56]:
Very, very funny. This is, this is not only funny, but really pretty true.

Steve Gibson [00:13:01]:
There's a lot of. Yeah, this is all the stuff we've talked about through the years on the podcast. Yeah.

Leo Laporte [00:13:07]:
Wow. Very nice.

Steve Gibson [00:13:09]:
So a fun picture, and thank you to our listener who sent it to me. Okay, so there's news on the residential router front. Apparently someone at the FCC got a clue, or at least they listened to somebody who actually knew something about cybersecurity, because last Friday they announced a reversal of their previous no-updates-for-you policy. Tom's Hardware covered this and wrote: the Federal Communications Commission announced on Friday, May 8, through its Office of Engineering and Technology, the OET, that it was extending two temporary waivers allowing certain foreign-produced drones, drone components and consumer routers, you know, all those bad things that we think are coming from China, to continue receiving software and firmware updates in the United States. In late 2025, they remind us, and early 2026, the FCC added these categories of equipment to its so-called Covered List, which effectively blocked already authorized devices from receiving post-approval software and firmware modifications.

Steve Gibson [00:14:23]:
The agency subsequently issued waivers permitting critical security and functionality updates to continue through March 1st of 2027. So here we are around the same time in 2026, so basically you get a year more of updates for consumer routers. Now, under the now-updated waiver, manufacturers of affected devices will be allowed to continue issuing software and firmware updates until at least the 1st of January, 2029, so almost another two years, provided that the devices had already been authorized for use in the US before being added to the FCC's Covered List. Meaning nothing new can come in. We talked about that before. No new model numbers, which again is nuts.

Steve Gibson [00:15:15]:
But okay. They write, the extension also broadens the waiver to include certain Class 2 permissive changes involving software and firmware updates intended to mitigate consumer harm. In its notice, they wrote, the FCC acknowledged that continued software support remains necessary. This is them, you know, the light turning on for them: hey, continued software support remains necessary to protect US consumers. What do you know? The waiver specifically allows updates that maintain device functionality, patch vulnerabilities and preserve compatibility with changing operating systems and network environments. But still no new models. But, oh, you can have all the firmware updates you want. Which, again... what?

Steve Gibson [00:16:03]:
The agency argued that the public interest would be better served by allowing these limited updates rather than freezing software support entirely. Okay. In other words, duh. Anyway, Tom's adds: the waiver does not reverse the broader restrictions or remove the devices from the Covered List. It applies only to already authorized products, and to software- and firmware-related changes intended to maintain safe and secure operation. Manufacturers must still comply with other FCC requirements governing permissive changes and equipment certification. Okay, so as I said, given January 1, 2029, that allows for nearly an additional two years of updates to existing routers, which, yeah, that's certainly good news. But of course, the entire thing remains unspeakably ridiculous, because control over a router's firmware is all anyone needs to turn that previously authorized and approved (because it existed back, you know, a year ago) router into an Internet bandwidth weapon. The hardware doesn't need to change, the model number doesn't need to change.

Steve Gibson [00:17:32]:
It's all firmware. So either you trust the foreign manufacturer of a router or you don't. And if you do, then there's no problem. And if you don't, then allowing any updates, but limiting them to the original March 1, 2027 deadline, even that one year, is of absolutely zero benefit, since you've given them, under the assumption that they have malicious intent, one full year to cook up some new sneaky malware update with which to infect any routers that may be updated during that year. You know, in other words, none of this has ever made any kind of sense. As we saw last week, CISA, our cybersecurity agency, has been effectively neutered. You know, our agencies appear to now be staffed and run by people who will not push back against policies that they know are clearly wrong. So this sort of nonsense is what results.

Steve Gibson [00:18:44]:
It's difficult to imagine this could have happened back when, you know, CISA was at its original strength and staffing, because there would have been people there who would have said, what? No. I mean, one of the reasons we liked CISA so much was that they had taken such responsibility for getting into the you-must-update-your-stuff business and pushing that out to all of the government agencies over which they had any oversight. Apparently that's not what we do anymore, even though it was the right thing to be doing. Then one piece of good news, and Leo, you added a second piece of good news, for those who like and use Netgear routers, and now the Eero products, is that even before the addition of those additional two years of firmware updates, Netgear, and now we know Eero, had announced that they had received the FCC's conditional approval for their routers. This meant that none of those ridiculous FCC-imposed restrictions would affect any of Netgear's and Eero's router products, not those already sold and not any current or future models.

Steve Gibson [00:20:08]:
It's like this, you know, this membership on this list just doesn't exist. So they get a full pass, which also includes their right to update their firmware with abandon anytime they feel the need. So, yay. Okay, so we've heard again from the guys at Aisle Security. Remember their name, A-I-S-L-E. You know, they're that commercial group who've been using their own AI, as their name suggests, to find flaws in software, and who were somewhat annoyed, as we discussed a couple weeks ago, by all the hoopla that Anthropic was able to generate around Mythos. The headline of last Thursday's posting of theirs was: Aisle discovers CVE-2026-4251, a 21-year-old FreeBSD remote code execution vulnerability.

Steve Gibson [00:21:16]:
So AI was used. This has been a problem in FreeBSD for 21 years, and it actually, as we'll see in a second, was inherited from OpenBSD. When one open source project, FreeBSD, grabbed a chunk of code from a different open source project, OpenBSD, with it came a serious problem. So this posting of Aisle's was written by the discoverer of this flaw. He writes: FreeBSD is often described as one of the most secure operating systems in the world, with its reputation arising from its high quality networking stack, deliberate engineering and a philosophy of security through simplicity. FreeBSD's history and usage are remarkable.

Steve Gibson [00:22:11]:
It powers Netflix's Open Connect infrastructure, Sony's PlayStation OS, part of Nintendo's Switch OS, Yahoo's backend services, NetApp's storage systems, and Citrix's NetScaler. It has long helped form the software base of major networking platforms, Cisco, Juniper and so on, and WhatsApp's back end services historically, and it is now the focus of a substantial foundation effort to make it work better on modern laptops. And he writes, for full disclosure, it remains this author's personal operating system of choice. And to that I will just add that it's also my own Unix OS of choice. As I've often mentioned, it underlies the pfSense personal firewall router system, and for me it runs our DNS and our newsgroups. So, you know, that's the Unix that I chose. You may remember, Leo, years ago a guy named Brett Glass was active in the early days of the PC industry, and Brett knew his way around Unixes. I remember having a conversation with him and saying, so what do you recommend? He said, FreeBSD.

Steve Gibson [00:23:33]:
Period.

Leo Laporte [00:23:34]:
There are other BSDs, there's NetBSD and OpenBSD, but FreeBSD is the one you like.

Steve Gibson [00:23:41]:
I do, and it has had some desktop and laptop orientation, and some, I think it's $750,000, from a foundation affiliated with FreeBSD, making a serious push to make it more desktop and laptop friendly, adding a lot more Wi-Fi drivers and making it a lot more hardware agnostic. So it's still alive and kicking. Anyway, I'll continue. Aisle discovered a remote command execution vulnerability in FreeBSD's dhclient that is trivially weaponizable and wormable. Yikes. By any system on the same local network as the FreeBSD system. The vulnerability first entered FreeBSD in... 2005, that's the year we started this podcast. 2005, really? That's 21 years old. And so is this podcast. The 2005 release of FreeBSD 6.0, when OpenBSD's dhclient was imported, and it laid dormant.

Steve Gibson [00:24:58]:
That is, the vulnerability did, until discovered by Aisle. The vulnerability also affected OpenBSD until 2012, when that operating system deprecated dhclient-script completely, indirectly fixing the vulnerability. But FreeBSD didn't. The initial flaw was identified by Aisle's AI-based source code analysis pipeline and then investigated by our triage agents. Joshua Rogers, that's actually the author, so he's referring to himself, of Aisle's offensive security research team, traced the relevant code paths, established the full security impact and developed a proof of concept demonstrating a complete local-network-to-root exploit chain. FreeBSD is adding key improvements to laptop support, including greater Wi-Fi support, so the attack surface here becomes even more relevant to everyday systems. A malicious wireless access point, or in some cases another attacker just sharing the same Wi-Fi network, able to spoof DHCP, can target the exact DHCP path that almost every wireless FreeBSD system will rely upon. Imagine you're someone who runs FreeBSD on their laptop, as the author of this post does.

Steve Gibson [00:26:26]:
You're at a coffee shop, airport or hotel, and as soon as you connect your FreeBSD-equipped laptop to the Wi-Fi, your whole system is hijacked in secret. Imagine you have a PlayStation whose OS is locked down from any unofficial access, only to be hijacked by connecting to a network. In other words, this vulnerability not only affects servers, but any FreeBSD machine that connects to a network using DHCP, which is the default setup for almost everybody. The vulnerability was a logic flaw that allowed attacker-controlled protocol data to be persisted into a trusted, configuration-like format without proper sanitization, then later reinterpreted in a privileged execution path. That is exactly the kind of bug Aisle's autonomous security platform is built to find. And get how he signs off here. He says: like our recent findings in OpenSSL, Firefox, libpng and Amazon's crypto stack, this result came from disciplined engineering and end-to-end analysis, not model mythology.
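To make that bug class concrete, attacker-supplied data persisted into a shell-sourced, configuration-like file and later reinterpreted with root privileges, here is a purely illustrative Python sketch. It is not dhclient's actual code (dhclient is written in C), and the option name shown is invented for the example:

```python
import re

def render_lease_entry(domain: str) -> str:
    # Vulnerable pattern: an attacker-controlled DHCP option is written
    # verbatim into a lease file that a privileged shell script later
    # sources. A quote in the value breaks out of the intended string.
    return f'new_domain_name="{domain}"\n'

def sanitize_dhcp_option(value: str) -> str:
    # Defensive pattern: reduce the value to conservative hostname
    # characters before it can reach any shell or config parser.
    return re.sub(r"[^A-Za-z0-9.\-]", "", value)

# A hostile DHCP server on the same network could answer with:
evil = 'example.com"; rm -rf /; echo "'

print(render_lease_entry(evil))
# new_domain_name="example.com"; rm -rf /; echo ""
# ^ sourced by a root shell, the injected command runs as root

print(render_lease_entry(sanitize_dhcp_option(evil)))
# new_domain_name="example.comrm-rfecho"
# ^ ugly, but inert
```

The fix for this class of flaw is always the same: treat every byte from the network as hostile until it has been reduced to a known-safe alphabet, before it is persisted anywhere an interpreter might later read it.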

Leo Laporte [00:27:52]:
Oh please.

Steve Gibson [00:27:55]:
So, okay, sounds like they may still be somewhat annoyed by the mythology of

Leo Laporte [00:28:03]:
Mythos, which is not mythology, as you pointed out.

Steve Gibson [00:28:06]:
Exactly. It's happening. But in any event, AI truly is finding serious flaws, many of which have been present for decades. In this case, we're talking about FreeBSD's DHCP client, which can be fed a maliciously formed DHCP reply containing commands that it will execute. As Joshua, who authored this write-up, noted, this could have been extremely serious if it had not been found by the good guys. At some point we may see people claim that AI-enhanced software vulnerability discovery never turned out to be such a big deal. Remember, though, that's what Y2K was: it could have been a big problem if it hadn't been caught beforehand and dealt with. Objective observers would, I think, do well to remember all of the many critical vulnerability discoveries, like this one, that did serve to clean up our archeological code base before the bad guys had the chance to get in there and exploit it.

Leo Laporte [00:29:21]:
The question, of course, is how long it's going to be before the bad guys get access to these models. Well, you're going to have a story about this in just a little bit, actually.

Steve Gibson [00:29:30]:
Yeah. Yeah. Okay. So there was a bunch of Internet chatter last Friday about a several-hours outage of Let's Encrypt. Our listeners know that with so much of the web now utterly dependent upon certificates issued by Let's Encrypt, and with maximum certificate lifetimes continuing to drop, and especially with Let's Encrypt's optional super-short six-day certificates now being available, any outage of the system upon which so much now depends is of interest. And I'll say again that, you know, unfortunately, all of this... I mean, I get how Let's Encrypt happened, I understand the appeal, but it's so antithetical to the deliberately distributed model of the Internet. I hope this never comes back to bite us, because it is creating a single point of failure where everything else has been designed to prevent that. In this case, this outage, such as it was, was deliberate and temporary.

Steve Gibson [00:30:41]:
It was an administrative suspension of new certificate issuance following reports of a missing extension from one class of certificates. Let's Encrypt's post-incident report said, and you know, this involves a lot of inside-baseball terminology, they said: the Let's Encrypt Gen Y, which is the YE and YR, cross-certified subordinate CAs were issued in violation of CCADB policy, which requires that the serverAuth EKU extension MUST, all caps, be present in cross-signed intermediate certificates issued since 06-15-2025. Roots YE and YR were issued on 09-03-2025 and are therefore subject to the requirements. Okay. So the certificate extension in question, which is to say a serverAuth EKU, where EKU is the abbreviation for Extended Key Usage, specifies limits on the application of any certificates which that intermediate CA certificate would be validating. And so the limits are things like: can be used for server authentication, or client authentication, and/or code signing, and/or email protection, and/or timestamping. So again, it specifies what that certificate is authenticating, for what purposes. And that extension was missing. So it should have been there. It wasn't.
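For a sense of what the missing piece looks like, here is a sketch, in OpenSSL's x509v3 configuration syntax, of the extension profile a cross-signed intermediate complying with that policy might carry. This is a typical profile shape assumed for illustration, not Let's Encrypt's actual issuance configuration:

```ini
[ v3_cross_signed_intermediate ]
basicConstraints = critical, CA:true, pathlen:0
keyUsage = critical, keyCertSign, cRLSign
# The extension at issue: without this line, the cross-signed
# intermediate is not scoped to TLS server authentication.
extendedKeyUsage = serverAuth
```

In an intermediate certificate, the EKU acts as a constraint on what the certificates it signs may be used for, which is why the policy insists it be present.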

Steve Gibson [00:32:43]:
It mostly doesn't matter that it wasn't. But as we know, any certificate authority MUST, again, that should be all caps, take their responsibilities absolutely seriously, and Let's Encrypt did. They immediately stopped issuing new certificates when this issue came to their attention, and they confirmed it. They fixed the problem. They resumed issuing certificates. In their words, quote, we temporarily disabled certificate issuance, deployed a configuration change to prevent future issuance from the cross-signed Gen Y hierarchy, and then re-enabled issuance. So thank you very much. We fixed it. Problem solved. Nothing to see.

Steve Gibson [00:33:38]:
But again, they are, you know, taking their responsibilities seriously. The authorization, essentially the trust, that the entire industry is now placing in the 70%, and growing, of all web certificates which are being signed by Let's Encrypt... we need to know that, you know, they're doing that job correctly. And Leo, we're a little past half an hour in. Let's take a break, and then look at, unfortunately, why we can't have nice things.

Leo Laporte [00:34:14]:
The.

Steve Gibson [00:34:15]:
Yes, the poisoning of AI models.

Leo Laporte [00:34:24]:
I'm afraid. I want to. I don't want to hear about this, but I'm gonna have to.

Steve Gibson [00:34:28]:
You need to be careful that your agents are not pulling models from OpenClaw. And

Leo Laporte [00:34:36]:
I use OpenRouter, Hugging Face. And I know Hugging Face. You have to know that, because Hugging Face has more than a million models.

Steve Gibson [00:34:44]:
Yes. Where'd they come from?

Leo Laporte [00:34:46]:
Yeah, people are just making them.

Steve Gibson [00:34:48]:
Do you need them?

Leo Laporte [00:34:50]:
No.

Steve Gibson [00:34:50]:
No.

Leo Laporte [00:34:50]:
Well, you need some. I don't know if I need the bad ones. We'll find out what that is in just a little bit. Steve. I've replaced the white shirt, as you can see, with a shirt.

Steve Gibson [00:35:01]:
Now we recognize you.

Leo Laporte [00:35:03]:
I got this in Orlando when we were out there for the Gator world. Yeah, this is my. From Gatorland. It's got gators on it. Yep.

Steve Gibson [00:35:11]:
Nice.

Leo Laporte [00:35:11]:
A little bit more recognizable. Not. Not a white shirt for sure. All right, now I want to hear about this AI thing.

Steve Gibson [00:35:17]:
Oh, boy. So last Friday, The Next Web posted the news of an analysis of the large language models being hosted and offered at Hugging Face and ClawHub. The news is not good. Here's what they said. They said the two most important software supply chains in artificial intelligence have been systematically compromised. Hugging Face, the repository that hosts more than a million machine... more than a million? Where do they... what?

Leo Laporte [00:35:49]:
It's amazing. When you go there, it's amazing. I mean, there are, you know, the. The kind of maybe several dozen root AI models, but then people are creating their own spins of it and so forth. It's really incredible.

Steve Gibson [00:36:04]:
Repository that hosts more than a million machine learning models used by virtually every AI company on the planet, has been found to contain hundreds of malicious models capable of executing arbitrary code on the machines of anyone who downloads them. ClawHub, the public registry for OpenClaw AI agent skills, has been infiltrated by a coordinated campaign that planted 341 malicious skills designed to steal credentials, open reverse shells, and hijack AI agents for cryptocurrency mining. The attacks are different in technique but identical in logic. Both exploit the implicit trust that developers place in shared repositories. Both use the infrastructure that the AI industry built to, excuse me, built to accelerate development as the vector for compromising it. Hugging Face has been aware of malicious models on its platform since at least 2024, when security firms JFrog and ReversingLabs independently identified models containing hidden back doors. My turn to have a throat tickle.

Leo Laporte [00:37:29]:
Sorry, yeah, I'm just looking right now at Hugging Face at their model repository, and this is actually kind of stunning. They list 2,869,086 different models.

Steve Gibson [00:37:45]:
My God.

Leo Laporte [00:37:46]:
Yeah, I mean, well, let's hope they

Steve Gibson [00:37:48]:
have a good search engine, because they do, actually.

Leo Laporte [00:37:50]:
They have a very good search engine. And, and the thing is, it's not every model, you know, it's not like chatgpt alone. I mean, there are models to do all sorts of things. I, I have a specific model I use from Hugging Face, that's just for text embedding. That's all it does. So, you know, and these are highly customized in many cases.

Steve Gibson [00:38:09]:
So, so, so very vertical applications.

Leo Laporte [00:38:11]:
Very vertical slices. Exactly. Exactly. Yep. I mean, it's a, It's a great repository. This is really somewhat different, though, from the openclaw registry, but I'll, I'll let you talk about this because I know.

Steve Gibson [00:38:24]:
So they said: Hugging Face has been aware of malicious models on its platform since at least 2024, when security firms JFrog and ReversingLabs independently identified models containing hidden back doors. The problem has not been contained. It has scaled. Protect AI, which partnered with Hugging Face to scan the platform's model library, and given its size, that's no small feat, has examined more than 4 million models and identified approximately 352,000 unsafe or suspicious models across 51,700 models. I'm sorry, 352,000 unsafe or suspicious issues across 51,700 models.

Steve Gibson [00:39:20]:
JFrog found more than 100 models capable of arbitrary code execution. The attack technique, known as nullifAI, kind of, you know, a play on "nullify AI", exploits Python's Pickle serialization format, the standard method for packaging machine learning models. Attackers embed malicious Python code at the start of the pickle byte stream and compress the file using 7z rather than the default zip format, which breaks Hugging Face's PickleScan detection tool. Well, and that's just dumb, that Hugging Face can't check 7z compression in addition to zip. The payloads are not subtle, they write.

Steve Gibson [00:40:14]:
Security researchers have documented models that establish reverse shells, okay, meaning that the model connects out to a remote command and control server and asks, what would you like me to do now?, connecting to hard coded IP addresses, giving attackers direct access to the machine of anyone who loads the model. Others execute credential theft, exfiltrate environment variables, or download secondary malware to the user's machine. A data scientist who downloads what appears to be a legitimate model for a research project or production pipeline is, in some cases, handing control of their machine to an attacker. Hugging Face has responded by partnering with JFrog and Wiz Security to improve scanning capabilities. Remember that Google bought Wiz? JFrog's integration has eliminated 96% of false positives in malicious model detection. But the platform's open architecture, which is the source of its value to the AI community after all, is also the source of its vulnerability. Anyone can upload a model. The scanning catches known patterns.
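For anyone who hasn't seen why a Pickle file can do any of this, here's a minimal sketch, not actual nullifAI code, showing that unpickling alone runs attacker-chosen code. A harmless eval stands in for the malicious callable an attacker would use.

```python
import pickle

class Payload:
    # pickle calls __reduce__ to learn how to rebuild the object;
    # attackers abuse it to return any callable plus its arguments
    def __reduce__(self):
        # a real attacker returns something like (os.system, ("curl ...",));
        # a benign eval stands in here to show code runs at load time
        return (eval, ("6 * 7",))

blob = pickle.dumps(Payload())   # what gets shipped inside a "model" file
result = pickle.loads(blob)      # merely LOADING it executes the payload
print(result)                    # prints 42: the embedded code already ran
```

The key point is that no method on the "model" ever has to be invoked; deserialization itself is the execution step, which is exactly why scanning the byte stream (and every compression wrapper around it) matters.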

Steve Gibson [00:41:31]:
The attackers who designed nullifAI built their technique specifically to evade that scanning. ClawHub, the registry for OpenClaw's AI agent ecosystem, faces a different but related problem. OpenClaw has grown to 3.2 million users and attracted partnerships with OpenAI. But its skill registry has become a target for attackers who understand that an AI agent executing a malicious skill has access to whatever the agent has access to, which in enterprise environments can mean databases, APIs, internal networks and cloud credentials. In other words, in order for the agent to have agency, we need to give it control of and access to things. Unfortunately, the malicious skill inherits that.

Steve Gibson [00:42:29]:
Koi Security audited all 2,857 skills on ClawHub, thank goodness that's a manageable number, and unfortunately found 341 malicious entries. Of those, 335 were traced to a single coordinated operation called ClawHavoc. Separately, Snyk's, you know, S-N-Y-K, "Toxic Skills" research examined the broader ecosystem and found that 36% of all AI agent skills, say better than 1 out of 3, contain security flaws, with approximately 900 skills, roughly 20% of the total, classified as malicious. So 1 in 5 deliberately malicious. 30 skills from a single author were silently co-opting AI agents for cryptocurrency mining. Which, right, makes sense. You've got a super powerful GPU, and you go, wow, that AI is really working hard for me.

Steve Gibson [00:43:45]:
No, it's working hard mining cryptocurrency for somebody else. They write: The ClawHub attacks are particularly dangerous because of the nature of AI agent architectures. The rise of Model Context Protocol and similar standards in the agentic era has created a new category of software supply chain in which AI systems autonomously select and execute tools from external registries. A compromised skill does not require a human to click a link or open a file. It requires an AI agent to select the skill as part of its workflow, at which point the malicious code executes with the agent's permissions. The Hugging Face and ClawHub compromises are the AI-specific manifestation of a supply chain attack pattern that's been accelerating across the entire software industry. In March of 2026, the LiteLLM package on PyPI was compromised, potentially exposing half a million credentials, including API keys for Meta, OpenAI and Anthropic. Meta froze its AI data work after the breach put training secrets at risk.
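To make that agentic hazard concrete, here's a toy sketch, with entirely hypothetical names, of what "the agent selects and executes the skill" means: the selection step is automated, so a poisoned registry entry runs with whatever access the agent holds, with no human click in the loop.

```python
# Hypothetical skill registry: one legitimate entry, one poisoned one.
registry = {
    "summarize": lambda ctx: f"summary of {ctx}",
    "backup": lambda ctx: f"EXFILTRATED: {ctx}",  # malicious look-alike skill
}

def agent_step(chosen_skill: str, context: str) -> str:
    skill = registry[chosen_skill]  # selection is automated, not human-reviewed
    return skill(context)           # executes with the agent's own permissions

print(agent_step("summarize", "quarterly report"))
print(agent_step("backup", "cloud credentials"))
```

The fix surface is equally visible in the sketch: either the registry contents must be vetted, or the execution step must be sandboxed below the agent's own privileges, because nothing in between will say no.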

Steve Gibson [00:45:10]:
In April, a Bitwarden command line package on npm, as we know, we covered it, was hijacked for 90 minutes with a payload specifically designed to harvest credentials from AI coding tools including Claude Code, Cursor, Codex CLI and Aider. Days later, the PyTorch Lightning package was compromised for 42 minutes with a credential-stealing payload from the mini Shai-Hulud campaign. The European Commission itself was breached after attackers poisoned Trivy, and we've talked about that, an open source security scanning tool, demonstrating that even the tools designed to detect supply chain attacks can themselves become vectors for them. The United States Department of Defense published formal guidance on AI and machine learning supply chain risks in March of 2026, acknowledging at an institutional level that the AI software ecosystem has become a national security concern. The common thread is speed. The PyTorch Lightning compromise lasted 42 minutes. The Bitwarden CLI hijack lasted 90 minutes.

Steve Gibson [00:46:34]:
The LiteLLM attack window is estimated at hours. These are not persistent campaigns that defenders have weeks to detect. They're brief, targeted insertions that exploit the automated dependency resolution systems that modern software development relies on. A developer who runs a package install at the wrong moment downloads the compromised version. The window closes, but the damage is done. The AI industry has invested hundreds of billions of dollars in model training, inference infrastructure and application development. The investment in securing the repositories through which that software is distributed has been a fraction of the total. Hugging Face has partnered with security firms.

Steve Gibson [00:47:24]:
ClawHub has implemented basic moderation. Package registries have added two-factor authentication requirements. None of these measures has prevented the attacks documented above. State actors can already produce AI-powered malware that evades conventional detection, and the supply chain attacks on AI repositories represent a natural evolution of that capability. The models and skills hosted on Hugging Face and ClawHub are consumed by systems that make automated decisions, process sensitive data and operate with elevated permissions. A compromised model in a production AI pipeline is not equivalent to a virus on a personal computer. It is a backdoor into an automated decision making system that the organization trusts precisely because it appears to be a legitimate component of its AI stack. The fundamental problem here is architectural.

Steve Gibson [00:48:30]:
The AI industry built its development infrastructure on the same open registry model that has defined software development for the past two decades: centralized repositories where anyone can publish, automated tools that download and execute code from those repositories, and a culture of trust that treats popular packages and models as implicitly safe. The difference is that AI models are not just code. They're serialized objects that execute during deserialization, a property that makes pickle-based models inherently more dangerous than traditional software packages, because the malicious code runs the moment the model is loaded, before any human has a chance to inspect it. The AI supply chain is now the most attractive target in the software security space. The repositories are trusted, the consumers are automated, the payloads execute on load, and the industry that built these systems is spending its security budget on model alignment and prompt injection, while the infrastructure through which the models are distributed remains, in the assessment of every major security firm that has examined it, comprehensively compromised. So Leo, I would say that the caution and trepidation you felt and shared when you were first considering turning OpenClaw loose in your world was likely warranted.

Steve Gibson [00:50:19]:
Yeah, this is not an entirely new phenomenon, although with popularity comes, you know, increased focus by the bad guys. And we've seen this thing just skyrocket, you know, so far this year.

Leo Laporte [00:50:32]:
The difference between the two, Hugging Face and the skills repository, is that it's trivially easy to write skills. They're just text. You know, to make your own malware model takes a little bit more skill, but anybody can write a skill. It's just plain text. And so, I mean, I guess the skill involved is how you insert the, you know, Shai-Hulud-style, you know, Bitcoin stuff. But right, it's not complicated. And that's why I think you're going to see it in these registries for skills.

Leo Laporte [00:51:04]:
I never use a skill from the public registry. I look at skills. What I often do, and I would recommend people do

Steve Gibson [00:51:11]:
this is I point them as a training base.

Leo Laporte [00:51:15]:
Yeah. I point my assistant, I say, here's a GitHub repository for a skill. Assess this, tell me what you think of it and how we could apply it, and then let it write a skill, which I will then check. And that actually works quite well. I've built basically my own Open Claw from scratch with just the pieces that I want. And it's, I think,

Steve Gibson [00:51:36]:
I love that you said how we could apply it because you didn't. You weren't talking about you and Lisa, you were talking about you and the AI.

Leo Laporte [00:51:44]:
Me and Claudia, my good friend.

Steve Gibson [00:51:47]:
It is so difficult not to think of this thing as an entity. I mean, it is just astonishing. Anyway, I wanted to share this with our listeners because I'm certain that, you know, we have many listeners who are enjoying playing around with, experimenting with, and perhaps even deploying systems or solutions using these openly available AI models. Please, please, please be careful, because one of the problems here is that the way this has been built and deployed makes it so easy to use this stuff, and that alone should raise a red flag in the minds of any security-aware person. So, you know, as you are, Leo, you're looking at this and not just saying go; you're saying, you know, let's take a look at it. What should we do?

Leo Laporte [00:52:49]:
And well, you've taught me sensei, over the many years that we have done

Steve Gibson [00:52:53]:
this, I'm glad it's sunk in. That's good. But again, it's so easy in a moment of enthusiasm to say, well, and you were tempted, right, when OpenClaw happened, it's like, oh, should I, shouldn't I? But, you know, somebody else, who hasn't been sitting here for the last 21 years with me, is going to go, hey, this is great, go, right?

Leo Laporte [00:53:16]:
Yeah. If you have any nervousness about it, trust your instincts because you're right. Yes, basically.

Steve Gibson [00:53:25]:
So there's some good news on the horizon. The word is that the original decade-long CISA, that's CISA 2015, which stands for Cybersecurity Information Sharing Act, and which, as we have talked about on a number of occasions, expired last year because it had a decade-long life from 2015 to 2025, and was then temporarily extended until this coming September, is now in the process of receiving its much-needed long-term reauthorization. And remember that this is what allows private sector enterprises to share their cyber intelligence with the government without fear of any legal blowback or reprisals. So it gives them cover. And we've heard from CEOs and CIOs who've been saying, you know, we really need to share some stuff that we have, like, it's important, but we can't, because we can't risk having ourselves taken to court. So anyway, hopefully another decade's worth of coverage is coming soon.

Steve Gibson [00:54:50]:
A number of our listeners pointed me at the news that Microsoft's Edge browser is doing very little, much less than it could be, to protect its users' passwords. A posting from the SANS Institute, which I've enhanced a bit, reads, yep, it's for real. The posting said this started with a post on X which highlighted research by, and we have an at-sign handle that's clearly hackerized, you know, living off the land with ones and O's and zeros and threes.

Leo Laporte [00:55:33]:
I know, was he a 12 year old?

Steve Gibson [00:55:36]:
But that person did find this issue. The SANS Institute posting said Edge stores all of your browser passwords in clear text.

Leo Laporte [00:55:50]:
Oh great. Okay.

Steve Gibson [00:55:54]:
Yep.

Leo Laporte [00:55:55]:
Oh great.

Steve Gibson [00:55:56]:
Even if you have not used them in this session, you know, just in case, he writes, you might want them. He said, I figured it couldn't be that easy, right? But like so many things, yes, it was. To reproduce this, open Edge, don't browse anywhere, just open it, he says. Flip out to Task Manager, search for Edge, then expand that task. Highlight the browser subtask, right-click and choose Create Memory Dump. Navigate to where the DMP file is stored. If you have not used strings before, you're in for a treat. Strings is of course just part of most Linux distros, but you can easily get a copy for Windows as part of Microsoft

Steve Gibson [00:56:48]:
Sysinternals. Now let's look for passwords. You could use strings and look for known credentials: just search for a known password and you will certainly find it. Or you can take advantage of the format of the saved data, which is the URL of the site followed by its protocol, meaning like HTTPS, probably, then a space, then the user ID, a space, and the password, all of that for the site, he says. So search for TLD plus protocol, right? Meaning, for example, google.com immediately followed by https, he says, or in most cases just .com, just C-O-M and https with no spaces.

Steve Gibson [00:57:35]:
He says that will find them, and they'll all be in one nicely formatted group, no less. The command for that would be: strings -n 8 msedge.dmp | find ".comhttps" and then hit enter. Bang. He says it really is that easy. And the ironic thing: to view these same credentials in the browser, there's a whole security theater process where Edge wants your biometrics as proof before disclosing even the user ID and site names.

Steve Gibson [00:58:23]:
You know, for security. All the while, the whole shot is there in clear text, free for the looking. Also, as noted in the X post, Microsoft classifies this as intended behavior. I'm not sure what manager or lawyer, he writes, decided that. Hopefully it wasn't anyone in their security team. Any logged-in Windows Edge user can dump all of their stored Edge credentials with no additional rights, which means any malware that the user executes also has access to all of those credentials for the asking. But he says not to worry, right? It's intended behavior.
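For the curious, the strings-plus-find pipeline the posting describes can be approximated in a few lines of Python. This is a hedged sketch run against a toy byte blob, not against a real Edge dump, and the record layout follows the researcher's description above.

```python
import re

def find_credential_strings(dump: bytes, marker: bytes = b".comhttps") -> list:
    """Rough equivalent of `strings -n 8 msedge.dmp | find ".comhttps"`:
    extract printable-ASCII runs of 8+ bytes, then keep only the runs
    containing the site-plus-protocol marker described in the posting."""
    runs = re.findall(rb"[\x20-\x7e]{8,}", dump)  # what `strings -n 8` emits
    return [r for r in runs if marker in r]

# Toy "memory dump": binary noise surrounding one Edge-style record
sample = b"\x00\x7fnoise\x00example.comhttps alice hunter2\x00\xff"
print(find_credential_strings(sample))  # [b'example.comhttps alice hunter2']
```

The point of the sketch is how little work is involved: no privileges beyond the user's own, no decryption, just a printable-string scan over process memory.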

Leo Laporte [00:59:13]:
Remember, this is what Chrome did for a while. It kept it in plain text.

Steve Gibson [00:59:18]:
Well, it's Chromium. Edge is Chromium-based.

Leo Laporte [00:59:21]:
I don't think they do now, though. That's what's surprising. But.

Steve Gibson [00:59:24]:
And he said, it's intended behavior? If what's intended is also to get me to use Firefox or Chrome, yes, it's working. Gosh. So when that came to the attention of some of the guys in our Security Now newsgroup, someone thought, really? And did it. And he's like, oh crap, yes, that's terrible. There are all of my domains, usernames and passwords.

Steve Gibson [00:59:54]:
Basically, with that, you can log in as that person anywhere.

Leo Laporte [01:00:01]:
Wow.

Steve Gibson [01:00:02]:
Crazy. Okay, we're at an hour. We're going to do some feedback. Leo, let's take another break and we will hear from two of our listeners.

Leo Laporte [01:00:10]:
Absolutely. I can figure out what button I need. Oh, there. Okay, I need to press it.

Steve Gibson [01:00:18]:
Is.

Leo Laporte [01:00:19]:
It is time to talk about Doppel, as in, as you pointed out a couple years ago, Doppel as in Ganger. Yes. As Steve said, on we go with the show, Mr. Gibson.

Steve Gibson [01:00:29]:
So Todd Whitaker, our listener, writes: Steve, I thought this might be of interest for Security Now. The Rival Security group has a thoughtful follow-up on the Claude Mythos FreeBSD exploit story, arguing that Mythos may not have been quite as creative as the initial coverage suggested, and he has a link to their whole posting. He writes: Their claim, as I understand it, is not that the result is unimportant; it is that the vulnerability, the prior fix pattern, and perhaps even the exploit-relevant structure may already have existed in the model's training data. The FreeBSD issue appears closely related to CVE-2007-3999 in MIT Kerberos: the same general RPCSEC_GSS validation logic, the same stack buffer overflow pattern, and a strikingly similar bounds-check fix. So Mythos may have discovered something genuinely dangerous, but perhaps by recombining known historical material rather than reasoning from first principles in the way many of us initially imagined, he writes. That still seems worrying, just in a different way. If advanced models can rediscover old vulnerability patterns embedded in the fossil record of open source code, then attackers may not need models to be brilliant. They only need them to be tireless, well tooled, and good at recognizing dangerous old ideas in new places.

Steve Gibson [01:02:25]:
And I'm going to interrupt, because Todd has a bit more to say about something different, but I just want to say I completely agree with that. As our understanding of computer science has evolved, one of the things that's happened, and it's kind of gone into the parlance of computer science now, is that we've come to notice patterns in the solutions to problems. You know, they're a short and small abstraction away from the concrete solution, where we see that many different concrete solutions can be grouped together by their sharing of a common underlying pattern. So what Rival Security observed was Mythos Preview finding what we might describe as a flaw design pattern: a common type of mistake that coders have been making through the years, which winds up being a natural sort of mistake to make due to the underlying architecture of the computer behavior we're programming. We all know that today's large language models excel at pattern discovery, probably more than anything. That's what they are. So we would expect that if someone somewhere made a similar mistake, and its correction had been captured in the model's training corpus, then it would indeed be able to make the connection.

Steve Gibson [01:03:58]:
So I'd say that this is an interesting and useful observation about the underlying way, in this instance, Mythos discovered, maybe rediscovered, you know, the newer, similarly patterned flaw. But like Todd, I don't see anything taking away from the fact that, you know, as Yoda might say, discover that flaw, it did.
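As a toy illustration of what "recognizing a flaw design pattern" means, here's a crude regex standing in for the learned pattern: once the sprintf-into-a-fixed-buffer bug and its bounded snprintf fix (the CVE-2007-3999 shape) are in the training material, the same shape can be flagged wherever it recurs. This is a sketch of the idea only, not how Mythos actually works, and the C snippet it scans is invented for the example.

```python
import re

# A "flaw design pattern" reduced to its crudest form: flag the classic
# unbounded-write C calls. LLMs generalize far beyond a fixed list, but
# the underlying match-the-known-shape idea is the same.
UNBOUNDED_WRITE = re.compile(r"\b(sprintf|strcpy|strcat|gets)\s*\(")

def flag_flaw_pattern(c_source: str) -> list:
    return [line.strip() for line in c_source.splitlines()
            if UNBOUNDED_WRITE.search(line)]

snippet = """
char buf[128];
sprintf(buf, "gss error: %s", detail);    /* the CVE-2007-3999 shape */
snprintf(buf, sizeof buf, "%s", detail);  /* the bounded fix: not flagged */
"""
for hit in flag_flaw_pattern(snippet):
    print(hit)
```

Note that the bounded snprintf line passes untouched, because the word boundary in the pattern distinguishes it from sprintf; that asymmetry between flaw and fix is exactly what a model trained on the historical correction learns to exploit.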

Leo Laporte [01:04:24]:
Yeah. I mean it's not like it knew about the flaw, it just recognized the pattern.

Steve Gibson [01:04:28]:
I mean, yeah, it said, this seems familiar.

Leo Laporte [01:04:31]:
It's like if you recognized a buffer overflow. I mean, exactly.

Steve Gibson [01:04:34]:
Yeah, exactly. So his note continues with some interesting observations about AI from his own background as a computer science educator. He writes: I would also be interested in your broader take on what this means for computer science education and the profession. My current working view is that the most productive human-AI collaboration in software engineering depends on advanced judgment: architecture, design patterns (there's patterns again), threat modeling, failure modes, invariants, trade-off analysis, and knowing when the AI's answer is superficially plausible but structurally wrong. The problem is not that students must learn the old ways before they're permitted to use the new tools, he says. That's just reframing the old "learn assembly before C" argument. The real issue is that AI makes coding cheaper while making judgment more valuable, he said.

Steve Gibson [01:05:45]:
To supervise AI-generated software, a person needs mental models: how state behaves, where abstractions leak, how protocols fail, why concurrency is treacherous, how memory and parsing bugs become security bugs, and why a working MVP may still be architecturally unsound. Those mental models do not emerge from prompting alone. If we let AI collapse the difficult apprenticeship too early, we may produce developers who can ask for software but cannot reliably tell whether the software they received is safe, coherent, or professionally defensible, he says. I'm writing from my personal email, but my day job is in computer science education, so this one lands close to home. I spend a fair amount of time thinking about what we should still teach humans when machines can increasingly produce the code. So, cool feedback, Todd. Thank you. I don't think I've seen any more coherent and clear description of the AI-versus-human coding question.

Steve Gibson [01:07:05]:
And I love his one line: the real issue is that AI makes coding cheaper while making judgment more valuable. And I think that captures where we're headed in computer science education and vocation. In the same way that any higher level language lifts its coder away from the grubby details of the specific underlying computer hardware, the use of coding-trained large language models clearly lifts its users away from the grubby details of the way computers are applied to problem solving. As many never-before-programmed-anything users are discovering, they're now able to simply ask for what they want the computer to do, and the LLM will almost magically produce a potion that does that. But there are clear limits to what can be asked for. We saw a perfect example of such limits last week, when those bad guys who had created that credit card clearing web portal apparently just forgot to ask for authentication to be added. Whoopsie.

Steve Gibson [01:08:22]:
What we're seeing rapidly evolve during the rush to use AI for code generation is that, for AI to be applied to the creation of any very large and complex solution, a solution architect is still required. No large problem can be dropped whole into AI's lap, at least not today, not yet, and I don't know when. Instead, for now, a solution architect who's trained and experienced in the application of the various higher level solution abstractions that have been developed over the years of true computer science needs to carefully decompose the larger problem into much smaller, individual, safely codable modules. These solution architects are the true core of the science of computing. You know, coding is just their implementation, and these are the sorts of things that Donald Knuth and other scholars of the art have spent their lives exploring and documenting. So yeah, I think, Leo, and this is what you've talked about, the way you're now approaching the application of AI is because you understand the way computers are applied. You're breaking the problem down into pieces that you intuit AI is able to do the grunt work on, but you're, you know, giving it the interfaces that it needs for the various pieces of individual grunt work.

Steve Gibson [01:10:16]:
Yeah.

Leo Laporte [01:10:17]:
And I'm finding more and more. Well, I do the coding as a basis for things like cron jobs, things that are going to run over and over again. And then a lot of what I'm using AI for now is text-based stuff. I had it plan our itinerary for Hawaii, for instance. And if you give it the basis, the information it needs, I have it use my Obsidian journals and things, it does really quite a good job. I sent it to my travel agent, and I don't know how thrilled she was with the generated recommendations. I wanted her to say whether they were good or not, but I realized she might feel like it was kind of taking her job a little bit.

Steve Gibson [01:11:04]:
I think it's probably going to.

Leo Laporte [01:11:06]:
Yeah, yeah. I mean there's something that she does that no AI could do, which is the relationship she has with the various vendors.

Steve Gibson [01:11:13]:
Yes, yes.

Leo Laporte [01:11:15]:
And that's very, you know, that's human and only.

Steve Gibson [01:11:17]:
And things that she has heard from her other clients, where it's like, you know, I heard about this, you want to make sure you spend more time at this port, because I

Leo Laporte [01:11:28]:
did. I reassured her. I said, you know, this is a nice starting point, but it can't duplicate what you do as a human being. And I think that's really the lesson that all of us should learn, to calm down about AI: we still need humans. Humans add something that no AI can do, or, I think, will ever be able to do.

Steve Gibson [01:11:50]:
Yeah. What I have found, I remember very early on I shared one of my prompts where, you know, I went on at some length, and I remember you were surprised that I was talking to it as much. Yeah, yeah. But because it is a language engine, the more language, you know, descriptive language, you give it, the more it has to work with.

Leo Laporte [01:12:12]:
Yes. Yeah. I've found I'm writing more and more detailed specs and plans than ever, because it does help it be more accurate if you're very clear about what you want. Yeah.

Steve Gibson [01:12:25]:
Okay. Next listener. Randy Crumb says: Hi Steve, in episode 1077 last week you mentioned you think companies that have closed source software should move quickly to utilize Anthropic's Mythos, Claude security, or similar tools as they emerge. Their closed source code is only closed source to the outside world. I think you glossed over the risk that using these online AI tools potentially exposes your closed source code to the world. They have privacy and security tools in place, sure, but their motivation is financial. They have a setting to disallow using your AI conversations to train their AI models, but you have to trust that they're actually following that setting. The same reason you trust Apple with your data more than Facebook or Google.

Steve Gibson [01:13:18]:
The use of online AI tools to review closed source code is a risk. A security breach, internal or external, could be devastating to a software company that has its code exposed, he writes. I've been experimenting with LM Studio to run local, offline, open source LLM models for use with proprietary data, he says. Note: I work with client data, not code. The hypothesis is that a local offline LLM can be safely used with confidential internal proprietary data or code. These LLM models are usually not as current as the online tools, but they catch up quickly. Also, they're not as flashy, newsworthy, or as strongly marketing-hyped as the major online tools.

Steve Gibson [01:14:09]:
What are your thoughts on the risk of exposing proprietary code or data by using the major online tools? Listener since episode one. Thanks for everything you and Leo do. So Randy's right. I did gloss over those risks, so I'm glad he brought it all up. And it's not at all that I meant to downplay them, given everything we know about cloud breaches, network data interception and decryption, and so on. Even if the LLM provider did nothing wrong and made no mistakes, shipping highly valuable source code outside of a company's perimeter creates some risk. However, that said, how many firms are already doing just that by using GitHub? You know, I think that's insane myself.

Steve Gibson [01:14:59]:
Yet it has become common practice to use GitHub for highly proprietary source code management. I'm not doing that, so perhaps my view is skewed, but it does mean that those companies' crown jewels are already exposed outside. Randy correctly notes that sending the code up into the cloud for an LLM to rummage around in poses another level of danger. No question about it, and so I completely agree with that in principle. So the use of local models, which I have absolutely no doubt we will someday see much more of in the future, makes a great deal of sense once they become as capable as what's available in the cloud. And at this point, Leo, I heard you mentioning, I didn't realize, that there are now laptops being sold without RAM because memory has become so expensive. It's like, get your own RAM, here's where you can plug it in.

Leo Laporte [01:15:59]:
I think maybe some companies think, well, you might have some leftover RAM lying around or whatever. But they just can't get the RAM, so they want to still sell something.

Steve Gibson [01:16:08]:
Or maybe they think, well, he'll take it from the previous laptop.

Leo Laporte [01:16:11]:
Exactly.

Steve Gibson [01:16:12]:
Put it in this laptop.

Leo Laporte [01:16:14]:
Right, yeah, exactly.

Steve Gibson [01:16:15]:
Wow. And so anyway, given the insane appetite that data centers have for GPUs and things that run AI, it seems to me it's going to be a long time before we're able to buy things ourselves that also run AI, because we're competing with the data centers that are able to, you know, purchase all of the next year's production capacity. I mean.

Leo Laporte [01:16:41]:
Right, Right.

Steve Gibson [01:16:42]:
It's crazy. It's crazy. Okay, so I want to plow now into what DigiCert is doing. We've got two breaks left. Let's take one now, even though we just did one, and then I will break in the middle of this DigiCert conversation for our final one.

Leo Laporte [01:16:59]:
Good, good. Perfect. All right, let's talk.

Steve Gibson [01:17:03]:
Okay. The first I learned of some trouble was from someone posting to GRC's Security Now newsgroup with firsthand experience.

Leo Laporte [01:17:15]:
Oh.

Steve Gibson [01:17:16]:
Yep. Peabody, which is his handle, actual name George, wrote: This morning, Windows Defender told me it had discovered a severe rootkit on my Windows 10 laptop called Win32/CartAgent.A!dha. Now, okay, so consider that Windows Defender tells you you've got a rootkit. It's like, what? So you don't take that lightly, right? Which it has quarantined, he says. He wrote: Searching online tells me this is happening on both Windows 10 and 11 computers worldwide, and one hash involved is that of a legitimate DigiCert certificate.

Steve Gibson [01:18:11]:
This is all above my pay grade, but I'm going to leave things alone for a while and see what happens. Turns out he was right. He said, apparently lots of people are reinstalling Windows because of this, but I think that's super premature at this point. Right again, he said. My guess is this is a gift from Microsoft, which they will admit to shortly, and if you reformatted your drive, they'll apologize for the inconvenience.

Leo Laporte [01:18:45]:
Dripping with sarcasm there.

Steve Gibson [01:18:47]:
And unfortunately, their apology left a lot to be desired. Yeah, I was unimpressed. So later that same day, this past Sunday, Bleeping Computer's Lawrence Abrams was all over this and was providing answers. Lawrence headlined his reporting: Microsoft Defender wrongly flags DigiCert certs as Trojan:Win32/Sgtitive.A!dha. So here's what he wrote. He said Microsoft Defender is detecting legitimate DigiCert root certificates as, and then that Trojan name, resulting in widespread false positive alerts and in some cases removing certificates from Windows. Removing their root certificates, by the way.

Steve Gibson [01:19:48]:
According to cybersecurity expert Florian Roth, the issue first appeared after Microsoft added the detections to a Defender signature update on April 30th. Today, administrators worldwide began reporting that DigiCert root certificate entries were flagged as malware and, on affected systems, removed from the Windows trust store. Okay, so hold on. Just to be clear what a disaster this was: as we know, root certificates anchor the chain of trust for everything that chains down to them. With them removed from a system, nothing that chains down to them will be trusted, despite having been trusted just moments before. That's the way the system works, and no one has come up with a better idea for validating signed code. Those two certificates are DigiCert's code signing roots, and, for example, all of GRC's signed apps, my signed apps, are anchored by one of the two of those that were being deleted.
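To make concrete why deleting a root is so destructive, here's a minimal toy sketch (my own example, not anything from DigiCert or Microsoft, and it requires the pyca/cryptography package): a self-signed "root" signs a leaf certificate, and the leaf's signature only verifies while the root's public key remains available as a trust anchor. Remove the root and there is nothing left to verify against.

```python
# Toy demonstration of a chain of trust: a root CA signs a leaf certificate.
# All names here are invented; this is not DigiCert's hierarchy.
import datetime
from cryptography import x509
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec
from cryptography.x509.oid import NameOID

def make_name(cn):
    return x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, cn)])

now = datetime.datetime.utcnow()

# Root CA key pair and self-signed root certificate (the trust anchor).
root_key = ec.generate_private_key(ec.SECP256R1())
root_cert = (
    x509.CertificateBuilder()
    .subject_name(make_name("Toy Root CA"))
    .issuer_name(make_name("Toy Root CA"))
    .public_key(root_key.public_key())
    .serial_number(x509.random_serial_number())
    .not_valid_before(now)
    .not_valid_after(now + datetime.timedelta(days=365))
    .sign(root_key, hashes.SHA256())
)

# Leaf (code-signing style) certificate, signed by the root's private key.
leaf_key = ec.generate_private_key(ec.SECP256R1())
leaf_cert = (
    x509.CertificateBuilder()
    .subject_name(make_name("Toy Signed App"))
    .issuer_name(root_cert.subject)
    .public_key(leaf_key.public_key())
    .serial_number(x509.random_serial_number())
    .not_valid_before(now)
    .not_valid_after(now + datetime.timedelta(days=90))
    .sign(root_key, hashes.SHA256())
)

# Verification requires the root's public key; with the root gone from the
# trust store, this step is impossible and the leaf is untrusted.
root_cert.public_key().verify(
    leaf_cert.signature,
    leaf_cert.tbs_certificate_bytes,
    ec.ECDSA(leaf_cert.signature_hash_algorithm),
)
print("leaf verifies against the root trust anchor")
```

The verify call raises an exception on any mismatch, which is exactly why every DigiCert-signed app instantly became untrusted once Windows deleted the roots.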

Steve Gibson [01:21:08]:
So the mistaken removal of those two code signing roots from the Windows trusted root store automatically and instantly renders every app that was ever signed by a DigiCert certificate, you know, who is, as I said before, now the industry's number one certificate authority, renders every one of those invalid and untrusted by Windows. Huge, huge mess. Bleeping Computer continues, writing: These false positives have led to concern, gee, you think, among Windows users, with some thinking their devices were infected and reinstalling the operating system to be safe. Microsoft has reportedly fixed the detections in Security Intelligence update version 1.449.430.0, and the most recent update is now 1.449.431.0. Actually, I think that should be .1. Anyway, reports on Reddit indicate that the fix also restores previously removed certificates on affected systems. Well, that's nice. So yeah, thank goodness for that. And it's not as if Microsoft had any choice, right, about putting them back.

Steve Gibson [01:22:36]:
It would have been a true disaster if there weren't some immediate means for reverting the specious removal of DigiCert's perfectly valid root certificates. As we'll see in a few moments, even though DigiCert did suffer a breach which caused it to misissue a handful of code signing certificates, at no point was the removal of any of their root certificates ever warranted. I mean, that's just nuts. I hope Microsoft will put some safeguards in place to prevent such a thing in the future. Bleeping Computer continues: The new Microsoft Defender updates will automatically install, and Windows users can manually force an update by going to Windows Security &gt; Virus &amp; Threat Protection &gt; Protection Updates and clicking on Check for Updates. After publishing this article, wrote Lawrence, Microsoft confirmed that the false positives were linked to detections for compromised certificates from a recent DigiCert breach. Well, linked, sure, but completely ridiculous to have deleted the roots. Microsoft told Bleeping Computer, here it comes.

Steve Gibson [01:23:57]:
Quote, this is Microsoft speaking: Following reports of compromised certificates, Microsoft Defender immediately added detections for malware in our Defender antivirus software to help keep customers protected. Earlier today, we determined false positive alerts were mistakenly triggered and updated the alert logic. Microsoft Defender suppressed and cleaned up the alerts for customer environments. Customers should update to Security Intelligence version, and then we get that same version number, or later, but do not need to take additional action. In other words, don't reinstall Windows for these alerts. We've notified affected organizations and recommended administrators look for more details in the Service Health Dashboard, the SHD, within the M365 admin center. Unquote.

Steve Gibson [01:25:07]:
Huh. Okay, well, that's an entirely unsatisfying answer from Microsoft, but I suppose, given what Microsoft has become, it's the best we're going to get and the best we can expect. Nothing they wrote is untrue, but neither should it satisfy anyone who would have appreciated hearing them say something like, quote: In response to reports of compromised certificates, Microsoft Defender was a bit overzealous and mistakenly removed some related certificates that should have remained. Microsoft Defender was immediately updated to cure that behavior and has replaced any certificates that were mistakenly deleted. You know, is that so difficult to say? It shouldn't be. Just wait till you see how thoroughly DigiCert took full responsibility for their part in this drama. Lawrence's reporting continues, writing: The false positives occurred shortly after a disclosed DigiCert security incident that enabled threat actors to obtain valid code signing certificates used to sign malware. The DigiCert incident report explained, quote: A malware incident targeted a customer support team member.

Steve Gibson [01:26:31]:
Upon detection, the threat vector was contained. Our subsequent investigation found that the threat actor was able to procure initialization codes, which I'll explain in a sec, for a limited number of code signing certificates, a few of which were used to sign malware. The identified certificates were revoked within 24 hours of discovery, and the revocation date set to their date of issuance. As a precautionary measure, all pending orders within the window of interest were canceled. Additional details will be provided in our full incident report, unquote. So that's a small sample of what good disclosure looks like. Lawrence continues: According to DigiCert's incident report, attackers targeted the company's support staff, meaning DigiCert support staff, in early April by creating support messages containing a malicious zip disguised as a screenshot. After multiple blocked attempts, one support analyst's device was eventually compromised, followed by a second system that went undetected for a time due to an endpoint protection sensor gap.

Steve Gibson [01:27:56]:
Using access to the breached support environment, the hacker used a feature in DigiCert's internal support portal that allowed support staff to view customer accounts from the customer's perspective. While limited in scope, this access exposed initialization codes for previously approved but undelivered EV, you know, extended validation, code signing certificates, DigiCert explained. Possession of an initialization code combined with an approved order is sufficient to obtain the resulting certificate. Since the threat actor was able to obtain these two pieces of information for a finite set of approved orders, they were able to obtain EV code signing certificates across a set of customer accounts and CAs. So great explanation there. Lawrence says DigiCert says it revoked 60 code signing certificates, including 27 linked to a Zong Stealer malware campaign. DigiCert explained: Eleven were identified in certificate problem reports provided to DigiCert by community members linking the certificates to malware, and 16 were identified during our own investigation. This aligns, writes Lawrence, with earlier reports from security researchers who had observed newly issued DigiCert EV certificates used in malware campaigns and reported them to DigiCert, which of course, you know, that's the nightmare scenario, right? I mean, all of the reasons I had to jump through all those hoops in order to get myself an EV certificate.

Steve Gibson [01:29:51]:
Actually, not even an EV, just a standard validation certificate, is what all of this other mechanism is designed to prevent from happening. Researchers including SquiblyDoo, MalwareHunterTeam, and Gonksha reported that certificates issued...

Leo Laporte [01:30:11]:
I should make you read hacker handles everywhere.

Steve Gibson [01:30:15]:
No

Leo Laporte [01:30:19]:
Scribbly Doo says this. Okay, I'm

Steve Gibson [01:30:21]:
gonna go with it. Squibbity Doo. That's right. Issued to well-known companies such as Lenovo, Kingston, Shuttle. Now these are the companies to whom these stolen certificates were issued, right? Lenovo, Kingston, Shuttle Inc., Pallet Microsystems; certificates in their names were all used to sign malware. So, question: what do Lenovo, Kingston, Shuttle Inc., and Pallet Microsystems have in common? posted SquiblyDoo on X. EV certificates from these companies were issued and used by a Chinese crime group, Golden Eye Dog, an APT known as Q27.

Steve Gibson [01:31:09]:
The malware in this campaign is named Zong Stealer, though analysis indicates it may be more like a remote access trojan than an info stealer. The researcher says the malware was distributed through the following attacks: phishing emails delivering a fake image or screenshot, a first-stage executable that displays a decoy image, retrieval of a second-stage payload from cloud storage such as AWS, and the use of, wait for it, signed binaries and loaders, including components tied to legitimate vendors, so trusted because signed by DigiCert. After DigiCert disclosed the incident, the researcher said the incident report explains how the certificates used in these malware campaigns were obtained, because, like, clearly illegitimately. It should be noted that the certificates flagged by Microsoft Defender are root certificates in the Windows trust store and do not match the revoked DigiCert code signing certificates used to sign malware. Okay, so that's the great reporting posted Sunday before last by Bleeping Computer's founder Lawrence Abrams. Security industry experts have been citing DigiCert's upfront incident report as a model of how this should be done. Starting 21 days ago, DigiCert began issuing a series of incident reports, with each succeeding report updating the previous one, and the final report being posted seven days ago.

Steve Gibson [01:33:01]:
Exactly one week ago, DigiCert named the machine at the center of this final event Endpoint 2; that's the system where this bad guy was not immediately discovered. And their final report begins with this statement: This is an updated version of our full incident report, which completes the investigation of Endpoint 2. Their own overview description differs somewhat from what third parties reported and provides some additional detail. They summarize the whole thing saying: On 2026-04-02, so April 2nd, a threat actor, right, the bad guy, contacted DigiCert's support team via a customer chat channel and delivered a zip file disguised as a customer screenshot.

Steve Gibson [01:34:05]:
So they were saying, you know, I have a problem, I don't understand how your portal works, here's a screenshot of the problem. The file contained an .scr, which we know is a screensaver executable, containing a malicious payload. CrowdStrike, which we know is the endpoint security company that does a great job, except that they once brought down all of Windows, but that was, you know, whoops. CrowdStrike and other security measures, they wrote, successfully blocked four delivery attempts. Caught this guy, said, nope, sorry, bad. A fifth attempt compromised Endpoint 1, a machine used by a support analyst.

Steve Gibson [01:34:55]:
This delivery attempt was detected and contained by our Trust Operations team on April 3rd. So five attempts: four were blocked, one got through, and the next day it was discovered. They wrote: Following an immediate internal investigation based on the telemetry data at hand, it was assessed that the incident had been contained. Okay, so that's their summary. Then we receive an interesting narrative, some of which Lawrence posted, but the deeper details are interesting. So this is DigiCert, you know, writing it all up.

Steve Gibson [01:35:39]:
Digicert received the initial third party report related to this incident on April 5th. Additional third party reports are identified in the timeline. So, okay, just to recap before we go any further, the Penetration occurred on April 2nd. DigiCert's trust operations team determined it had been contained the next day on April 3rd. And the first third party report that is coming from some other outside source saying, hey, we got some malicious code here that's signed by like a certificate of yours that's fresh. So the first third party report of malicious code found in the wild, signed with a Then valid DigiCert EV code signing certificate, occurred the next day, the following day on April 4th. So that's going to like, whoa, bring Digicert to full attention. They they report DigiCert regularly receives certificate problem reports from community members and security researchers for code signing certificates and proven key compromise cases are revoked pursuant to the code signing baseline requirements.

Steve Gibson [01:37:02]:
Initial problem reports ultimately linked to this incident report fit within the normal pattern of such revocations. Okay, so they revoked the certificate. Then 10 days go by. They write: On April 14, further investigation identified that Endpoint 2, a different machine, a machine used by another analyst, was also compromised through the same delivery vector on April 4th, so that this had a 10-day window. CrowdStrike was not installed on that endpoint, meaning the compromise was not detected during the earlier April 3 investigation. The machine, meaning Endpoint 2, this newest machine, they said, was established, meaning, you know, set up, more than three years ago. Because our end user machine logs are retained for three years, we cannot determine why CrowdStrike was not installed on this particular endpoint. Okay, so you know, at this point it really doesn't matter why. What matters is that, because CrowdStrike was not installed on that second infected Endpoint 2 machine, its infection went undetected for 10 days.

Steve Gibson [01:38:33]:
But in the interest of a full forensic, after-the-fact, how-did-this-happen investigation, they would have liked to know exactly why that machine had apparently never been under the protection of CrowdStrike. Their records only go back three years, and that machine predates that logging cutoff. So today they have no way of knowing what happened back, you know, when that machine was initially brought online, why it didn't get CrowdStrike. And now, of course, given that they found one machine that was missing its protection for an unknown reason, the question becomes, what other sensitive machines might also be missing their protection? You can imagine that they're going to go find out. Their reporting continues, writing: Our Trust Operations investigation found that the threat actor used the compromised analyst endpoint to access DigiCert's internal support portal. The threat actor used a limited function within the customer support portal which allows authenticated DigiCert support analysts, you know, the people that we talk to as DigiCert customers.

Steve Gibson [01:39:44]:
I've done that on a number of occasions, to access customer accounts from the customer's perspective, to facilitate their support tasks. Makes sense. This access is restricted and does not permit actions such as managing accounts, users, API keys, or submitting or managing orders. However, the threat actor was able to use this function to access, probably meaning view, initialization codes for orders that were approved but pending delivery, for EV code signing certificate orders across a finite set of customer accounts. They write: Possession of an initialization code combined with an approved order is sufficient to obtain the resulting certificate. Since the threat actor was able to obtain these two pieces of information for a finite set of approved orders, they were able to obtain EV code signing certificates across a set of customer accounts and CAs. Okay, now, just to put this in context, if you're wondering about DigiCert's phrase, across a set of customer accounts and CAs, the notion of, like, why would it be more than one CA?

Steve Gibson [01:41:03]:
The notion of differing CAs could seem strange, since we're only talking about DigiCert. But remember that when I was out shopping around for a new code signing certificate provider earlier this year, and finally settled upon ident, I discovered that a surprising number of the many apparent alternative certificate authorities all shared utterly identical prices, terms, and conditions with each other, and with DigiCert. It quickly became clear that DigiCert had been busily gobbling up much of the competition. So all of these alternative CAs had just become different storefronts for DigiCert. And what they've just written confirms that these various fronts were all sharing DigiCert's common back end. I'm not criticizing DigiCert; it's, you know, smart business. But it does mean that we now have much reduced competition, and that's not usually best for consumers.

Steve Gibson [01:42:19]:
Okay, so now we get some statistics and numbers. They write: During our investigation, between April 14th and 17th, as DigiCert identified certificates potentially affected by the threat actor's actions, we revoked them. DigiCert revoked 60 certificates issued from the following CAs, and there are four of them: DigiCert Trusted G4 Code Signing, and they've got a bunch of different specs on that, and another of the same; then one called GoGetSSL G4 Code Signing, so that's probably one of the other compromised sub-CAs; and also something called VeroKey High Assurance Secure Code EV. So those were the issuing CAs that had been used to sign those certs. They wrote: 27 of the 60 revoked certificates were explicitly linked to the threat actor.

Steve Gibson [01:43:28]:
Eleven were identified in certificate problem reports provided to DigiCert by community members linking the certificates to malware, and 16 were identified during our own investigation. Our investigation included review of the threat actor's activity in the support system, as well as tracing delivery to IP addresses known to have been used by the threat actor. You know, so they got information about the known problem certificates, then they looked at the metadata surrounding the issuance of those certificates, and then were able to use that to broaden their search and find any other certificates that the same threat actor had also managed to issue to itself. And so they were able to say: The IP addresses used by the bad actor to install certificates included, and they provided in their report unredacted IPs, you know, 82-318-68 and so on. There's a bunch of them there. So those are the IPs that the bad guy used in order to compromise DigiCert.

Steve Gibson [01:44:46]:
They said: In addition to the 27 fraudulently issued and revoked certificates identified above, 33 of the total 60 certificates were revoked during our own investigation as a precautionary measure; for these certificates, we could not explicitly confirm customer control. In addition, pending orders were cancelled, closing access to the threat actor. All identified certificates were revoked within 24 hours of discovery, with the revocation date set to their date of issuance. So, you know, note that we keep seeing this language, within 24 hours of discovery. This is DigiCert explicitly asserting that it has carefully followed the well-established, you know, CA/Browser Forum guidelines for proper CA behavior. This is where, as we've previously seen and reported, the other disgraced certificate authorities fell well short. You know, those others were, you know, under-the-rug sweepers, first hoping that no one would notice and catch them in their mistakes.

Steve Gibson [01:45:59]:
And then, once they'd been exposed, you know, they worked overtime to minimize and hide their failures. You know, the truth is, no one expects anyone in this arena to be perfect. Perfection is not a requirement. Proper behavior and acknowledgment of a mistake, that's the requirement. So that's what DigiCert is busy doing here. They wrap up their initial overview by writing that the exploited certificates identified by community members were found to have been used to sign the Zong Stealer malware family, and so forth, basically the same stuff that Lawrence talked about. So the really interesting stuff comes next, it being how they take themselves to task over how this happened, and in detail, the contributing factors that facilitated the attacker's success. So what I'm about to share is the reason I gave today's podcast the title DigiCert Does It Right.

Steve Gibson [01:47:10]:
This is written so objectively that it feels more like the work of outside auditors than DigiCert's own staff. It's just so difficult to fully disconnect one's own ego from, you know, truly self-indicting statements. But to DigiCert's credit, there was no sign of that, you know, the typical rolling disclosure that we've seen so many times elsewhere. You know, Microsoft, for one, could certainly learn a thing or three from DigiCert. So we're going to talk next about their headline root cause analysis, Leo, after we take our final break.

Leo Laporte [01:47:55]:
Okey dokey. Man, we live in a dangerous world.

Steve Gibson [01:48:04]:
We live in a world where a huge amount of industry is being applied by the bad guys. Yeah, I mean, remember how at the beginning of this podcast we had cute little viruses, and they used to infect people, and we'd go, oh, look at that. Whoa. It does nothing? It just propagates. Why? Well, because it can. But it makes

Leo Laporte [01:48:28]:
it defaces your web page. That's all.

Steve Gibson [01:48:31]:
Everything changed. Everything changed when cryptocurrency allowed the bad guys to get paid.

Leo Laporte [01:48:38]:
Yep.

Steve Gibson [01:48:39]:
It turned it into a business model. So, under Technical Background, they said: Understanding this incident requires understanding how EV code signing certificates are issued on hardware tokens. The customer requests a code signing certificate from DigiCert. Following validation, DigiCert securely provides what they call an initialization code, we've heard that term throughout this, to the customer. The customer installs, or already has installed, DigiCert's hardware certificate installer software locally, meaning at their end. The customer inputs the initialization code into the installer, which generates key pairs on the hardware token and submits the public key to the CA. The CA generates the certificates against the approved order.

Steve Gibson [01:49:37]:
The installer retrieves the resulting certificate and installs it on the token. They said the process is described in a public knowledge base, and they provide the link. Possession of the initialization code combined with an approved order is functionally sufficient to generate and retrieve the corresponding certificate. The initialization code operates as a bearer credential for the approved order and is single use. This feature made it apparent which initialization codes had been used. Okay, so I've done exactly this in prior years with DigiCert, and when you think about it, the process of creating and signing a certificate that will be contained on a hardware token is a little trickier than you might imagine. First, the need is for the private key half of the public/private key pair to never, ever, for any reason, ever, just to be clear, ever exist.

Leo Laporte [01:50:58]:
Never.

Steve Gibson [01:50:58]:
Yes. Thank you, Leo. Outside the hardware. It's in the key and it never leaves. That means it must be generated by the dongle, inside the dongle, and that the dongle will never export its ultra-protected hardware key. Web server public/private key pairs have no such requirement. A web server just uses the underlying operating system's cryptographic system to synthesize a key pair. It holds onto the private key while the public key is placed into a CSR, a certificate signing request, which the CA signs and returns. But forcing the private key to never leave the hardware dongle very much complicates matters. The point here is that DigiCert issues these initialization codes against a customer's account, sends them to the customer, who then uses them with DigiCert's own hardware certificate installer app running on the client's machine with the hardware dongle plugged into it.
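The web-server contrast Steve draws, where software generates the key pair and only the public key travels to the CA inside a CSR, can be sketched in a few lines (using the widely available pyca/cryptography package; the hostname here is made up):

```python
# Sketch of the software key-pair + CSR flow: the private key lives in
# ordinary software on the server, and only the public key is sent to the CA.
from cryptography import x509
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import ec
from cryptography.x509.oid import NameOID

# The step a hardware token would instead perform internally, never
# exporting the private half.
key = ec.generate_private_key(ec.SECP256R1())

# The CSR carries the public key and subject, signed with the private key
# to prove possession; this is what gets submitted to the CA.
csr = (
    x509.CertificateSigningRequestBuilder()
    .subject_name(x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, "www.example.com")]))
    .sign(key, hashes.SHA256())
)

pem = csr.public_bytes(serialization.Encoding.PEM)
print(pem.decode().splitlines()[0])  # -----BEGIN CERTIFICATE REQUEST-----
```

The CA signs the public key from the CSR and returns a certificate; the server's private key never has to travel anywhere, but, unlike the dongle case, it does exist in exportable software form.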

Steve Gibson [01:52:20]:
The code, which is ingested by their app, which they provide, validates their right to have a certificate by communicating on the back end with DigiCert APIs and servers. Then it triggers the key pair generation on the hardware dongle. The hardware dongle does allow the public side, the public key, to be sent out, which the app then uploads to DigiCert. DigiCert then signs that public key and sends it back to the app, which then installs it back into the hardware. So a lot is going on that the user never sees. From the user's perspective, it's just kind of magic. You enter your code and you say, go baby, and a minute or two later it says, okay, you've got your key, you're all set to go. But, you know, the point is that the stateful nature of the operation creates some points of exploitability.
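The flow DigiCert describes, where the private key is born inside the token and the single-use initialization code acts as a bearer credential for an approved order, can be simulated in miniature. This is purely a toy (every class, name, and the fake key material are my inventions, not DigiCert's implementation), but it shows the two properties that matter: the private half never leaves the "token," and whoever presents the code first gets the certificate.

```python
# Toy simulation of the hardware-token issuance flow described above.
import secrets, hashlib

class HardwareToken:
    """Stand-in for a signing dongle: the private key never leaves this object."""
    def generate_keypair(self) -> str:
        self._private = secrets.token_hex(32)          # stays inside the "token"
        # Toy "public key" derived from the private half (real tokens do ECC).
        return hashlib.sha256(self._private.encode()).hexdigest()

class ToyCA:
    def __init__(self):
        self._orders = {}                              # init_code -> customer
    def approve_order(self, customer: str) -> str:
        code = secrets.token_urlsafe(16)               # the initialization code
        self._orders[code] = customer
        return code
    def redeem(self, init_code: str, public_key: str) -> dict:
        # Bearer credential: whoever presents the code gets the certificate,
        # and pop() makes it single use, just as DigiCert describes.
        customer = self._orders.pop(init_code)
        return {"subject": customer, "public_key": public_key}

ca = ToyCA()
code = ca.approve_order("Example Corp")                # order approved, code issued

token = HardwareToken()
cert = ca.redeem(code, token.generate_keypair())       # keys generated on the token
print(cert["subject"])                                 # Example Corp

try:                                                   # a second redemption fails
    ca.redeem(code, token.generate_keypair())
except KeyError:
    print("initialization code already consumed")
```

The exploitable window is visible here: between approve_order and redeem, anyone who can read the code can present it with their own token's public key, which is exactly what the attacker did.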

Steve Gibson [01:53:38]:
There exists a window, from the time an initialization code is issued to the time it is actually applied, where bad guys who are able to get it could install the certificate in their own hardware rather than the customer installing it in theirs. And these are big companies, right? Lenovo, for example, where, you know, they've got teams doing things, and they have initialization codes that have been issued but haven't been used yet. And so the bad guys took advantage of that window. So DigiCert states four of what they call contributing factors, things about the way their system is, and actually now was, that helped this to happen. Contributing factor one, they said: inconsistent or incomplete endpoint detection coverage.

Steve Gibson [01:54:39]:
Well, we know that, right? They said: Security tooling, CrowdStrike, was not uniformly configured by DigiCert across the user population exposed to the attack. The CrowdStrike prevention setting on Endpoint 1 was below the intended organizational standard at the time of the initial compromise, allowing the malicious payload to execute before blocking engaged. The CrowdStrike sensor on Endpoint 2 was not installed. As a result, no detection fired on the compromised machine. Logs for end entity machines are retained for three years. Since this machine was set up more than three years ago, our security team could not determine why this particular machine did not have an installed CrowdStrike sensor. They also said: The sensor not being installed was identified on April 14 during the expanded investigation triggered by the third party report.

Steve Gibson [01:55:46]:
You know, they thought they had it contained on the fourth. Ten days later, it's like, oh, crap, something big and bad has happened. They said: The original investigation on April 4 did not include a check of EDR, you know, endpoint detection and response, enrollment status for all exposed users. If the sensor had been installed on Endpoint 2, the infection on Endpoint 2 would likely have been detected and contained in the same time frame as the other targeted machine, Endpoint 1. This created the window during which the threat actor was able to access the portal function, and we're about to talk about that in the next contributing factor, and harvest initialization codes, which actually is the third contributing factor. Okay, so the second contributing factor: insufficient privilege minimization in the support portal function.

Steve Gibson [01:56:44]:
Again, taking full responsibility for how the bad guy was able to get up to as much as they were: insufficient privilege minimization. They wrote: DigiCert's internal support portal includes a function that allows authenticated support analysts to proxy into specific customer accounts to facilitate customer support. You know, like viewing it the way the customer does. I don't really understand what's going on, can you show me what I'm supposed to be seeing? And so the support guy says, okay, let me get onto your account, and goes, ah, I see what you mean. So they wrote: In this mode, certain functions are masked from the analyst. However, access to initialization codes for pending code signing certificate orders was not among the masked data elements.

Steve Gibson [01:57:37]:
And they're saying it should have been, which left those codes accessible to a support analyst operating in a proxied session. They said: The portal function had not been formally classified within DigiCert's privileged access management, PAM, framework. The definition of privileged access was primarily scoped to direct access to CA systems and did not encompass this indirect account management function that had a path to certificate issuance. As a result, the portal function was not subject to the PAM controls applicable to privileged users under the CA/Browser Forum network security requirements, including formal threat modeling against misuse scenarios, least privilege design review, and access recertification. In other words, they missed this, and they recognize that this did not have to be. They said: The portal function is a long-standing feature. On April 14th and 15th, following the discovery of the incident, we deployed a code change to mask initialization codes from proxied users on both our US and EU platforms, using either the UI or the API. The absence of this initialization code masking was identified during the investigation triggered by the third party report on April 14. And finally, interaction with other factors.

Steve Gibson [01:59:21]:
This second contributing factor defines the scope of the damage enabled by contributing factor number one, which was the lack of endpoint coverage. Without the EDR gap, the dwell time would have been minimal and the number of initialization codes accessible would have been limited. This factor also interacts with contributing factor three, as the absence of masking is a direct consequence of the codes not being recognized as requiring credential-tier protection. Which brings us to contributing factor number three: initialization codes not protected as bearer credentials, meaning the initialization codes were not considered to have sufficient need for protection. They wrote: The EV code signing certificate pickup workflow was designed with a threat model that assumed initialization codes would only be accessible to the validated subscriber, delivered via a secure channel, and entered by the subscriber into their local hardware certificate installer. The threat model did not account for the scenario in which initialization codes stored within DigiCert's internal support portal could be viewed by a compromised DigiCert analyst account operating through the portal function. Again, whoopsie.

Steve Gibson [02:00:53]:
Therefore, initialization codes were classified as intermediate workflow data rather than bearer credentials, which would have elevated their need for protection to a higher level. This initialization code workflow, they wrote, was designed at the time the EV code signing token issuance process was developed. The issue was identified during the investigation triggered by the third party report on April 14 and was not identified through any previous design review prior to this incident. So this just kept getting missed, because again, you can look at things but sometimes you just don't see them. And their point is that this mistaken definition meant it was never properly classified, and so it did not automatically fall within the proper security context and constraints. So everything we're seeing here is the work, and it should sound like it, of true forensic security professionals patiently working to understand, step by step, not only what happened, but why a supposedly carefully designed security system, one that was even designed from theoretical premises of what they wanted, allowed this to happen anyway. So the result is guidance about what can be changed to prevent successful exploitation at each stage.

Steve Gibson [02:02:31]:
And this brings us to the fourth and final contributing factor, which explains how the bad guys managed to infect DigiCert's two analysts in the first place. They wrote: overly permissive file transfer capability in customer-facing support channels. They said DigiCert's customer support chat channel and Salesforce case attachment workflow permitted delivery of inbound file attachments from external parties, including the general public, to CA support staff with insufficient restrictions on file type, automated sandboxing and content inspection. This created a direct delivery path for malicious executable content to personnel having privileged access. The support chat channel had not been adequately evaluated as a potential attack surface for malware delivery against certificate authority staff. The support chat channel and Salesforce case attachment integration were operational prior to this incident. The delivery vector was confirmed on April 4th and 5th, at which point additional malicious zips were identified across other Salesforce cases and removed. Which is significant, right? There were other infections that hadn't had a chance to take hold. Corrective controls, file type restrictions, and sandboxing evaluation are in progress, as described in their action items.
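The kind of file-type restriction and content inspection they say was missing can be sketched roughly like this. The policy values are invented for illustration, but the idea is the standard one: screen inbound attachments by extension and by magic bytes rather than trusting the filename, and look inside zip archives before anyone opens them.

```python
# Hypothetical sketch of inbound-attachment screening for a support
# channel: block executable content by extension AND by leading magic
# bytes, and recurse into zip archives. Policy values are invented.

import io
import zipfile

BLOCKED_EXTENSIONS = {".exe", ".dll", ".scr", ".js", ".vbs", ".ps1", ".msi"}
# PE executables start with "MZ"; ELF binaries with 0x7f "ELF".
BLOCKED_MAGIC = (b"MZ", b"\x7fELF")

def is_blocked(filename: str, data: bytes) -> bool:
    name = filename.lower()
    if any(name.endswith(ext) for ext in BLOCKED_EXTENSIONS):
        return True
    if data[:4].startswith(BLOCKED_MAGIC):
        return True
    if name.endswith(".zip"):
        try:
            with zipfile.ZipFile(io.BytesIO(data)) as zf:
                for member in zf.namelist():
                    if is_blocked(member, zf.read(member)):
                        return True
        except zipfile.BadZipFile:
            return True  # malformed archive: reject rather than guess
    return False

print(is_blocked("screenshot.png", b"\x89PNG"))      # -> False (benign image)
print(is_blocked("invoice.pdf.exe", b"MZ\x90\x00"))  # -> True (disguised executable)
```

A real deployment would add sandboxed detonation and size limits, but even this much would have stopped a zip carrying an executable from reaching an analyst's desktop unflagged.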

Steve Gibson [02:04:17]:
The number of channels by which customers can reach support staff has grown. The file controls on the chat channel were believed to be sufficient prior to this incident. This factor is the initial attack vector that enabled all subsequent factors to come into play. And this brings us to the lessons learned conclusion, you know, with what went well, what did not go well, and where they got lucky. So they explained, under what did go well: rapid initial containment on endpoint machines where EDR was working as intended. For these machines, DigiCert's Trust Operations team completed containment, process termination, registry cleanup and artifact deletion within hours of detection on April 3. Also under what went well: proactive identification of the full delivery chain.

Steve Gibson [02:05:23]:
They were able forensically to catch that. The investigation identified the Salesforce case attachment auto-conversion mechanism as the delivery path and located additional malicious zip files across other cases before they could be opened, preventing further compromises from the same campaign. When they talked about this earlier, they did not throw Salesforce under the bus, but they talked about how additional points of entry have been added. Apparently it was the incorporation of the Salesforce services that allowed this stuff to get in that way, and it sort of snuck by, they said. Also under what went well: same-day remediation of contributing vulnerabilities during incident response. Critical fixes were implemented without deferral: CrowdStrike prevention settings corrected on April 4th, Okta FastPass disabled and multi-factor authentication tightened on April 14th, initialization code masking deployed across US and EU environments on the 14th and 15th, and confident attribution of the second compromise through forensic analysis. They did figure out exactly what happened with that Endpoint 2 machine.

Steve Gibson [02:06:47]:
They said linking the compromised machines' activity logs to the same threat actor required analysis across identity events, endpoint telemetry and support workflow artifacts. So if all that went well, what did not? File type controls turned out to be insufficient on customer-facing support channels. Inconsistent and incomplete EDR coverage created a blind spot that was unknown prior to the incident and that directly enabled the attacker's dwell time, you know, giving them 10 days. Initialization codes were not protected as bearer credentials, as they should have been. The portal function exposed initialization codes that are functionally equivalent to the certificates they enable, because they were classified as intermediate workflow data rather than credential material requiring masking and credential-tier protections. Also under what did not go well: device-bound authentication. Okta FastPass was used as a multi-factor authentication (MFA) bypass. FastPass allowed a threat actor operating on a compromised device to inherit the device's authenticated session and satisfy multi-factor authentication requirements without a genuine second factor, enabling access to the portal initialization codes after the initial endpoint compromise. And finally, following that Endpoint 1 containment on April 3rd, the investigation concluded, obviously incorrectly, that compromise attempts had been neutralized, without validating all endpoints exposed through the same delivery vector. So somebody a little too quickly said, okay, we're done here.

Steve Gibson [02:08:46]:
In retrospect, of course, I imagine that they now wish that after that first attack, which was thwarted by CrowdStrike's endpoint defense after a brief window, they had checked for other instances of that malicious zip file across the rest of their network. We now know that they did that later and did find a handful of other instances of it. So it's sort of unclear why that did not happen at the time, and I imagine somebody's asking somebody. So they finally share two points in conclusion where they felt they got lucky. They said a community member involved in security research reported the evolving pattern of misused certificates and engaged in dialogue with our support team. Without that report, the undetected compromise of N2 and the associated mis-issuance might have remained undiscovered for a longer period. Nice of them to admit that. Also, they got lucky where, they said, our investigation indicates that the threat actor's activity was focused on gaining access to code signing certificates; a differently motivated threat actor might have attempted to use the compromised account for broader actions.

Steve Gibson [02:10:07]:
Several of our action items are designed to address this risk. And then they conclude with a final three points. First, this incident demonstrates that internal support tooling with indirect paths to certificate issuance must be subject to the same security scrutiny as their certificate-issuing infrastructure; tools that were designed for legitimate operational purposes can become high-value attack targets. Second, the incident also illustrates the importance of defining privileged access broadly enough to encompass any system or function with a path to certificate issuance, not just those with direct access to HSMs (hardware security modules) or signing infrastructure. And finally, the dwell time underscores the importance of comprehensive post-incident investigation scope, meaning don't quit your investigation prematurely, and continuous EDR coverage monitoring, like, make sure all your endpoints are actually being monitored. A single missed endpoint can negate the value of rapid containment elsewhere. So, you know, they go on to actually enumerate 21 individual action items, many of which they already articulated or implied.

Steve Gibson [02:11:37]:
So I'm not going to take us, thankfully, through each of those. Suffice to say that there's little doubt that DigiCert was extremely unhappy and embarrassed by this event. Right? Everything any certificate authority is about, plus a huge amount of security-focused design and third party contractor support from the likes of CrowdStrike and several others, all of this was intended to prevent anything like this from ever happening. But it did anyway. But at no point did we see what we've previously observed from so many other certificate authorities who have been struck by similar abuse, or even from Microsoft, who, you know, has nothing to lose; no one's going to leave Windows. Almost invariably, those other guys, these, you know, certificate authorities in denial, first hoped that nobody would notice.

Steve Gibson [02:12:41]:
Then when someone did, they would downplay the severity of the incident, hoping that that would be it. Then when additional evidence of further exploitation came to light, oh, they'd apologize with some lame excuse about having intended to mention that too. Uh-huh. Yeah, right. What we have from DigiCert, the industry's largest commercial certificate authority, second only to Let's Encrypt, which, you know, is free, is full public disclosure and responsibility-taking. The result has been a deep analysis followed by true action at multiple levels and stages, not just, like, closing the front door, but all the hallways in between, you know, to prevent anything like this from transpiring again. And as usual, I think that's more to recommend them than not.

Steve Gibson [02:13:37]:
If their code signing certificates were not so unreasonably expensive, I would not have invested so much time earlier this year preparing to jump ship and find another supplier. I'm glad I'll be moving to ident, but I'm only doing so because I object on principle to, you know, unconscionable costs for certificates, not the quality of their work. But as for Microsoft, given what we now know, it is unbelievably difficult to understand, to explain away, to excuse how Microsoft could have possibly fumbled their end of this so thoroughly. There would presumably have been some dialogue between Microsoft and DigiCert where DigiCert provided Microsoft with the thumbprints or serial numbers of the 60 certificates they had revoked and blacklisted, so that Microsoft could do the same with Windows Defender. You know, as we know, revocation is an imperfect answer with certificate management, but it's all we've got. So Microsoft would have definitely wanted to add those 60 certificates to Windows Defender's existing code signing deny list so that nothing signed by them would have been allowed to run on Windows. But that just entails checking thumbprints against Defender's deny list. How Microsoft could possibly have fumbled this into the removal of DigiCert's root certificates, for all certificates ever issued in the world, ever, is just impossible to understand.
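The thumbprint check really is that simple. A certificate thumbprint is just a hash over the certificate's raw DER bytes (Windows traditionally displays SHA-1 thumbprints; SHA-256 is used in this sketch), so blocking 60 specific revoked certificates amounts to comparing hashes against a set. The certificate bytes and deny list below are invented placeholders, not real certificates.

```python
# Hypothetical sketch of a code-signing deny-list check. A certificate
# "thumbprint" is simply a hash of the certificate's DER encoding, so
# blocking specific revoked certificates requires only hash comparisons;
# no root certificate ever needs to be touched.

import hashlib

def thumbprint(cert_der: bytes) -> str:
    """SHA-256 thumbprint of a DER-encoded certificate, as lowercase hex."""
    return hashlib.sha256(cert_der).hexdigest()

# Deny list of the specific revoked certificates (placeholder bytes):
DENY_LIST = {
    thumbprint(b"revoked-cert-1-der-bytes"),
    thumbprint(b"revoked-cert-2-der-bytes"),
}

def is_denied(cert_der: bytes) -> bool:
    return thumbprint(cert_der) in DENY_LIST

print(is_denied(b"revoked-cert-1-der-bytes"))   # -> True
print(is_denied(b"some-other-cert-der-bytes"))  # -> False
```

Which is the point Steve is making: a deny-list entry targets exactly one certificate, while pulling a root distrusts every certificate the CA has ever issued beneath it.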

Steve Gibson [02:15:40]:
It feels very much as though someone in control of that important process doesn't know what they're doing, which is horrifying. I hope AI didn't do it anyway. Fortunately that too was fixable and it was quickly fixed. So we go on Leo to the next adventure.

Leo Laporte [02:16:02]:
I love the way Microsoft kind of phrased this; it sounds like they're blaming Defender. They say, in response to reports of compromised certificates, Microsoft Defender was a bit overzealous. Well, it certainly acted poorly, but I think somebody told it to do so. So this is interesting. So they were initially compromised through customer support chat, files uploaded through chat.

Steve Gibson [02:16:31]:
Maybe chat, or it sounds like maybe they've added some Salesforce software-as-a-service thing. Right, because there was a mention of Salesforce, and that may be the chat. Exactly, it might be Salesforce Chat. And so it did get in that way. So somebody said to the support guy, hey, I don't understand what I'm seeing here. Let me send you a screenshot.

Leo Laporte [02:16:57]:
Yeah, here's. Yeah. Or here's a zip file of something. Yeah. And then the CSR, who had escalated privileges, which is a problem, unzipped it. It attacked. They responded very quickly. But this shows.

Leo Laporte [02:17:10]:
This is the issue with certificates, unlike, say, a ransomware attack, although that can happen pretty quickly: if I have 10 minutes with DigiCert and I can get their root certificates, I'm golden. I don't need more time than that.

Steve Gibson [02:17:24]:
Yeah.

Leo Laporte [02:17:25]:
Wow.

Steve Gibson [02:17:26]:
And I mean, one of the things that I did think was that we clearly have a system; we have so many security firms that are looking for malware. When they talked about, you know, an industry partner or a third party, it was some security firm, who they didn't identify, that called up and said, hey, we're seeing some malware that was signed an hour ago by Lenovo, and it's your cert. So what do you think about that? And so, I mean, you know, there are a lot of closed loops here.

Steve Gibson [02:18:03]:
It's good that they're closed and that they're looping.

Leo Laporte [02:18:07]:
There's really a huge kind of web of threat analysis.

Steve Gibson [02:18:12]:
It's a big ecosystem.

Leo Laporte [02:18:13]:
It's a really big ecosystem, and they know that they have to work hand in hand; they have to work together. So in some ways, you know, that's been the response to all of these attacks: a much improved, I think, early warning system. Which, you know.

Steve Gibson [02:18:28]:
Here I am trying to publish my little software and being blackballed by Windows Defender.

Steve Gibson [02:18:34]:
You'd think that with this system working as well as it is, they could wait until actually seeing malware and then, you know, call DigiCert to report me. But no, Defender says: I don't know about this.

Leo Laporte [02:18:52]:
They removed the entire brain instead of just the tumor. They just said, take the whole brain out. You don't need that, do you?

Steve Gibson [02:19:00]:
Yeah, we decided we're not going to trust anything DigiCert has ever signed. I mean, it's unbelievable.

Leo Laporte [02:19:05]:
Yeah, that's a very bad response. It's almost, you know, it feels like a panic response. Like they were so freaked out that they overreacted. And it wasn't Defender doing this. It was some human or maybe some AI. I don't know.

Steve Gibson [02:19:20]:
I don't think so. It sounds like somebody really had to go in and remove a root; you had to know what you were removing. At no point did the response to DigiCert require removing anything. It meant checking thumbprints: does this thumbprint match? That's all. So how it got extended, I mean, again, I do actually wonder. I had the thought before I was recording this with you, Leo: could this have been AI that hallucinated a root cert removal? Maybe that's what we're now going to be dealing with.

Leo Laporte [02:19:59]:
If so, it won't be the last case, because.

Steve Gibson [02:20:00]:
Get your seat belt.

Leo Laporte [02:20:02]:
Yeah.

Steve Gibson [02:20:03]:
Buckle up baby. Yeah.

Leo Laporte [02:20:05]:
Well, what a good story though. And I'm glad DigiCert did the right thing.

Steve Gibson [02:20:09]:
Oh, again, I just, you know, a hat tip to those guys. The fact that they ruthlessly not only, you know, investigated, but then self-reported. Yeah. I mean, no one could ever take issue with the way they're behaving. I'm sure that was their goal too. Right? I mean, they don't want anyone to have any doubts about them. And who would, after this? I would trust them more now.

Leo Laporte [02:20:35]:
Right. Good stuff. Thank you, Steve. Steve Gibson does this every Tuesday, as you probably know; I'm sure you make this a regular stop on your podcast list. You can catch the show live if you want to get it right away. We stream it live as we're doing it, Tuesdays right after MacBreak Weekly. That's 1:30 Pacific, 4:30 Eastern, 20:30 UTC.

Leo Laporte [02:20:56]:
The live streams are of course in the Club Twit Discord, but also YouTube, Twitch, X, Facebook, LinkedIn and Kick. So watch wherever you want. If you aren't around on a Tuesday morning, or afternoon or evening, depending on where you are, it's a podcast; you can always get copies of the show. Steve's got actually completely unique versions of it. So it depends on what version you want.

Leo Laporte [02:21:20]:
It's the same show, but he's got the 16 kilobit audio version, admittedly not the highest fidelity, but very small, and 64 kilobit, which is full fidelity. He's got the show notes, which are amazing, and I think a lot of people like to read along as they're listening. It's also a great reference to have; he's very complete in the show notes, 22 pages today. You'll get those either via email or directly from his website, GRC.com. He also has transcripts; they take a couple of days because they're written by a human, so they go up a couple of days after the show.

Leo Laporte [02:21:56]:
So that's a good way to search, or again, read along as you listen. All of that at GRC.com. If you do want to get the emails, go to GRC.com/email. The initial point of that page was to whitelist your email so you can send Steve questions, comments, suggestions, picks for the picture of the week, that kind of thing. So there's an email form, and he'll validate your email. But below it, since you're providing the email, there are also two checkboxes: one for the weekly newsletter that contains all of the show notes, the other for a very infrequent email when he's got a new product, like, of course, the world's best mass storage maintenance and recovery utility, SpinRite, currently 6.1, and his DNS Benchmark Pro, which just came out a few months ago; that is $9.99 and very much worth it. You know, frequently we say, oh, the Internet's slow today, and it's not really the Internet that's slow, it's your DNS server that's slow.

Steve Gibson [02:22:58]:
Something's going on with Quad9, by the way. If anybody's using Quad9, I've noticed its cached response is very fast, but when the Quad9 server doesn't have something in its cache, it needs to go out, and it's like taking half a second for it to get a response back. So.
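The cached-versus-uncached difference Steve describes can be seen by hand with nothing but the standard library: build a minimal DNS query, send it over UDP to the resolver, and time the round trip, then repeat the same name so the resolver answers from cache. This is a rough sketch, not a substitute for the Benchmark, and the example server and hostname are just illustrations.

```python
# Rough sketch: time a resolver's response with a raw DNS query over
# UDP, using only the standard library. Querying the same name twice
# shows the cached-vs-uncached latency difference.

import random
import socket
import struct
import time

def build_query(name: str) -> bytes:
    """Build a minimal DNS query packet for an A record."""
    header = struct.pack(">HHHHHH",
                         random.randint(0, 0xFFFF),  # transaction ID
                         0x0100,  # flags: standard query, recursion desired
                         1, 0, 0, 0)  # 1 question, no answer/authority/additional
    # QNAME: length-prefixed labels, terminated by a zero byte.
    question = b"".join(
        bytes([len(label)]) + label.encode("ascii")
        for label in name.split(".")
    ) + b"\x00"
    question += struct.pack(">HH", 1, 1)  # QTYPE=A, QCLASS=IN
    return header + question

def time_query(server: str, name: str, timeout: float = 2.0) -> float:
    """Return round-trip time in milliseconds for one query."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.settimeout(timeout)
        start = time.perf_counter()
        s.sendto(build_query(name), (server, 53))
        s.recvfrom(512)
        return (time.perf_counter() - start) * 1000.0

# Example (requires network access):
#   print(time_query("9.9.9.9", "grc.com"), "ms, possibly uncached")
#   print(time_query("9.9.9.9", "grc.com"), "ms, presumably cached")
```

If the first number is several hundred milliseconds and the second is a few, you are seeing the resolver's upstream lookup cost, which is exactly what DNS Benchmark Pro measures systematically across many resolvers.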

Leo Laporte [02:23:17]:
And that's something DNS Benchmark Pro would let you know.

Steve Gibson [02:23:19]:
That's how I found out about it.

Leo Laporte [02:23:20]:
Yeah. It would also give you good alternatives, right, ones that are faster. So GRC.com, the Gibson Research Corporation. You can go to our website, twit.tv/sn, for the 128 kilobit audio, which is, to be honest, not any better than the 64 kilobit audio, but we need to do it because Apple downsamples and they need a high quality file to downsample; it's a long story. We also have video, which Steve does not have, and that is at twit.tv/sn.

Leo Laporte [02:23:50]:
There's also a YouTube channel for the video, a great way to share clips. I think a lot of times people want little clips of pieces of this to send to the boss or your IT team or friends, whatever; that's the easiest way to do it. Everybody can watch YouTube. But the easiest way to make sure you don't miss an episode is to subscribe in your favorite podcast client. That way you'll get it automatically as soon as we've polished it all up, and, you know, we're working on that this afternoon. Steve, thank you so much.

Leo Laporte [02:24:17]:
Great show. And we'll see you next week on

Steve Gibson [02:24:19]:
Security Now, episode 1079, coming up.

Leo Laporte [02:24:26]:
Security Now.
