Transcripts

Security Now 1042 transcript

Please be advised this transcript is AI-generated and may not be word for word. Time codes refer to the approximate times in the ad-supported version of the show.

 

Leo Laporte [00:00:00]:
It's time for Security Now. Steve Gibson is here. He has a lot to talk about. Why did Byte magazine go out of business? We think we might know. A certificate authority has completely destroyed security around 1.1.1.1, and why this happened; Steve's got a good theory on this. Are artists being blackmailed over using their art to train AI? And then finally, we're going to talk about using private companies to attack our nation's enemies. Is that a good idea? Cyber warfare on the agenda.

Leo Laporte [00:00:39]:
All of that and more coming up next on Security Now. Podcasts you love from people you trust. This is TWiT. This is Security Now with Steve Gibson, episode 1042, recorded Tuesday, September 9, 2025: Letters of Marque. It's time for... yes, you waited all day for it, all week.

Leo Laporte [00:01:09]:
Security Now, the show where we cover your security, your privacy, your safety online. And this is the man in charge, Mr. Steve Gibson. Hi, Steve.

Steve Gibson [00:01:19]:
Hello, Leo. Great to be with you again. As we are plowing through September and. Oh, I forgot to update the date on the top of the show.

Leo Laporte [00:01:30]:
No, it's the ninth, my friend.

Steve Gibson [00:01:32]:
Right. 09 09 25. But it's... so just, just ignore that; the number is correct. I got that. It's episode 1042, titled Letters of Marque, spelled M, A, R, Q, U, E, which comes from some interesting evolution that we're going to talk about, which is... well, it's controversial.

Steve Gibson [00:01:58]:
I think, Leo, you and I are going to have a lot of fun talking about this at the end of the podcast.

Leo Laporte [00:02:02]:
I know what Letters of Marque are because I read the Patrick O'Brian Aubrey-Maturin series about a British man of war in the Napoleonic wars. And it's a wonderful book series. I don't know if you're familiar with it. It's kind of like Horatio Hornblower. These are two buddies. He's the captain, Aubrey, and Maturin is his doctor. And.

Leo Laporte [00:02:28]:
Oh, my God, it's just a wonderful novel. And one of them is actually called... it's 21 novels, I've read them all... one of them is actually called The Letter of Marque.

Steve Gibson [00:02:38]:
Ah, well, in this case, we're going to explore the idea of our government permitting private corporations to go on the offense.

Leo Laporte [00:02:52]:
Yes. Piracy in the days of Aubrey and Maturin, and it was piracy, but.

Steve Gibson [00:02:56]:
Yeah, right. Privateers, as they were called.

Leo Laporte [00:02:59]:
Yeah.

Steve Gibson [00:02:59]:
Because they were private enterprises who were also then, you know, being allowed to arm their ships. Anyway, we're gonna... I want to talk about, sort of, an update on my experience with X versus email, since we now have a lot of baseline results from that. Google, their TIG group, their threat information group, or threat something group, is being blackmailed to fire two security researchers.

Leo Laporte [00:03:33]:
Oh please.

Steve Gibson [00:03:33]:
By a consortium. Get this. A consortium of the well known malicious actors that we've identified in the past.

Leo Laporte [00:03:42]:
And what are they going to do to Google?

Steve Gibson [00:03:45]:
Well, they say they've got data that they're going to release. Also, the well known 1.1.1.1 DNS TLS certificate was misissued. I have a theory as to why, which I have not read in any of the coverage. So maybe something new here. Artists are being blackmailed with threats of training AI on their art, because that would be bad.

Steve Gibson [00:04:14]:
Firefox announced the extension of its Windows 7 support. Is the renewal of cybersecurity info sharing coming soon? That was where, well, private industry wanted to be able to share its events with the government, but there was a concern, because this 10 year long agreement is expiring at the end of the month. What's happening with that? Trend Micro looks at whether security analyses should be censored now, due to the emergence of vibe coding and how that shifts the difficulty of re-implementing. Yeah, I know. Also, it turns out that maybe UK versus Apple's not settled after all, despite Tulsi Gabbard's tweet. Also... oh boy, that's.

Leo Laporte [00:05:05]:
That's what we've come to by the way, in the world. I know Tulsi Gabbard's tweet.

Steve Gibson [00:05:10]:
Holy cow.

Leo Laporte [00:05:11]:
Holy cow.

Steve Gibson [00:05:13]:
And it'd be nice to actually have some information about this, but unfortunately all of this is under, you know, mutual gag orders, so we're just all left to guess. It's like, well, all you can do is look at their actions and then infer what that presumably means. So anyway, we have another very serious supply chain attack that causes us to look again at the problems, the endemic problems, we have with our current open source, repository based, trust everybody supply chain.

Leo Laporte [00:05:46]:
I know which one you're going to talk about. When I saw the number I was like oh this is bad.

Steve Gibson [00:05:52]:
There was an initial announcement and then there was a follow up, so it'll be interesting to see which of those two you saw. We're going to wonder whether the whole system of our current approach can ever be trustworthy. Also, we got some brief but interesting editorial from Byte's editor about why exactly Byte magazine died. I'm going to mostly just give people a link to that, because he has a very interesting FAQ. And then we're going to look at what happens if Google and others go on the attack, because Google announced, just a little bit before last week, during a conference about this topic, that they were going to be creating a division for that.

Leo Laporte [00:06:38]:
Oh, and no. Oh yes.

Steve Gibson [00:06:41]:
I don't know.

Leo Laporte [00:06:41]:
Yeah, I don't like that at all.

Steve Gibson [00:06:42]:
Ah, it's. We're gonna have an interesting time talking about that Leo at the end of the podcast, but not until we look at one of the best pictures of the week we have had in a long time. No. Which you have. You've said you have not looked at yet. So our audience will get the opportunity to hear you. This is one of those you will get instantaneously and oh, it's a goodie.

Leo Laporte [00:07:05]:
Well, we will look at that together in just a moment. But first, a word from our sponsors. We get underway with Security Now on this Tuesday afternoon or evening, depending on where you are. Our show today is brought to you by US Cloud, the number one Microsoft Unified support replacement. We've been talking for a few months now about US Cloud. They're a global leader, probably the global leader, in third party Microsoft support for enterprises. I mean, they now support 50 of the Fortune 500. Switching to US Cloud can save your business 30 to 50% over Microsoft Unified and Premier support.

Leo Laporte [00:07:45]:
But it's not just less expensive, it's faster. Two times faster on average time to resolution versus Microsoft; twice as fast. Well, now US Cloud is excited to tell you about another reason you might prefer them over Microsoft. This is something I don't think Microsoft would ever do: US Cloud's Azure cost optimization services. It is not in Microsoft's interest to optimize your Azure costs. I mean, after all, Azure has that tendency, it's so useful, to kind of grow like Topsy.

Leo Laporte [00:08:17]:
When was the last time you evaluated your Azure usage? If it's been a while, you probably have some Azure sprawl, right? Some spend creep going on. It happens. Well, here's the good news. Saving on Azure is easier than you might imagine with US Cloud. US Cloud is offering an eight week Azure engagement. It's powered by VBox. It identifies key opportunities to reduce costs across your entire Azure environment. And of course, as always with US Cloud, you're going to get expert guidance from US Cloud's senior engineers with an average of over 16 years with Microsoft products.

Leo Laporte [00:08:53]:
Another reason US Cloud's the best: at the end of this eight week engagement, you're going to get an interactive dashboard that'll identify rebuild and downscale opportunities and unused resources, allowing you to reallocate those precious IT dollars towards needed resources. And you know, there's always something you need. But if I may make a suggestion, you could keep the savings going if you just take those savings and put them into US Cloud's Microsoft support. That's what a few other US Cloud customers do. And then you completely eliminate your Unified spend. Ask Sam. He's a technical operations manager at Bead Gaming. He gave US Cloud 5 stars, saying, and this is a direct quote, we found some things that had been running for three years which no one was checking.

Leo Laporte [00:09:41]:
These VMs were, I don't know, 10 grand a month. Not a massive chunk in the grand scheme of how much we spent on Azure. But once we got to 40 or $50,000 a month, it really started to add up, end quote. It does. It adds up, right? It's easy to get there. Don't feel, don't feel bad. Everybody's in this position, but you don't have to be. Stop overpaying for Azure.

Leo Laporte [00:10:07]:
Identify and eliminate Azure creep and boost your performance, all in eight weeks. That's all it takes with US Cloud. Visit uscloud.com, book a call today, find out how much your team can save. That's uscloud.com to book a call today and get faster Microsoft support for less: uscloud.com. All right, Mr. G. So before.

Steve Gibson [00:10:33]:
You ready, I need to tell you that I gave this picture the title. Have you ever wondered whatever became of Microsoft's Clippy?

Leo Laporte [00:10:43]:
All right.

Steve Gibson [00:10:44]:
Have you ever wondered whatever became of Microsoft's Clippy?

Leo Laporte [00:10:47]:
Scroll up. We have the answer. I see. I see.

Steve Gibson [00:10:50]:
Oh.

Leo Laporte [00:10:54]:
He's. He's. He's hard at work, isn't he? Holy cow. Wow. That was. That's. That is pareidolia in an unusual form.

Steve Gibson [00:11:04]:
Isn't that great? I mean, it looks so much like Microsoft's Clippy. Anyone who has ever been assaulted by him on... what was that, Office? I think it was.

Leo Laporte [00:11:15]:
Well, I think it was all of Windows or. Oh, okay. You will have PTSD for sure if you. But the funny thing is this is a toilet paper holder.

Steve Gibson [00:11:26]:
Yes. Somebody in the bathroom... yes, somebody took what looks like a steel rod and bent it in a very clever shape to be able to have a toilet paper roll perch on its extension, sort of an extrusion, and then it is secured to the wall by two Phillips head screws through the rod, which really do look like eyeballs, bringing one back to Clippy.

Leo Laporte [00:12:00]:
So what you hope though Dr. Do said this is. It's not saying, hey, I see you're trying to. Oh no, oh, never mind. By the way, the toilet paper has run out, which is pretty much the way it worked with Clippy as well.

Steve Gibson [00:12:15]:
Yeah, that's right. That's right. Wow.

Leo Laporte [00:12:17]:
So that is a good one. You're right.

Steve Gibson [00:12:19]:
Just got a kick out of that. So just a heads up that between last week and this week, my blue check mark on X was taken away.

Leo Laporte [00:12:31]:
Oh, now you didn't pay for it in the first place, right?

Steve Gibson [00:12:35]:
I think I did a year ago when I had the choice. And so I presume that my yearly premium subscription just expired.

Leo Laporte [00:12:44]:
Right.

Steve Gibson [00:12:44]:
And I'm pleased that I was not automatically charged for another year without my knowledge or permission.

Leo Laporte [00:12:51]:
See, I have a free one that I don't really want because at some point Elon went through the list of people with lots of followers and decided some people deserve a blue check whether they pay for it or not. Which I kind of like because it gives me access to Grok Pro and all of that.

Steve Gibson [00:13:10]:
Yeah. And I remember a year ago thinking, oh, what the heck, you know? But the problem is we're not a year ago anymore. Since its inception, GRC's email system has proven to be a total success. Yes. And at the time it was brand new. Last Monday afternoon, which is to say yesterday, the email system sent out 18,465 podcast summary emails, each one containing the short bullet pointed episode summary, the picture of the week, and a link to the complete show notes. All 18,465 pieces of email were delivered, with none bouncing for any reason other than an out-of-office auto-reply, a full mailbox, or some arguably unavoidable reason. And so sending weekly mass mailings is now free and effortless; it costs me nothing.

Steve Gibson [00:14:10]:
And it's, I mean, it's just, literally, I just press a couple buttons, because I did take the time to automate the whole process. Also, between our last podcast and Sunday afternoon two days ago, when I'm writing this, so over the past week, I've received 97 pieces of terrific feedback mail from our listeners. Now, by comparison, while I do still receive some wonderful Picture of the Week ideas from a couple of longtime associates over on X, and that's where this week's wonderful picture came from, I no longer receive any feedback of much other value through X. It only comes through email. And so that's a huge change from 10 years ago, when all of our podcast feedback was coming to me through X, through DMs, because, as you used to tell everybody, I have open DMs; anybody can send me a DM even though I'm not following them. Because I don't follow anybody. I just use X as a broadcast medium.

Steve Gibson [00:15:23]:
So today email makes far more sense, you know. And I see when @SGgrc is mentioned, it appears in my timeline, but they look like sort of generic conversations involving other people who just sort of mention me in a list of people that they're wanting to mention. So they're not of much value either. So, you know, I will continue to post the weekly show notes to X as I have been, you know, ever since the world called it Twitter. And that was 15 years ago, when I registered @SGgrc back in 2010. But I don't feel like paying X $7 per month for the privilege of having a blue check mark, because basically that's all it means. You know, I'm able to get messages out to our audience. Anybody who wants to is able to sign up.

Steve Gibson [00:16:24]:
And, you know, you're nice enough to remind everybody, Leo, every week that they can go to GRC.com/mail in order to sign up. So, anyway. And the other thing, too, I guess I'm old school, actually. I know I am. But I was never able to feel completely comfortable receiving or replying to someone who chose the moniker Stinky Bits, you know, over on X. That's just... I don't know, that's just not me. So anyway, while I was writing this, actually, while I was writing this, I received a piece of feedback through email with the subject line, An Apology (he says, although it's kind of your fault) and a Thank You.

Steve Gibson [00:17:08]:
This was our listener, John Wayward, who wrote Steve, I have to apologize for missing the last three episodes of Security Now. Well, you know, he doesn't have to apologize to me. I. I didn't. I wasn't hurt by him missing them. Right. But he said, however, it's kind of your fault. I typically listen on long car journeys each week between London and Plymouth.

Steve Gibson [00:17:30]:
However, after your repeated recommendations of Project Hail Mary, I downloaded the audiobook and have been absolutely addicted to it. I have listened to nothing else. What a book. Thank you so very much for the recommendation. Just... wow. I'm now going to go back and listen to The Martian, albeit in slower time.

Steve Gibson [00:17:54]:
With Security Now mixed in. Well, we'll see about that. He might get similarly addicted. He said: I don't know how the movie can possibly match up to the sciencing-the-crap-out-of-it detail of the book, but I'm excited for it nonetheless. All the very best. I now have three episodes to catch up on. John. So anyway, you know, as I said, in years past I would receive feedback like that through X, but I suspect that those listeners have moved, as I and John have, to email.

Steve Gibson [00:18:28]:
So anyway, I just wanted to give everybody an update. My presence at X is nominal, and email is really the place to have an interaction with me these days. It doesn't cost you anything. And, you know, if you tell me you want to be anonymous, you've heard me anonymize our listeners who don't want to be identified, that's fine. I'm happy to honor that. And it's just, you know, a smoother medium. And I don't often mention the fact that I'm also responding to a lot of the feedback that I get, just quickly, back through email, thanking people or acknowledging that I've read their note and so forth.

Steve Gibson [00:19:08]:
So anyway, thank you all for the connection through email. It really, I think, helps the podcast to be a lot more of a live thing than it would otherwise be. Okay, so here's a weird one. Cybersecurity News reports that Google is effectively being extorted to terminate the employment of two of its employees. And the more you learn about this, the weirder it gets. Here's what is being reported by Cybersecurity News. They wrote: a group claiming to be a coalition of hacking groups has issued an ultimatum to Google, threatening to release the company's databases unless two of its employees are terminated. The demand, which appeared in a Telegram post, named Austin Larsen and Charles Carmakal, both members of Google's Threat Intelligence Group, TIG.

Steve Gibson [00:20:07]:
In addition to that, according to a post seen by Newsweek, the self proclaimed hacking collective, which calls itself, get this, Scattered Lapsus Hunters, which is a conglomeration of their names, right, also insisted that Google suspend all investigations by its Threat Intelligence Group into the network's activities.

Leo Laporte [00:20:32]:
I'd give those two guys a raise. I think this is a testimony to their effectiveness.

Steve Gibson [00:20:38]:
Yeah. So this group apparently feels that they've got the goods on Google. So they're saying: terminate these guys, and you must stop investigating us. So the group's name, they write, is an apparent reference to its composition, which it claims includes members from established hacking communities such as Scattered Spider, Lapsus and Shiny Hunters, thus the hybrid name Scattered Lapsus Hunters. So far, the group has not provided any evidence to substantiate its claim of having accessed Google's databases. Furthermore, there have been no recent confirmed breaches of Google's internal information systems. This threat emerges, however, in the wake of a separate incident disclosed by Google.

Steve Gibson [00:21:31]:
In August, the company confirmed that Shiny Hunters, remember, they're the big phishing group that are seeing so much success through phishing, Shiny Hunters, one of the groups allegedly part of the new coalition, had successfully obtained data from Salesforce. Salesforce, being a third party vendor, provides various services to Google, and the breach occurred within the vendor's systems, not Google's own infrastructure. The formation of a supergroup such as Scattered Lapsus Hunters, they write, would represent a significant escalation in the cyber threat landscape. Scattered Spider is known for its sophisticated social engineering tactics, while Lapsus gained notoriety for its aggressive, high profile attacks on major tech companies. And Shiny Hunters has a long history of large scale data breaches and selling stolen information on the dark web. The potential collaboration of these entities could pose a formidable challenge to even the most well defended corporations. So, you know, pooling skilled resources is not something that we would like to have happen.

Leo Laporte [00:22:47]:
But this Salesforce thing is just out of control.

Steve Gibson [00:22:52]:
It just never stops. It is a mess. Yeah. Newsweek reportedly reached out to Google for a statement regarding the alleged threats, but a response was not immediately received, as the request was made outside of standard business hours. And this news just broke. The situation remains under observation as the tech community awaits Google's official response and further developments. So.

Leo Laporte [00:23:16]:
Well, they're not going to fire somebody because that is ridiculous.

Steve Gibson [00:23:21]:
Oh, it is. You're absolutely right. There's no way that they're going to put this group of malicious criminals in charge of their human resources branch.

Leo Laporte [00:23:31]:
In fact, it's an endorsement for those guys. I mean, if these guys want Google to get rid of them, they must be good, they must.

Steve Gibson [00:23:39]:
Be doing something right. Yeah, exactly. So anyway, we're just going to need to wait and watch. And as you said, Leo, I cannot imagine that Google could possibly capitulate in any way to this group or any group, regardless of what the group might have obtained that Google, you know, might well wish to remain private. You know, if anything, the proper response would be for them to turn up the heat on the various members of the group to cause them to regret ever floating the threat.

Leo Laporte [00:24:08]:
There you go.

Steve Gibson [00:24:09]:
Don't threaten us, or else. Yeah. Okay. So, certificate authorities: our trust in their proper actions and the chains of trust they anchor is crucial to so much of the operation of the Internet that this podcast has spent a great deal of time through its two decades of reporting examining the protocols, the technologies and the operation of every aspect of the certificate trust system. You know, we followed my fascination with the challenge of certificate revocation. It turns out that once a certificate has been issued, it's surprisingly difficult to unissue it, due to the way the entire system functions. Over the past 20 years, we followed the industry flip flopping back and forth as it tries one thing after another. It abandoned the original certificate revocation lists, the CRLs, in favor of the Online Certificate Status Protocol, OCSP, then came up with a better solution for CRLs using Bloom filters, which we talked about previously, and then abandoned OCSP over its privacy concerns.
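To make that Bloom filter idea concrete, here is a minimal Python sketch of the data structure behind those compressed revocation lists. It's a toy illustration only, with made-up serial numbers, and not any browser's or CA's actual implementation.

    import hashlib

    class BloomFilter:
        """Tiny Bloom filter: set membership with false positives but no false negatives."""
        def __init__(self, size_bits=1 << 20, num_hashes=7):
            self.size = size_bits
            self.num_hashes = num_hashes
            self.bits = bytearray(size_bits // 8)

        def _positions(self, item: bytes):
            # Derive several bit positions from SHA-256 of a counter plus the item.
            for i in range(self.num_hashes):
                digest = hashlib.sha256(bytes([i]) + item).digest()
                yield int.from_bytes(digest[:8], "big") % self.size

        def add(self, item: bytes):
            for pos in self._positions(item):
                self.bits[pos // 8] |= 1 << (pos % 8)

        def might_contain(self, item: bytes) -> bool:
            # False means definitely not present; True means "possibly present, check further".
            return all(self.bits[pos // 8] & (1 << (pos % 8)) for pos in self._positions(item))

    revoked = BloomFilter()
    revoked.add(b"hypothetical-serial-0001")                    # pretend this certificate was revoked
    print(revoked.might_contain(b"hypothetical-serial-0001"))   # True
    print(revoked.might_contain(b"hypothetical-serial-0002"))   # almost certainly False

The appeal for revocation is exactly this trade-off: a compact filter can say "definitely not revoked" for the vast majority of certificates, with the rare "maybe" resolved by a further check.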

Leo Laporte [00:25:23]:
So who knows. If you listen to this show, you're an expert in this whole endless saga, that's true: certificate revocation.

Steve Gibson [00:25:33]:
Yeah, it is true. And in fact, I wrap up today's podcast sharing how uncomfortable the whole topic of cyber war makes me, for one reason: it's above our pay grade, Leo. You know, there's a lot that we don't know about what's going on. And I'm so much more comfortable being able to talk about hard technology, like, okay, this is how this works, this is what this does, this is why it's broken and so forth. So, you know, that's where we're going to stay. At the same time, what's happening on the cyber war front is important, so we're going to talk about it today. But you know, it just leaves me feeling queasy.

Steve Gibson [00:26:18]:
Whereas being able to talk about, okay, here is the cool technology of a Bloom filter... and frankly, our listeners feel the same way. We got so much feedback about, you know, all of our deep dive podcasts through the years. So anyway, given how difficult revocation has proven to be, it's good news that certificates are not often misissued; considering how many there are, and how often they're being reissued, with increasing frequency now due to the fact that the CA/Browser Forum keeps restricting how long they're allowed to live, we don't see that much trouble. A great deal of time, talent and attention has gone into securing the issuing process. For example, we recently looked at how certificate authorities are now being required, that is, they're requiring themselves, because they're part of this coalition, to perform domain control checks of servers from several widely dispersed vantage points, to prevent them from being misled by any sort of local attack on their own bandwidth; where, if they were only verifying domain ownership and control from a single vantage point, that would represent a single point of failure. So the lesson here is that the certificate authority industry has gone, you know, to great lengths to assure that certificates are never misissued, given that revocation remains challenging and a misissuance is avoided at all costs.
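As a loose illustration of that "corroborate from more than one place" idea, here's a short Python sketch that asks two independent public resolvers for the same A record and compares the answers. To be clear, real multi-perspective domain validation queries from geographically separate networks, which a single script on one machine obviously cannot do; the endpoints used are Cloudflare's and Google's publicly documented DNS-over-HTTPS JSON APIs, and the domain is just an example.

    import json
    import urllib.request

    # Two independent DoH JSON endpoints (both publicly documented).
    RESOLVERS = {
        "cloudflare": ("https://cloudflare-dns.com/dns-query?name={}&type=A",
                       {"accept": "application/dns-json"}),
        "google": ("https://dns.google/resolve?name={}&type=A", {}),
    }

    def lookups(name):
        results = {}
        for label, (template, headers) in RESOLVERS.items():
            req = urllib.request.Request(template.format(name), headers=headers)
            with urllib.request.urlopen(req, timeout=10) as resp:
                data = json.loads(resp.read())
            # Collect just the answer data (IP addresses), sorted for comparison.
            results[label] = sorted(rec["data"] for rec in data.get("Answer", []))
        return results

    answers = lookups("grc.com")
    print(answers)
    print("resolvers agree" if len({tuple(v) for v in answers.values()}) == 1 else "answers differ")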

Steve Gibson [00:27:54]:
The news of three certificates being misissued, and it turns out it was more than three, but we'll get there, for as prominent a domain as 1.1.1.1 is both surprising and worrisome. So what happened, and how did it happen? Here's what's been reported so far by Ars Technica. They wrote: People in Internet security circles are sounding the alarm over the issuance of three TLS certificates for 1.1.1.1, a widely used DNS service from content delivery network Cloudflare and the Asia Pacific Network Information Center, APNIC, Internet registry. The three certificates issued in May can be used to decrypt domain lookup queries encrypted through DNS over HTTPS or DNS over TLS. That's because, you know, old school DNS over UDP doesn't use certificates; it's not encrypted and it's not authenticated. Both DNS over HTTPS and DNS over TLS use TCP connections over TLS, which provides privacy and verification of who you're connecting to, unless it's a fraudulently issued certificate.
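For a feel of what a DNS-over-TLS exchange looks like at the socket level, here's a rough Python sketch, assuming Cloudflare's resolver at 1.1.1.1, port 853, and the hostname one.one.one.one that appears on its certificate. The interesting line is create_default_context(): normal certificate validation happens during the handshake, and that validation step is precisely what a fraudulently issued certificate, in the hands of an on-path attacker, would let someone subvert.

    import socket
    import ssl
    import struct

    def dot_query(name, server="1.1.1.1", port=853):
        # Build a minimal DNS query: ID 0x1234, recursion desired, one A-record question.
        qname = b"".join(bytes([len(p)]) + p.encode() for p in name.split(".")) + b"\x00"
        message = struct.pack("!HHHHHH", 0x1234, 0x0100, 1, 0, 0, 0) + qname + struct.pack("!HH", 1, 1)

        ctx = ssl.create_default_context()      # normal certificate validation applies here
        with socket.create_connection((server, port), timeout=5) as sock:
            # "one.one.one.one" is the hostname on Cloudflare's certificate for this service.
            with ctx.wrap_socket(sock, server_hostname="one.one.one.one") as tls:
                tls.sendall(struct.pack("!H", len(message)) + message)   # DoT length-prefix framing
                length = struct.unpack("!H", tls.recv(2))[0]
                reply = b""
                while len(reply) < length:
                    reply += tls.recv(length - len(reply))

        # Bytes 6-7 of the DNS header hold the answer count.
        return struct.unpack("!H", reply[6:8])[0]

    print("answers returned:", dot_query("grc.com"))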

Steve Gibson [00:29:24]:
And that's the concern here, Ars wrote. Both protocols provide end to end encryption when the end user devices seek the IP address of a particular domain they want to access. Two of the certificates remained valid at the time this post went live on Ars, and that was last Wednesday, September 3, when Ars wrote this. Although the certificates were issued four months ago, their existence came to public notice only on Wednesday, that is, last Wednesday, in a post on an online discussion forum. They were issued by Fina RDC 2020, a certificate authority that's subordinate to the root certificate holder, Fina Root CA. The Fina Root CA in turn is trusted by the Microsoft Root Certificate Program, which governs which certificates are trusted by the Windows operating system. Microsoft Edge accounts for approximately 5% of the browsers actively used on the Internet. In an emailed statement sent several hours after this post went live, Cloudflare officials confirmed the certificates were improperly issued, meaning they didn't issue them.

Steve Gibson [00:30:42]:
They own that domain. Any certificates that were going to be issued for 1.1.1.1 had to be done by them. Cloudflare wrote, in part: Cloudflare did not authorize Fina to issue these certificates. Upon seeing the report on the certificate transparency email list, we immediately kicked off an investigation and reached out to Fina, Microsoft and Fina's TSP supervisory body, who can mitigate the issue by revoking trust in Fina or the misissued certificates. At this time we have not yet heard back from Fina. Ars wrote: Microsoft said in an email that it has, quote, engaged the certificate authority to request immediate action. We're also taking steps to block the affected certificates through our disallowed list to help keep customers protected, unquote. Ars wrote:

Steve Gibson [00:31:38]:
The statement didn't say how the company failed to identify the improperly issued certificates for such a long period of time. Representatives from Google and Mozilla said in emails that their Chrome and Firefox browsers have never trusted the certificates, meaning anything from Fina, and there was no need for users to take any action. An Apple representative responded to an email with a link to a list of certificate authorities Safari trusts. Fina was also not included. So, okay, that's interesting. For whatever reason, Microsoft, and thus Windows, has been trusting any certificates issued by this apparently flaky certificate authority, whereas Google, Mozilla and Apple all apparently never saw the need. Ars wrote: It wasn't immediately known which organization or person requested and obtained the credentials. Representatives from Fina did not answer emails seeking details. As I was reading this, I had the thought that perhaps Fina should be renamed Fin, and the industry should just be done with them.

Steve Gibson [00:32:52]:
What's curious is why Microsoft appears to be pussyfooting around with these clowns. I certainly would not want my Windows OS to be trusting any certificate that Fina might issue, which it currently does. But Microsoft trusting Fina means, and this is what we talked about a long time ago, one of the mixed blessings of the CA system that we have is, and it bears everybody remembering this, all of the CAs that we trust are trusted regardless of what they sign. Meaning that any certificate authority we trust is allowed to sign a certificate for any property, any domain, even if they have nothing to do with it, exactly just as Fina did. Fina has nothing to do with 1.1.1.1. I imagine Cloudflare never even heard of them or considered them. Yet there are valid certificates for Cloudflare's domain that have been issued.

Steve Gibson [00:33:58]:
So, similarly, the fact that Windows is trusting these clowns means that everybody running Windows trusts anything that Fina signs as being legitimate. And Ars Technica continues with a bit more additional background, writing, and this is stuff we know, but it's worth covering: the certificates are a key part of the transport layer security protocol. They bind a specific domain to a public key. The certificate authority, the entity authorized to issue browser trusted certificates, possesses the private key certifying that the certificate is valid. Anyone in possession of a TLS certificate can cryptographically impersonate the domain for which it was issued. Okay, now we know that's all true. Thus the power of a certificate, and why the industry has gone to, and goes to, such lengths to make sure only the signatures of trustworthy entities are trusted, that is, only the signatures of trustworthy CAs. Ars explains: the holder of the 1.1.1.1 certificates could potentially use them in active adversary in the middle attacks that intercept communications passing between end users and the Cloudflare DNS service.

Steve Gibson [00:35:25]:
From here, attackers with possession of the 1.1.1.1 certificates could decrypt, view and tamper with traffic from the Cloudflare DNS service. All true. Ars wrote: Wednesday's discovery exposes a key, pardon the pun, weakness of the public key infrastructure that's responsible for ensuring trust of the entire Internet. Despite being the only thing ensuring that gmail.com, bankofamerica.com and any other website is controlled by the entity claiming ownership, the entire system can collapse with a single point of failure. Again, sobering but true. Cloudflare's statement observed... so Cloudflare weighed in on this also, of course, and we quoted them earlier in that Ars piece. They said: the CA ecosystem is a castle with many doors. The failure of one CA can cause the security of the whole castle to be compromised. CA misbehavior, whether intentional or not, poses a persistent and significant concern for Cloudflare. From the start, Cloudflare has helped develop and run certificate transparency, which has allowed this misissuance to come to light.

Steve Gibson [00:36:49]:
On the other hand, not very quickly, which we're going to be getting to in a second. Ars adds: The incident also reflects poorly on Microsoft for failing to proactively catch the misissued certificates and allowing Windows to trust them for such a long period of time. Certificate transparency, a system that catalogs in real time the issuance of all browser trusted certificates, can be searched automatically. The entire purpose of the certificate transparency logs is so stakeholders can quickly identify misissued certificates before they can be actively used. The misissuance in this case is easy to spot because the IP address used to confirm the party applying for the certificates had control of the domain was 1.1.1.1 itself. The public discovery of the certificates four months after the fact suggests the transparency logs did not receive the attention they were intended to get. It's unclear how so many different parties could miss the certificates for such a long time span. All that's right. The next day, which was last Thursday, September 4th, Cloudflare themselves, the rightful owner of the domain 1.1.1.1, and as such the only entity that should be able to authorize certificates for it, posted their own piece about this under the headline Addressing the Unauthorized Issuance of Multiple TLS Certificates for 1.1.1.1.

Steve Gibson [00:38:39]:
I'm only going to share the top of their posting, but I placed a link to the entire thing in the notes. Cloudflare summarizes the situation by writing: Over the past few days, Cloudflare has been notified through our vulnerability disclosure program and the Certificate Transparency mailing list that unauthorized certificates were issued by Fina CA for 1.1.1.1, one of the IP addresses used by our public DNS resolver service. Then they write, get this, from February 2024, okay? Not this past February, 2025; February 2024, to August 2025, Fina issued 12 certificates for 1.1.1.1 without our permission. We did not observe unauthorized issuance for any properties managed by Cloudflare other than 1.1.1.1. We have no evidence, they wrote, that bad actors took advantage of this error to impersonate Cloudflare's public DNS resolver 1.1.1.1. An attacker would not only require an unauthorized certificate and its corresponding private key, which the issuer would have, but attacked users would also need to trust the Fina CA.

Leo Laporte [00:40:12]:
True.

Steve Gibson [00:40:13]:
Furthermore, traffic between the client and 1.1.1.1 would have to be intercepted. All correct. So, first of all, that means only Microsoft clients, and there'd have to be an adversary in the middle.
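If you're curious which CA actually issued the certificate that 1.1.1.1 is serving right now, a few lines of Python will show you. This is just an inspection sketch using the standard library, not a detector for misissued certificates, and it assumes the hostname one.one.one.one that Cloudflare uses for this service.

    import socket
    import ssl

    ctx = ssl.create_default_context()
    with socket.create_connection(("1.1.1.1", 443), timeout=5) as sock:
        # "one.one.one.one" is Cloudflare's hostname for this service.
        with ctx.wrap_socket(sock, server_hostname="one.one.one.one") as tls:
            cert = tls.getpeercert()   # already validated against the local root store

    print("Issuer: ", dict(item[0] for item in cert["issuer"]))
    print("Subject:", dict(item[0] for item in cert["subject"]))
    print("SANs:   ", cert.get("subjectAltName", ()))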

Leo Laporte [00:40:28]:
So somebody's asking if you didn't use a Microsoft Edge browser, if you used Chrome or Firefox, you wouldn't be at risk.

Steve Gibson [00:40:35]:
Mozilla brings their own. Chrome is now using, I believe Chrome is now using Windows'; for a while, Chromium had their own. But, for example, Chromium and Bing, I believe they're both now using Microsoft's root store. There's been some change with Chrome over time, and I may be out of sync with that.

Leo Laporte [00:40:58]:
I also wonder if you don't want to do a union of the two certificate sets only because there may be local certificates trusted by the operating system the browser doesn't know about. Right. So it seems to me that you would want to, as a browser, trust both the set that you brought to the table, but also the set offered by the operating system in case there's, you know.

Steve Gibson [00:41:23]:
Right. And in fact, the root stores do have essentially a directory hierarchy that allows you to put your own local roots in a separate location.

Leo Laporte [00:41:38]:
And I presume you could also edit the Fina roots out.

Steve Gibson [00:41:43]:
Yes, you are able to delete them, although every Windows Update refreshes that place, so.
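For the curious, on a Windows machine you can ask Python which roots the system store currently contains and search for a name. The sketch below uses the Windows-only ssl.enum_certificates() call and assumes the third-party cryptography package is installed purely to parse the DER data; and, as just noted, anything you delete by hand may simply come back with the next Windows Update.

    import ssl                          # ssl.enum_certificates() is available on Windows only
    from cryptography import x509       # third-party package, used just to parse DER certificates

    def roots_matching(fragment):
        matches = []
        for der_bytes, encoding, _trust in ssl.enum_certificates("ROOT"):
            if encoding != "x509_asn":   # skip PKCS#7 entries
                continue
            subject = x509.load_der_x509_certificate(der_bytes).subject.rfc4514_string()
            if fragment.lower() in subject.lower():
                matches.append(subject)
        return matches

    print(roots_matching("fina"))        # empty list if no such root is currently trusted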

Leo Laporte [00:41:49]:
Oh, great.

Steve Gibson [00:41:49]:
Yeah. So, see, Cloudflare said: While this unauthorized issuance is an unacceptable lapse in security by Fina... I mean, really, they ought to just be booted out of any root store... we should have caught, they're saying, Cloudflare is saying, we should have caught and responded to it earlier. Like, yeah, last February. A year ago. A year and a half ago. Eleven certs this has happened to. They said: After speaking with Fina CA, it appears that they issued these certificates for the purposes of internal testing. However, no CA should be issuing certificates for domains and IP addresses without checking control.

Steve Gibson [00:42:33]:
At present, all certificates have been revoked. We're awaiting a full postmortem from Fina. While we regret this situation, we believe it is a useful opportunity to walk through how trust works on the Internet.

Leo Laporte [00:42:49]:
It's a teaching moment. That's what it is.

Steve Gibson [00:42:51]:
Exactly. We're all going to learn a lesson from this. So anyway, the rest of their post, which I have a link to in the show notes, goes through that for anybody who's interested. And one of the things that Cloudflare shares is a list of all the various domains that were present in these 11 certificates. We see there fina.hr, ssltest5, no, with no top level domain on it, test.fina.hr, test.hr, test1.hr, test11.hr, test12.hr, test5.hr, test6 with no TLD, test6.hr, testssl.fina.hr, testssl.finatest.hr, I only have a few more, testssl.hr, testssl1.finatest.hr, and testssl2.test.hr. Point is, they.

Leo Laporte [00:43:58]:
All have Test in the name, right?

Steve Gibson [00:44:01]:
Yes. Looking at that list, and thinking about what Cloudflare wrote, after speaking with Fina CA, it appears that they issued these certificates for the purposes of internal testing, I would bet a month's pay that the reason 1.1.1.1 appeared in any Fina issued test certificate is for the same reason Cloudflare chose it as the name and address of their DNS service. It is super short, easy to enter and easy to remember.

Leo Laporte [00:44:38]:
So in other words, People's passwords are 1.1111111, right?

Steve Gibson [00:44:43]:
Yes. I would happily wager. Did I just break Steve? That at no point. Are you there? Hello? Huh. Hi, Steve. I'm not sure what happened to Leo. Okay, yeah, everything looks good here. So.

Steve Gibson [00:45:08]:
And I don't know if... you know, he was having all kinds of weird problems this morning, you know, at the beginning of MacBreak Weekly. Oh, I didn't know that. Yeah, there was. I think he had a power failure and his stuff had not recovered, and so they were trying to set the conference up, their communication up, in some particular way, but they ended up falling back to something different or something. Or, unless he lost power, then I don't know. The last thing I heard him say was, did I break Steve? And that was when we... It looks like we broke Leo, actually.

Steve Gibson [00:45:56]:
Yeah, Anthony's saying it might be a power outage. Yeah, he had said that he had installed two new 20 amp circuits. I guess, you know, he's putting a load on them right now, so they turned everything off for that. Yeah, I guess, you know, he's up in the attic, I think, of his new place, and I think that it wasn't, you know, powered for the amount of equipment. Looks like... Yay. Oh no, it's Anthony. Oh, sorry, I tried to turn you on, Anthony.

Steve Gibson [00:46:37]:
There you go. Hey, Anthony, can you hear me? Yep, yep, I hear you.

Leo Laporte [00:46:42]:
Yeah, you might want to. Unless he was going to do an ad right now, maybe like grab like continue on with the story and then hopefully he'll.

Steve Gibson [00:46:50]:
Yeah, okay, will do. I think that makes the most sense. Okay. So, yeah, I would bet a month's pay that the reason 1.1.1.1 appeared in any Fina issued test certificate is for the same reason Cloudflare chose it as the name and address of their DNS service: it is super short, easy to enter and easy to remember. In other words, I would happily wager that at no point was any of this misissuance in any way malicious. You know, some tech guy inside Fina was just playing around with issuing test certificates, and the numeric only IPv4 domain 1.1.1.1 was super quick and easy to enter. Now, having said that, you know, the big no-no was that these test certificates were being signed with the same private key that Microsoft, and also the EU's trust service provider, those are the two entities we know of who trust this Fina CA...

Steve Gibson [00:47:58]:
They both trusted anything signed with that private key. That should never have been done. Even though Fina says, and I am inclined to believe them, that none of those certificates ever left their control, it's still unnerving that they were created. And it would certainly have been possible to use a test private key to sign those test certificates, one which is in no one's root store, in which case no one would have a complaint. They'd just be doing purely internal testing. The mistake they made was in using the publicly trusted key, even though with limited trust, you know, Microsoft and this EU organization only. Still, that should have never happened. How did it...

Leo Laporte [00:48:46]:
So if they're using it for testing, why would they make it public?

Steve Gibson [00:48:50]:
Right. So... well, they didn't. What happened was, the way the certificate transparency system works is that, down in the automation of the signing of any certificate, the act of signing a certificate sends the event into the common public certificate transparency log. So, you know, what this highlighted is that certificate transparency logs, which we went to all this trouble to create and to maintain... no one's looking at them.

Leo Laporte [00:49:29]:
Right.

Steve Gibson [00:49:29]:
Because, you know, Cloudflare ought to have an automated process that is looking at the logs and checking to make sure that no certificates appear in them that are for domains they control. And frankly, you know, I could be doing that for GRC. If I was worried about somebody issuing a certificate in my name, I would have something that was scanning the logs, checking to make sure that a GRC.com certificate, you know, wasn't issued by anyone anywhere at any time.
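Here's roughly what that kind of automated check could look like, sketched in Python against crt.sh's public JSON interface, which is one convenient front end to the certificate transparency logs. The expected-issuer list is a placeholder; a real monitor would run on a schedule, de-duplicate entries, and alert on anything unexpected rather than just printing it.

    import json
    import urllib.request

    def ct_entries(domain):
        # crt.sh aggregates certificate transparency log entries and offers JSON output.
        url = f"https://crt.sh/?q={domain}&output=json"
        with urllib.request.urlopen(url, timeout=30) as resp:
            return json.loads(resp.read())

    EXPECTED_ISSUERS = ("Let's Encrypt", "DigiCert")   # placeholder: whichever CAs you actually use

    for entry in ct_entries("grc.com"):
        issuer = entry.get("issuer_name", "")
        if not any(hint in issuer for hint in EXPECTED_ISSUERS):
            print("Unexpected issuer:", issuer, "covering", entry.get("name_value"))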

Leo Laporte [00:50:05]:
So this really underscores what a position of trust being a certificate authority is.

Steve Gibson [00:50:09]:
I mean, yes, it's very powerful. And again, it's one of the things that we spent a lot of time looking at because it's a fascinating aspect of the way the Internet works. Right. And we've seen CAs lose the ability to sign certificates when they abuse the privilege. Basically, as I put it before, they're being allowed to print money. Yeah, we're saying, hey, you know, you get to print money, basically, for signing some bits that people present you. In return, you have to do so responsibly.

Leo Laporte [00:50:48]:
Yeah, fair.

Steve Gibson [00:50:49]:
Really interesting case. But I think that, you know, at first, the first time you hear about it, it's like, 1.1.1.1 certificates were maliciously created. It's like...

Leo Laporte [00:51:00]:
It sounds malicious. Yeah, yeah.

Steve Gibson [00:51:03]:
Or it's easy to imagine because it would be bad. But yeah, probably just, you know, just a techie, you know, as you said, Leo. So why. You know, passwords are 1111111.

Leo Laporte [00:51:14]:
Right. It's not just a techie. It was... it's practically an intern. I mean, it was obviously some low level nitwit who didn't know, first of all, that they'd be automatically published.

Steve Gibson [00:51:25]:
Right, right, right. Although they've been doing it since last February and nothing happened. So, Cloudflare, you're not checking your logs, are you?

Leo Laporte [00:51:35]:
Yeah, but who does? I don't. You don't. We don't. No, no, no. I mean, Cloudflare needs to. That seems like an important thing. You want to take a break?

Steve Gibson [00:51:46]:
Perfect timing.

Leo Laporte [00:51:47]:
Let's do it. I thought so. I thought you might. I just wanted to show you something that.

Steve Gibson [00:51:52]:
Oh, I know what that is.

Leo Laporte [00:51:54]:
Oh, you know what that is? Looks like an external USB drive. It is not, my friends. It is a Thinkst Canary, from our sponsor, of course, for this segment of Security Now. This is the... Oh, wait a minute. My Canary says there's been an incident. I know what the incident was.

Leo Laporte [00:52:09]:
I just lost power. But let me just... let's just click. Yeah. Network settings rollback, because I just lost power and I had to fail over. Comcast died on us. When you get an alert from your Thinkst Canary, it's going to be the alerts that matter. Thinkst Canaries are honeypots that can be deployed in minutes.

Leo Laporte [00:52:27]:
You can see that my Thinkst Canary, for all intents and purposes, looks like Microsoft SharePoint 2010. And I mean down to the MAC address. It has a Microsoft SharePoint MAC address. The ports: you can tell it which ports you want to have open. Look at all the things you can do. I want port scan detection. You bet I'm going to turn that on.

Leo Laporte [00:52:50]:
Oh, well, maybe I shouldn't. It's a noisier network; it'll trigger low quality alerts. All right, I won't. This is the thing I love about the Thinkst Canary. It really helps you do the right thing. It is a honeypot that is simple and easy to deploy.

Leo Laporte [00:53:03]:
In fact, so easy to deploy that you can change the configuration at will to any number of things. It could be a Windows server, it could be a Linux database, it could be a Cisco router, a Fortigate security device, it could be a Sophos firewall, it could be a SCADA device, a Hirschmann RS20 industrial switch. We had to roll back the network settings to a previously known state. Oh, okay. This is the thing: it really tells you what's going on on the Thinkst Canary, so you always know exactly what's happening. You can also...

Leo Laporte [00:53:38]:
Let me show you this. There's a little breadcrumb thing. This is great. So the Thinkst Canary itself is a hardware device sitting on your network that looks valuable, doesn't look vulnerable. It looks like something a bad guy wants to get into, whether it's a NAS or a SharePoint server. You can also create breadcrumbs, which is great, which are basically little files that you can scatter about. Here's a shortcut that you just put somewhere. An attacker might click it and say, ah, I found something of value.

Leo Laporte [00:54:09]:
You're basically tempting them. You can also create files, a variety of files. They call them Canary tokens. It could look like a credit card, an AWS API key, an excel document, a WireGuard configuration file. The kinds of things hackers just drool over. So you set it up. Then you sit back and you relax. You wait.

Leo Laporte [00:54:34]:
Your honeypot has been deployed. If someone's accessing those Canary tokens, the lure files or the breadcrumbs, or trying to brute force your fake SharePoint server, your Thinkst Canary will immediately tell you you have a problem. No false alerts, just the alerts that matter. You choose a profile for your Thinkst Canary device. You see how easy it is? You register it with a hosted console. That's what I was in the console for: monitoring and notifications. And by the way, notifications can also be by text message, email, Slack.

Leo Laporte [00:55:05]:
They support webhooks. They have an API. I mean, syslog, whatever you want, basically any way you might ever think. Beeper? If you want a beeper, it'll notify you on your beeper. Then you sit back, you relax. Attackers who breached your network, malicious insiders, and other adversaries cannot help but make themselves known by accessing that Thinkst Canary. They don't know it's a honeypot. It just looks like something really good to get into.

Leo Laporte [00:55:31]:
Visit canary.tools/twit. For just $7,500 a year, you can get five Thinkst Canaries and your own hosted console. You get upgrades, support and maintenance. Oh, and if you use the code TWIT in the "how did you hear about us" box, you'll get 10% off the price for life. You can always return your Thinkst Canaries with their two month money back guarantee for a full refund. I should tell you, however, during all the years we've partnered with Thinkst Canary, their refund guarantee has never been claimed. Not once. Visit canary.tools/twit.

Leo Laporte [00:56:04]:
Enter the code TWIT in the "how did you hear about us" box. The Thinkst Canary: it doesn't look vulnerable, it looks valuable. And hackers cannot resist it. This is one of the most important things you can do in your layered security strategy. It's very hard to know somebody's in your network. You know, you can have all the best perimeter defenses, but if they're penetrated, how do you know there's somebody in your network?

Leo Laporte [00:56:28]:
Hackers are very good at covering their tracks. They can't resist the Thinkst Canary. canary.tools/twit. Don't forget the offer code TWIT. Now I have to reset my Thinkst Canary because I set off all the alerts. Yeah, get it re-primed.

Steve Gibson [00:56:49]:
I love it.

Leo Laporte [00:56:50]:
All right, Steve.

Steve Gibson [00:56:52]:
So this appears to be the week for wacky events. Here's another one.

Leo Laporte [00:56:59]:
Oh boy.

Steve Gibson [00:57:00]:
Bad guys have cooked up a new way to extort artists by threatening to submit their stolen original artwork to AI for training.

Leo Laporte [00:57:10]:
Oh, that's hysterical. Oh God.

Steve Gibson [00:57:13]:
Gosh, I'm not kidding. The ransomware group Lunalock compromised a commission based web platform that connects artists with clients. The group said that if it was not paid a ransom on time, it would share the data with AI companies, thus adding all of the artists work to their massive large language model data sets.

Leo Laporte [00:57:37]:
Bonito says they're trying to extort broke artists. You picked the wrong people to extort, buddy.

Steve Gibson [00:57:48]:
I doubt there's much money at that site. On August 30, a message appeared on the Artists and Clients website stating that it had been hacked by a ransomware group. One of the website's users noticed the message and shared the news on Reddit. They were redirected to a page with a ransom note indicating that all the databases and files, including all of the artwork, had been stolen and encrypted. In return for the stolen data, the group is asking $50,000. They had, like, a nice little web page with selectable, drop down, expandable FAQ items.

Steve Gibson [00:58:30]:
One of them says, I have it in the show notes: this is my website; what if I don't pay the ransom? And the answer is: all files, including personal user data, will be leaked on various dark web forums and torrents. Artwork will be submitted to AI companies for inclusion in training data sets. So anyway, I suppose the genesis of this is that, you know, some bad guys hacked into a not very secure website that matches up clients with artists, where artists have their portfolios online for perusal and, you know, clients are then able to submit offers to commission original works. And actually this logo that I've got here, the squirrel logo, came from such a site. That's how I had that made. Yeah.

Leo Laporte [00:59:25]:
Yeah.

Steve Gibson [00:59:26]:
So, wow. Again, wacky stuff. Last Thursday, Mozilla posted the headline Extended Firefox ESR 115 Support for Windows 7, 8 and 8.1. Oh, and macOS 10.12 through 10.14. They wrote: Mozilla has continued to support Firefox on Windows 7, Windows 8 and Windows 8.1 long after these operating systems reached end of life, helping users extend the life of their devices and reduce unnecessary obsolescence. I like that term, unnecessary obsolescence. They said: We originally announced that security updates for Firefox ESR 115 would end in September 2024, so a year ago, later extending that to September 2025, this month.

Steve Gibson [01:00:21]:
Today we're extending support once again. Firefox ESR 115 will continue to receive security updates on Windows 7, 8 and 8.1 until March of 2026, so six months from now, they said. This extension gives users more time to transition while ensuring critical security protections remain available. That's the key. We still strongly encourage upgrading to a supported operating system to access the latest Firefox features, because those are not being updated, and maintain long term stability. Note that this extension is also applicable for macOS 10.12 through 10.14 users running Firefox ESR 115. Now.

Steve Gibson [01:01:11]:
I for one appreciate this, since I'm still spending my days in front of a Windows 7 machine. Actually, that's what I'm in front of right now, which is working quite well. And I need to confess, or at least update everyone, that since the last version of the Brave browser that supported Windows 7 was released way back on January 25th of 2023, so we're coming up on three years ago, after using Brave for a while and appreciating its clear commitment to honoring my privacy, I have returned to trusty old Mozilla Firefox. So Mozilla's announcement that they will be keeping my Firefox 115, which is what I use here, patched for another six months is welcome news. I'm sure they have telemetry, which actually is interesting feedback, right, that's informing them that I am not alone in continuing to run their lovely Firefox web browser on Windows 7. I'm not using 8 and 8.1, but I imagine people are. Also, as someone who has written at least my fair share of Windows apps, I'll take a moment to just note that this whole notion of an app caring all that much about which platform it's running on, like which platform version it's running on, is at least a bit overblown. When I opened up the DNS Benchmark source code to begin working on its version two, I discovered, somewhat to my amusement, that it would run on every version of Windows.

Steve Gibson [01:03:04]:
That is GRC's DNS benchmark on every version of Windows from Windows 95 through Windows 11. Microsoft goes to great lengths to not break old applications on new editions of Windows. So Windows 95 would have been what, 1995? So that's a 30 year span of Windows. All this nonsense about no longer supporting an operating system because the OS platform itself is not supported is, as I said, quite overblown. But I expect that my Windows 7 machine will be retiring at year's end. Everybody who's watching the videos of this podcast will see my location change as I consolidate two locations into a new third location. And at that time, given how long I tend to keep my cars and my computers, and for that matter my iPads and my Palm Pilots, I fully expect that I'll be setting up what will become my final PC, and that one will be running Windows 10, which is where I'm going to be staying. As I've noted before, the two greatest attack surfaces of our PCs are email, where we make the mistake of clicking on a sour link, and our web browsers, whether we go somewhere malicious directly by mistake, or by clicking a link and following a link that we receive in email.

Steve Gibson [01:04:40]:
So these days I am far more glad to be running web browsers and email clients that are, as my old version of Firefox still is and will be for another six months, being kept up to date and secure than an operating system that was deliberately abandoned by its publisher many years ago. So there we were, late in that, taking that last break, Leo. So we're kind of running a little bit behind. Let's take another break because I'm also a little parched and I've got some coffee right here that needs.

Leo Laporte [01:05:18]:
Oh, you lucky dog. I could use some coffee right now. All right, let's talk about our sponsor for this section of Security Now, the good folks at BigID, the next generation AI powered data security and compliance solution. BigID is the first and only leading data security and compliance solution to uncover dark data through AI classification. It can identify and manage risk, remediate the way you want, map and monitor access controls, and scale your data security strategy, along with unmatched coverage for cloud and on-prem data sources. BigID also seamlessly integrates with your existing tech stack and allows you to coordinate security and remediation workflows. You can take action on data risks to prevent against breaches.

Leo Laporte [01:06:09]:
You can annotate, delete, quarantine and more based on the data, all while maintaining an audit trail. Partners include, well, it's all the companies you use: ServiceNow, Palo Alto Networks, Microsoft, Google, AWS, and on and on. With BigID's advanced AI models, you can reduce risk, accelerate time to insight and gain visibility and control over all your data. That's probably why Intuit named it the number one platform for data classification in accuracy, speed and scalability. Now if you want scalability, just think: what organization would have the most dark data? BigID equipped the US Army, 250 years of dark data. They helped them illuminate that dark data, accelerate cloud migration, minimize redundancy and automate data retention. They got this great testimonial from U.S.

Leo Laporte [01:07:02]:
Army Training and Doctrine Command. This is what they said. The first wow moment with BigID came with being able to have that single interface that inventories a variety of data holdings, including structured and unstructured data across emails, zip files, SharePoint databases and more. To see that mass and to be able to correlate across those is completely novel. I've never seen a capability that brings this together like BigID does.

Steve Gibson [01:07:31]:
Wow.

Leo Laporte [01:07:32]:
End quote. That's a pretty nice testimonial. CNBC recognized BigID as one of the top 25 startups for enterprise. They were named to the Inc 5000 and Deloitte 500 four years in a row. The publisher of Cyber Defense Magazine says BigID embodies three major features we judges look for to become winners: understanding tomorrow's threats today, providing a cost effective solution, and innovating in unexpected ways that can help mitigate cyber risk and get one step ahead of the next breach. Start protecting your sensitive data wherever your data lives at bigid.com/securitynow. Get a free demo.

Leo Laporte [01:08:13]:
See how BigID can help your organization reduce data risk and accelerate the adoption of generative AI. Again, that's BigID, B I G I D dot com slash securitynow. Also get an exclusive invite to BigID's virtual summit that's coming up October 9th, where you can hear a keynote featuring Forrester Research plus panels with experts from JP Morgan, Manulife and Nokia tackling the most urgent challenges in AI security and risk, at bigid.com/securitynow. Thank you, BigID, for supporting Steve and the work he's doing on Security Now. Now fully caffeinated and refreshed.

Steve Gibson [01:08:54]:
Mr. Steve. So we recently noted that private industry had begun withholding information from the federal government over its concern that the blanket information sharing protections provided by the 10 year long, so called Cybersecurity Information Sharing Act of 2015, which will be expiring at the end of this month of September, might not be renewed. This 10 year duration act allows private sector providers to freely share cyber threat intelligence with government partners under the guarantee of liability protections. As we've talked about before, Leo, a lot of the so called critical infrastructure is privately owned and run. You know, the electricity grid and the Internet itself, that's all private ownership. So it's important for the government and the private sector to be able to compare notes and confess when there's a problem without worrying.

Leo Laporte [01:10:06]:
About going to jail.

Steve Gibson [01:10:07]:
Yes, yes, exactly. So unless this act is renewed by Congress, which has not yet happened, and we only have three weeks and counting remaining, the act's expiration would mean that a vital source of cybersecurity intelligence for our government would dry up overnight. Because, you know, already there's been some like, whoops, we can't talk about this because we're not sure we're going to get protected. Nextgov offers some interesting background about the act, its past and the hurdles it currently faces in today's quite messy Washington climate, under their headline House Panel Advances Bill to Extend Bedrock Cyber Info Sharing Law. They then lead with the tease: Some Republicans want to ensure there's language that would prevent the nation's core cyber defense agency from engaging in alleged censorship of Americans' free speech. So, you know, this is this concern that CISA was getting itself involved in, was being accused of, election interference previously.

Steve Gibson [01:11:21]:
And so there are some Republicans who are saying, okay, well, we want to, you know, take this opportunity to fix that. So anyway, Nextgov wrote: The House Homeland Security Committee on Wednesday, which is last Wednesday, approved a measure that would renew a cornerstone cybersecurity law designed to optimize the exchange of cyber threat information between the private sector and the U.S. government. Okay, so that's the House Homeland Security Committee, which is a committee obviously of the House, but then this still has to go through both houses of Congress. So they said: The original law, the Cybersecurity Information Sharing Act of 2015, lets private sector providers freely transmit cyber threat intelligence to government partners with key liability protections in place. It's set to lapse September 30th unless renewed by Congress. The extension, dubbed the Widespread Information Management for the Welfare.

Steve Gibson [01:12:25]:
Oh my Lord. It's the WIMWIG. The WIMWIG Act: Widespread Information Management for the Welfare of Infrastructure and Government. The WIMWIG Act extends the law another 10 years and now moves to the full House for consideration. Why don't they just make it permanent law? I mean, then we wouldn't have this problem. They could always amend it or adjust it if they wanted to, but having these deadlines, everyone has to scramble around. Anyway, I'm not there. They wrote.

Steve Gibson [01:12:59]:
Technical amendments were introduced to the bill, which were met with little pushback in committee. Top of mind for some Republicans on the panel were concerns that the Cybersecurity and Infrastructure Security Agency, CISA, would be enabled to censor Americans' protected speech. That concern extends to the Senate Homeland Security Committee, where Chairman Rand Paul, a First Amendment hawk, has said he would add language in the Senate's version of the reauthorization that would bar CISA from carrying out alleged censorship of free speech. Although actually, under CISA's current management, that doesn't seem to be a problem. But okay. CISA has faced mounting Republican criticism over allegations of censorship tied to its efforts to combat election related disinformation in and around 2020. GOP lawmakers contend this amounted to unconstitutional government pressure on private companies to suppress speech, particularly conservative viewpoints. In the early 2010s, legislative efforts to establish a cyber threat information sharing framework had been underway for several years.

Steve Gibson [01:14:12]:
But back then, right, the 2010s, that's not today. So a lot has changed in the last 15 years. But back then, they said, those efforts faced major hurdles amid public skepticism over government privacy abuses following Edward Snowden's 2013 global surveillance disclosures. That view, however, shifted after the Office of Personnel Management suffered that massive security breach in 2015, which compromised the personal information of over 21 million current and former federal employees. Whoops. This galvanized support for the law as it stands today. You know, so there was the government saying, well, why weren't we told about this? Well, it turns out that private industry had some information that they were afraid to share because they didn't want to get, you know, in trouble.

Steve Gibson [01:15:04]:
So a law was created that said, it's okay, you can tell us, we promise not to be mad. The article finishes: Stakeholders say the liability protections in the data sharing law are critical because they shield companies from lawsuits and regulatory penalties when sharing cyber threat indicators with the government. Oftentimes cyber threat data includes specific names of individuals or sensitive business information, depending on what hackers target. Robert Mayer, USTelecom's Senior Vice President of Cybersecurity and Innovation, so he's a classic private sector guy, right? He's the guy who would want to be able to share things with the government. Said, quote, by reauthorizing the law, this bill preserves the trusted framework that enables industry and government to share critical threat information quickly and securely. For the telecom sector, where our networks are on the front lines of cyber defense, this legislation is essential to protecting the infrastructure Americans depend on every day.

Steve Gibson [01:16:11]:
So the renewal of the bill, largely as written, would appear to be of vital importance. And, you know, even today, more than, even more so than 10 years ago when it was first passed, we need this. So I expect that within a few weeks we'll be observing its passage through both houses of Congress and that President Trump will then sign it into law and that will be good for everyone. But it hasn't happened yet. And, you know, the politicians want to take this opportunity, especially something that everybody understands has to pass in order to, you know, squeeze their amendments in and, you know, get the provisions added that they need. So, but still, I expect it to happen. Oh, this is really interesting. Trend Micro decided to test whether today's availability of AI assisted code generation, you know, popularly known as vibe coding, changes the balance to weigh against the security community's long standing practice of publicly disclosing and sharing their detailed analysis of known attack code strategies and malware.

Steve Gibson [01:17:27]:
You know, we know that when proofs of concept are made public, those who might never have been able to create such attacks from scratch themselves are suddenly empowered with the ability to do so. So the very real concern and question which Trend Micro wanted to ask was, does AI's ability to generate code change this equation? And actually, Leo, I've even heard the term prompt kiddies.

Leo Laporte [01:17:59]:
Yeah, well, sure, yeah, yeah.

Steve Gibson [01:18:03]:
So here's what Trend Micro wrote and what they did. They said: Security companies routinely publish detailed analysis of security incidents, making, you know, attacker techniques, tactics and procedures, so called TTPs, tactics, techniques and procedures, widely known and visible. These reports often provide comprehensive insights into specific vulnerabilities that are or could be exploited, malware delivery mechanisms, and evasion techniques. This transparency is crucial for the cybersecurity community, enabling organizations to understand the evolving threat landscape so they can implement more effective defenses for themselves. But it has evolved into a double edged sword. The benefits of detailed security publications significantly outweigh the risks, they contend. For example, providing defenders with up to date TTPs enables proactive security measures.

Steve Gibson [01:19:06]:
Building awareness across the security community strengthens collective defenses. Developing and improving security platforms and detection capabilities is critical. Enabling threat intelligence sharing benefits the entire ecosystem, and supporting incident response teams with actionable intelligence enhances their effectiveness. So lots of things over on the pro side of this that are benefits to an industry that is open and sharing what it finds, they wrote. It's not a secret that attackers follow security blogs and read posts about them and other threat actors. The Conti leaks, which is to say leaks from inside Conti, for example, contain several discussions of such public posts. So they were inside Conti talking about public posts. They often learn from these posts about the defenders' techniques and use this information to evolve and improve their own attacks.

Steve Gibson [01:20:10]:
To test whether our industry's practice of providing detailed TTPs enables the creation of malware itself, we conducted an experiment using Trend Micro's published analysis of the Earth Alux threat actor's espionage toolkit. We attempted to recreate similar capabilities using AI assisted vibe coding. In this experiment we used Claude AI, Claude 4 Opus, in combination with Visual Studio Code and Cline. The platform readily began generating code that simulated the attacker's communication patterns, including the emulation of a first stage backdoor, persistence mechanisms and the attacking components. Upon initial inspection, this approach was very straightforward. Bypassing anti malware creation guardrails was quite trivial using a number of uncensored models readily available in platforms like Hugging Face. In other words, you can't ask ChatGPT, but you can use uncensored models. They said the first version was generated in Python, and for the sake of flexibility we regenerated the code in C.

Steve Gibson [01:21:36]:
But did it resemble the original code? Yes, but only up to the point of how much was disclosed in the actual blog post. More importantly, while even more detailed technical reports help large language models generate even more accurate code, the generated code itself was not perfect. For most security reports, some level of skill and understanding of the code will still be needed to finalize that code into a working tool. The AI provides a significant starting point, but technical expertise remains essential for creating functional malware. And I'll just interject here to note that, what is it, it is September 9, 2025.

Steve Gibson [01:22:32]:
Remember, our byword here is: anytime we are ever talking about anything to do with AI, you know, you need to time stamp the conversation because it ain't going to be true in 90 days or 180 days. So in fact, Leo, I'm sure you'll be talking about this tomorrow on Intelligent Machines, but OpenAI just released a report where they believe they now understand why AI hallucinates. And to test their hypothesis, they changed the way they trained a GPT-5 mini model and eliminated hallucination.

Leo Laporte [01:23:19]:
So yeah, they say it's a bug in effect, right?

Steve Gibson [01:23:21]:
Yeah, yeah. Actually, a perfect example is if you asked it what someone's birthday was. Well, even if it didn't know, if it guessed a random day, it has a 1 out of 365 chance of being right, which is better than saying I don't know, which is zero. And so they have inadvertently trained it to guess, because they reward answers more than they reward it saying I don't know. And you know, as our listeners know, I don't know is one of my favorite phrases because that's the start of going to find out, right? Rather than, you know, guessing. So I've told the story many times. I had, back in the early days of GRC, a really neat techie who was working with me on the early versions of SpinRite.

Steve Gibson [01:24:24]:
His name was James. And I said, well, James, maybe, but now we have a problem, because now your ego is involved in something that is not a subject of ego. There's a bug. We need to find it. Now you want to be right, and I just want to find the bug. So I don't know why there's a bug, and you don't either. But now you have a vested interest, so it's much better just to say I don't know and let's go find out. So anyway, I thought it was interesting.

Steve Gibson [01:25:00]:
And so the point is, here we're talking about this, right? And these guys, Trend Micro, are saying, well, you know, it's not good enough yet to allow prompt kiddies to create malware. But again, September 9, 2025, probably not 2026. So they said, having highlighted the risks of AI assisted code generation, we must also consider how this capability muddies the waters for attribution. Ah, that's interesting. The ability to directly copy malware characteristics described in security reports creates significant challenges for threat hunters and investigators. Attribution has always been challenging in cybersecurity, right? Who done it? Attackers have long employed various techniques to deliberately confuse investigators, such as software component reuse, using tools associated with other groups; infrastructure reuse, deliberately using domains and hosts linked to other threat actors; TTP mimicry, copying the operational patterns of other groups; false flags, deliberately adding misleading artifacts, such as North Korean APT groups adding Russian language artifacts to their binaries; and living off the land techniques, using only tools available on target machines to just not give anyone a clue. So, they wrote, with vibe coding tools, creating copycat campaigns becomes significantly easier.

Steve Gibson [01:26:42]:
Even non programmers can build somewhat functional code by providing simple text instructions. This democratization of malware development poses new challenges for developers, they said. We've already observed that some APT groups have become early adopters of AI and LLM technologies. This trend will likely accelerate as vibe coding tools, which support the rapid prototyping of software based on textual descriptions, continue to evolve. Two key issues with security blogs in the world of AI today are: they ease the process for attackers to copy the techniques of other groups and quickly get up to speed, and they muddy attribution efforts when analysts rely solely on TTPs or indicators of compromise. So should threat publication stop? No. Threat publication is more critical than ever, but it does need to adopt.

Steve Gibson [01:27:54]:
I'm sorry, to adapt. For our fellow defenders and the broader industry, this means criminal adoption of AI complicates cybercrime defense. Our industry needs to be more active than ever in educating and supporting our readership. LLM generated code from blogs is a good start for an attacker, but it's not perfect. Publishers should factor in the ways that LLMs could possibly be misused during the publication process and test how their detailed descriptions might be exploited. So in other words, while authoring and publishing technical security research, keep in mind now that AI will almost certainly also be ingesting that research, and you may be seeing signs of it in the future. They wrote: Vibe coding copying muddies attribution, although this only applies for those with a primitive view of attribution based solely on TTPs or IOCs, indicators of compromise. We recommend best practices using more sophisticated attribution, which must evolve beyond simple indicator matching. Publications remain essential.

Steve Gibson [01:29:13]:
Security publications are a side effect of the research developed to enable leading security platforms to defend; those same platforms will always be the first line of defense. The knowledge sharing that occurs through these publications strengthens the entire security ecosystem. That is, you know, all of the security companies cross sharing the results of their research so they don't all individually have to research everything. I mean, it's perfect, you know, community functioning, and Trend is saying we depend upon it, so we provide it. In other words, you know, while it's true that the bad guys will gain, so too will the good guys from this publication, but it does make things more difficult. They finished, saying: Vibe coding or vibe programming represents a paradigm shift in software development through AI assisted code generation. This approach significantly simplifies and speeds up the process of writing executable code, removing barriers for non programmers. However, as we've demonstrated, it also empowers prompt kiddies, individuals without deep technical knowledge, to misuse the technology for wrongdoing.

Steve Gibson [01:30:35]:
In this article, we've only scratched the surface of the use and possible abuse of vibe coding generative tools for malicious purposes. The evolution of these tools creates new challenges for threat hunters and defenders. Simple and blind correlation of attacks by matching TTPs will no longer be effective, because there's going to be TTP, tactics, techniques and procedures, blurring as a consequence of this, because large language models are going to be incorporating all this. They said defenders will need to embrace leading methods of attack clustering and attribution based on the attacker's intentions, objectives and targeting. Shifting attribution techniques to focus on the primary objectives of the attacker might make it harder for attackers to plant false flags. That said, security is always a cat and mouse game, and with every step forward we evolve into another cycle of innovation and adaptation. So basically, as I've been summarizing through this, you know, the security firms are going to continue publishing with the knowledge that large language models are going to be training on this publicly available information, and that bad guys are going to be given a bit of a leg up through vibe coding just like all other programmers are, and that it will mean that because the code is actually coming out of LLMs rather than out of specific coders, the habits of specific coders which were being reflected by their code. I mean, I guarantee you if you look at my code, you could tell it's me. Yes, there are things I do over and over and over, you know, my so called patterns of coding which absolutely fingerprint me as the author of that code.

Steve Gibson [01:32:34]:
And I'm sure that that is, like, you know, we don't see that talked about publicly. But what Trend Micro is saying is they've been using that. They similarly pick up on code approaches that they see in the disassembly of malware and they're able to say, oh, we know that that came from Vladimir, it's a signature. Yeah, sure, yeah, it is. And you know, my code clearly carries my own signature. I'm well aware of that. And so if code largely now comes out of LLMs and then is tweaked in order to fix it where it doesn't quite work, then you're not going to have, you know, that same code signature showing. Really interesting.

Leo Laporte [01:33:28]:
Or it might be somebody else's code signature that the LLM learned when ingesting their code.

Steve Gibson [01:33:33]:
My guess is it would get blurred, I think, because it's going to be sucking in so much code and homogenizing it, essentially. So, yeah. Some reporting I encountered, after I recounted a tweet made by the current U.S. Director of National Intelligence, Tulsi Gabbard, which in all fairness didn't really provide anything more than a somewhat nondescript boast, appears to have been incorrect, at least according to some newer information. Unfortunately, I don't maintain a subscription to the Financial Times, and boy, they are very powerfully paywalled now. But the Financial Times is a credible source of information. So all I can do again is share what I have found, which reads as follows.

Steve Gibson [01:34:26]:
The fight between Apple and the UK government over lawful access to iCloud user data has not been resolved despite media reporting last week. The Financial Times this week reported on documents filed with the Investigatory Powers Tribunal, an independent judicial body that examines complaints about UK intelligence services. Okay, so we knew that Apple was going to appeal to the UK's Investigatory Powers Tribunal, so that's not news. But perhaps the fact that this is still ongoing, at least as of last week, means that the issue is not yet as resolved as we may have believed. The reporting continues. Back in January, Apple was provided with, and we know this, a government order known as a Technical Capability Notice, the TCN. The Financial Times now suggests the TCN required Apple to provide broad access to iCloud data, including messages and passwords. I don't know if we knew about that.

Steve Gibson [01:35:37]:
Quote, the obligations included in the TCN are not limited to the UK or users of the service in the UK. Well, that we did know. They apply globally in respect of the relevant data categories of all iCloud users, unquote, the IPT filing adds. Okay, but that's not news either. We knew all that. Again, the point might be that the Financial Times is reporting on a leak from the supposedly secret tribunal. The reporting about this finishes with a sentence.

Steve Gibson [01:36:11]:
So despite what Tulsi Gabbard says on social media, this is still a live issue. Okay, so I'm thoroughly confused. My intention here was just to take back what was reported a week or two ago. I don't have any first hand or even secondhand knowledge of what Tulsi Gabbard may or may not know. So perhaps our takeaway should be, you know, to take tweets for what they are, remembering also what they're not, which is anything definitive and actionable. And again, the problem is Apple can't say anything because they're under a gag order. All we know is that, I guess, we have the Financial Times reporting that they have in fact filed with the tribunal. Whether that means they're closing the loop, crossing their t's and dotting their i's, I don't know.

Steve Gibson [01:37:07]:
Again, we don't know. But I just wanted to mention that, you know, it may not be resolved, which is unfortunate because we want it to be resolved. Okay. Whew. We had a very serious supply chain attack. And Leo, let's take our second to last break before we get into our main topic, because I want to spend some time on this.

Leo Laporte [01:37:32]:
I do subscribe to the Financial Times.

Steve Gibson [01:37:35]:
Ah, good.

Leo Laporte [01:37:36]:
I do have the article here. They say they saw the legal filing and, as you said, according to the legal filing, they wanted access to all of the standard iCloud stuff, which we know Apple does have the keys to.

Steve Gibson [01:37:52]:
I don't think I knew that they had people's passwords. To me that's a little surprising because.

Leo Laporte [01:37:59]:
Yeah, I don't know if they do. I mean, you can ask for it, but Apple could say, well, sorry, we don't have it. If you use Apple's password manager, your passwords are synced to iCloud.

Steve Gibson [01:38:11]:
Yeah, right. It's in the keychain, presumably.

Leo Laporte [01:38:16]:
But I don't know if Apple has the key to that. They shouldn't. That should be generated on your local device. Using the secure enclave and not available to Apple at all. Maybe when they said passwords, they meant the Apple. Apple's, you know, the customer's password to their Apple account. I don't know.

Steve Gibson [01:38:36]:
Or the ability to unlock the phone. I mean, I could see. We know that Apple's able to unlock a phone, but that's a little different than like saying, yeah, we actually.

Leo Laporte [01:38:44]:
Is Apple able to unlock the phone? Do we know that?

Steve Gibson [01:38:48]:
I think we do.

Leo Laporte [01:38:49]:
See, I feel like that those kinds of things should be generated on device and stored in the secure enclave and not available to Apple. Yeah, I don't know. I'll have to look. Yeah. And the fact that this is ongoing tells us that whatever DNI Gabbard said, it's not over, not resolved yet.

Steve Gibson [01:39:13]:
And that is the message I wanted to convey, that there's some doubt now about what was previously tweeted. So we're, you know, unfortunately we're back on tenterhooks because, you know, this needs to get fixed.

Leo Laporte [01:39:26]:
Well, anybody who lives in the UK doesn't have the Advanced Data Protection available anyway, because Apple did withdraw that.

Steve Gibson [01:39:34]:
Well, they can't turn it on. Did they do an update and take it away?

Leo Laporte [01:39:41]:
Oh, I wonder. You're right. If it was previously turned on. Oh, that's an interesting question. Yeah, you can't do it. Oh, that's an. No. I doubt Apple retroactively would disable it, but who knows? Yeah, this is all a mystery because none of this is exposed.

Steve Gibson [01:39:58]:
Just little random leaks from the corners.

Leo Laporte [01:40:01]:
Annoying. Our show today, brought to you by a client of ours that we love, Zscaler, the leader in cloud security. Hackers are using AI, we know this. To breach your organization. We just were talking about it. We know this is a big problem, but AI is kind of. It gives and it taketh away. So on the one hand, the bad guys are using it, but how many companies are using AI to power innovation, to drive efficiency? So AI can help bad actors deliver more relentless and effective attacks.

Leo Laporte [01:40:36]:
Yes, but it also can make you a better company. You know, this is a scary stat: phishing attacks over encrypted channels increased by 34.1% last year, fueled by the growing use of generative AI tools and phishing as a service kits. And now, thanks to Trend Micro, we know that it's going to get even worse, and we won't even be able to say, you know, oh, this looks like Lapsus$. You won't know. Scary. Organizations in all industries, on the other hand, are using AI. They're leveraging it to increase employee productivity, using public AI for engineers with coding assistance. Just like the bad guys are using coding assistance for vibe coding, so are companies, right? Marketers are using AI for writing, finance is creating spreadsheet formulas, companies are automating workflows for operational efficiency across individuals and teams.

Leo Laporte [01:41:35]:
They're embedding AI in applications and services that are both customer and partner facing. AI is helping so many companies ultimately move faster in the market and gain competitive advantage. But what is the price you're paying for this? Is the data that you're providing to that AI proprietary? Is it safe? Is the AI company keeping it private and secure? Companies have to really rethink how they protect their private and public use of AI in addition to how they defend against AI powered attacks. So you can see there's opportunity everywhere and there are threats everywhere. That's why so many companies choose Zscaler for their security. Jeff Simon, who's a Senior Vice President and CISO, Chief Security Officer, at T-Mobile, uses Zscaler. He said, quote, Zscaler's fundamental difference in the technologies and SaaS space is it was built from the ground up to be a zero trust network access solution, which was the main outcome we were looking to drive, end quote. That's what we've talked about before, how powerful zero trust is in this world.

Leo Laporte [01:42:43]:
The problem is that traditional perimeter defenses, firewalls and then the VPNs you need to get through them and the public IP addresses they expose, and of course as soon as you've got attack surface exposed, they're no match for hackers in the AI era. It's time for a modern approach. That's why you're going to want to take a look at Zscaler's comprehensive zero trust architecture and AI that ensures safe public AI productivity, protects the integrity of private AI and stops AI powered attacks. It does all three. Thrive in the AI era with Zscaler, Zero Trust plus AI, to stay ahead of the competition and remain resilient even as threats and risks evolve. Learn more at zscaler.com/security. That's zscaler.com/security. We thank them so much for their support of Security Now. Steve, back to you.

Steve Gibson [01:43:40]:
Okay, so first we're going to talk about what happened and then about the problem. This is another illustration of just how vulnerable the open source software system is to, sadly, malicious abuse. The announcement of what one developer discovered was posted on Substack with the title, We just found malicious code in the popular error-ex NPM package. The developer wrote: Before you read any further, go to the website for the NPM package error-ex. Look at the number of weekly downloads. You will likely see a number north of 47 million. Okay, 47 million downloads per week.

Leo Laporte [01:44:39]:
It's right now 49 million.

Steve Gibson [01:44:42]:
Yep. And if you, if you put your cursor on that little bump, you'll see that it's. I think it's 67 or something. If, if it's live. It was live on, on mine.

Leo Laporte [01:44:51]:
What little bump? What are you talking about?

Steve Gibson [01:44:54]:
In, in. In the chart when I did it. That chart.

Leo Laporte [01:44:58]:
Oh, oh, I see what you're saying.

Steve Gibson [01:44:59]:
Yeah, yeah, yeah, there it is.

Leo Laporte [01:45:00]:
Oh. At one point it was 60. 64. 62 million 64.

Steve Gibson [01:45:04]:
7. 64. Yep.

Leo Laporte [01:45:06]:
So it goes up and down.

Steve Gibson [01:45:08]:
That's per week. Right? Okay, so this guy wrote, this isn't a headline grabbing framework like React or Express. It's a tiny one line utility package buried deep within the dependency trees of tens of thousands of projects across the globe. He said, we know that because the builds of those projects caused this tiny one liner package to be downloaded more than 47 million times per week. This is the kind of package you inherit without ever knowing it exists.

Leo Laporte [01:45:43]:
Yeah, you don't, you don't explicitly call it NPM does.

Steve Gibson [01:45:46]:
Yeah, exactly. It's down some depend, some dependency tree when you're rebuilding the thing that it, that uses it. Wow. They said for a short period it was compromised, turning its massive reach into a ticking time bomb for a significant part of the Java ecosystem. A single line of malicious code in a package this ubiquitous has a blast radius that is difficult to comprehend. It has the potential to compromise CICD pipelines, production servers and the laptops of developers at countless companies from small startups to Fortune 500s. For us, this global threat did not announce itself with a bang. It started with a whisper, a cryptic build failure in our pipeline.

Steve Gibson [01:46:39]:
The error was ReferenceError: fetch is not defined. Our investigation into the build failure took a dark turn when we traced it back to this tiny trusted dependency. Our package-lock.json clearly specified we were using the stable version 1.3.2. However, by running npm ls error-ex in our build environment, which had pulled the most recent one, we found that version 1.3.3 was being installed. This version had been published just a day earlier. Curiosity turned to alarm when we inspected the code of version 1.3.3, the malicious one. While version 1.3.2 was a single clean line of code.

Steve Gibson [01:47:39]:
The new version contained this JavaScript, and they highlighted it in their posting. I've reproduced it here in the show notes. It's just gobbledygook. I mean, it's just junk. Yes, they said, this is heavily obfuscated code designed to be unreadable. But buried, and I'll have a lot more to say about that in a minute, but buried within the mess was a function name that made our blood run cold: checkethereumw. The attacker had injected malware into the package, very likely designed to detect and steal cryptocurrency from the environment it was running in. The fetch call that was breaking our build was probably the malware attempting to send stolen data to the attacker's server.

Steve Gibson [01:48:30]:
Our build failed simply because our Node.js version was old enough not to have a global fetch function. In a different environment, the attack could have gone completely unnoticed. Okay, now the posting continues to talk about this further, but everybody gets the idea. I went over to NPM to check out the error-ex package, and in the first place, as I said, as we saw, sure enough, one week in June the error-ex package was downloaded more than 64 million times. Currently it's at 47,177,455 times per week. The error-ex package was first released 10 years ago. Okay, so it's 10 years old.

Steve Gibson [01:49:20]:
It went through a series of post release updates and quickly stabilized, now having a historical grand total of 16 releases. Its second to last release was nine years ago, and its current release, 1.3.2, which is what everyone is using, is seven years old. So it has not changed a legitimate byte in the past seven years. But then someone somehow, and actually we now know how, I'll get to that in a second, someone somehow managed to maliciously replace that 1.3.2 release with a bogus 1.3.3 release, and the dependency managers saw that a newer release had become available and grabbed it, so that anyone building anything that used it would be using the latest and the greatest. And as if the plot were not already thick enough, it gets still thicker. Because while the package itself has one single dependency, meaning that it relies upon and pulls in one other package named is-arrayish.

Steve Gibson [01:50:41]:
This error-ex package is directly depended upon by 1,544 other packages. That means that anytime any of those 1,544 other packages is rebuilt, and as we know that happens between 47 million and 64 million times per week globally, the repository for this error-ex package would be queried, and the latest and greatest release, meaning, while that was active, the malicious version 1.3.3, would be obtained and incorporated into every single one of those newly rebuilt packages. At that point, that newly rebuilt package would have unknowingly and unwittingly incorporated malicious code that apparently looks for and steals its developer's Ethereum cryptocurrency. And as I said, we'll get to more of that in a second. The developer who found this wrote: Anyone using a newer release of Node.js would have had a successful build and would be, and might well still be, completely unaware that their library or application, which they may have then pushed out to who knows who, now contains malware. We can only be glad that scouting around for Ethereum is all that this thing did, because it could have been far more malicious. And that of course takes us back to that recent reporting that the Pentagon is actively using open source software that might be modified maliciously at any time.

Steve Gibson [01:52:30]:
We don't know how many of those 1,544 packages may have been rebuilt during the window of opportunity while this malicious 1.3.3 was live. The repository was immediately rolled back to 1.3.2 as soon as this problem was found. But with 47 million downloads of this per week, that would have been an average of 6.7 million downloads per day. So that many individual instances of that malware could be in use right now. Our takeaway here is that a very nice open software package sharing system, originally created by and for computer hobbyists, has gradually been adopted for serious use, all the way up to the Pentagon and everywhere in between. And at no point has anyone really stopped to say, now hold on a second, is this safe? No one wants it not to be safe, because the entire system works so well and is so darn useful, right up until the point where it takes a critical missile guidance system offline because a Pentagon subcontractor was building their software the way everyone else is these days.
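
A minimal sketch of the kind of guardrail this suggests, with the caveat that the package name, file layout and threshold here are illustrative assumptions rather than anything from the incident write-up: it compares the version recorded in package-lock.json (npm's lockfileVersion 2/3 format) against what is actually sitting in node_modules. The same gap is what npm ci closes, since it installs exactly what the lockfile says instead of re-resolving a semver range like ^1.3.2, which happily matches a brand new 1.3.3.

    // Detect lockfile-vs-installed drift for one package (TypeScript, Node 18+).
    // Assumes npm's lockfileVersion 2/3 layout with entries under "packages".
    import { readFileSync } from "node:fs";

    const pkgName = process.argv[2] ?? "error-ex"; // illustrative default

    // Version the lockfile says we should have.
    const lock = JSON.parse(readFileSync("package-lock.json", "utf8"));
    const locked: string | undefined =
      lock.packages?.[`node_modules/${pkgName}`]?.version;

    // Version actually installed on disk.
    const installed: string = JSON.parse(
      readFileSync(`node_modules/${pkgName}/package.json`, "utf8"),
    ).version;

    if (locked && locked !== installed) {
      console.error(`DRIFT: ${pkgName} locked at ${locked}, but ${installed} is installed`);
      process.exit(1);
    }
    console.log(`${pkgName}: lockfile and node_modules agree (${installed})`);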

Steve Gibson [01:53:57]:
So a note of thanks to our listener Kevin White, whose email about this arrived while I was assembling the show notes yesterday. But wait, there's more. Or as Jobs might have said, one more thing. A few hours later I received another note from another listener of ours, Sasha Lopez in South Wales, UK. That feedback contained a link to the larger story. Not one but 18, Leo, extremely popular packages were compromised using this same malware from the same attacker. Listen to the packages and their weekly download counts: backslash, 26 million downloads per week; chalk-template, 3.9 million downloads per week.

Steve Gibson [01:54:49]:
supports-hyperlinks, 19.2 million downloads per week; has-ansi, 12.1 million downloads per week; simple-swizzle, 26 plus million.

Leo Laporte [01:55:02]:
I'm not gonna swizzle anymore. Gosh.

Steve Gibson [01:55:05]:
27 million downloads. error-ex, there's our 47 million downloads. color-name, 191 million downloads per week; is-arrayish, 73.8; slice-ansi, 59.8; color-convert, 193 million downloads per week; wrap-ansi, 197, almost 198; ansi-regex, 243.6 million downloads per week; supports-color, 287 million downloads per week; strip-ansi, 261 million downloads per week; chalk, 300 million downloads per week; debug, 358 million downloads per week; ansi-styles, 371 million downloads per week.

Steve Gibson [01:56:00]:
Altogether these packages have more than 2 billion downloads per week.

Leo Laporte [01:56:08]:
They're not being downloaded by individuals. I know you explained this, but just for people who are going, well, I don't understand how that could be. So they're downloaded by automated tools that.

Steve Gibson [01:56:18]:
Are building software, updating the most current version.

Leo Laporte [01:56:23]:
So this tool thinks it's rebuilding something bigger and larger. And this is the problem: many of these tools have these dependencies built in, and the CI/CD pipeline pulls in these dependencies as it's rebuilding. And the author, or the person who's using those libraries, might just be a web designer who said, yeah, I want to include a nice little feature on here, and puts it in his code. He's not really paying attention to all the other stuff that's being downloaded and doesn't want to.

Steve Gibson [01:56:57]:
I mean, that's part of the benefit of this.

Leo Laporte [01:57:00]:
Right?

Steve Gibson [01:57:00]:
He just includes this module, calls some functions of color-string or simple-swizzle, and takes advantage of whatever that package provides.

Leo Laporte [01:57:09]:
He's probably not even calling it himself. It's this package that's calling it. He probably. Yeah, I've seen that you see this all the time where you install a package and this is a real problem, by the way, in all of these systems. And then they go out and get 50 dependencies.

Steve Gibson [01:57:23]:
Yeah.

Leo Laporte [01:57:24]:
And install them.

Steve Gibson [01:57:25]:
Well, and we talked about this back with the Log4j problem, because the problem there was that Log4j was a library that everything else was using. So it wasn't until all those other packages would be rebuilt that the problem would be able to get out of the system.

Leo Laporte [01:57:53]:
Plain and simple. Yeah, it has to be, because you can't commit into these packages unless you have commit privileges.

Steve Gibson [01:57:59]:
They got phished. Okay, so now we know what this is. All these packages were maliciously updated to contain code that would be executed on the client of a website, code which silently intercepts crypto. That is, the malware silently intercepts crypto and web3 activity in the browser, manipulates wallet interactions and rewrites payment destinations on the fly, so that funds and approvals are redirected to attacker controlled accounts without any obvious signs to the user. So this is a massive attack on cryptocurrency wallets, anything web hosted. The malware, which has now been fully reverse engineered, turns out to be extremely sophisticated. It's a browser based interceptor that hijacks both network traffic and application APIs. It injects itself into functions like fetch and XMLHttpRequest, which is how browsers do remote queries, and common wallet interfaces, then silently rewrites values in requests and responses.
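
For a sense of what "injects itself into functions like fetch" means in practice, here is a minimal defensive sketch, not anything from the actual incident response: in a pristine page, fetch and the XMLHttpRequest methods are native code, so a JavaScript wrapper layered over them by an interceptor usually stops stringifying to "[native code]". Determined malware can spoof toString, so treat this purely as a tripwire heuristic.

    // Heuristic check for wrapped network primitives in a browser context (TypeScript).
    // A native function's source text contains "[native code]"; a JS wrapper's usually does not.
    function looksWrapped(fn: unknown): boolean {
      return typeof fn === "function" &&
        !Function.prototype.toString.call(fn).includes("[native code]");
    }

    const suspects: Array<[string, unknown]> = [
      ["fetch", globalThis.fetch],
      ["XMLHttpRequest.prototype.open", XMLHttpRequest.prototype.open],
      ["XMLHttpRequest.prototype.send", XMLHttpRequest.prototype.send],
    ];

    for (const [name, fn] of suspects) {
      if (looksWrapped(fn)) {
        console.warn(`Possible tampering: ${name} is not native code`);
      }
    }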

Steve Gibson [01:59:22]:
That means any sensitive identifiers such as payment destinations or approval targets can be swapped out for attacker controlled values before the user even sees or signs them. To make the changes harder to notice, it uses string matching logic that replaces targets with lookalike values. It's extra dangerous because it operates at multiple layers, altering content shown on websites, tampering with API calls and manipulating what users' apps believe they are signing. Even if the interface looks correct, the underlying transaction can be redirected in the background. One of the maintainers who was compromised explained that he had been phished. He wrote about this and sent a screenshot of the email, which I have in the show notes. It says: Hi Qix, as part of our ongoing commitment.

Steve Gibson [02:00:27]:
Oh, this appeared to come from NPM themselves, legitimately. As part of our ongoing commitment to account security, we are requesting that all users update their two factor authentication credentials. Our records indicate that it has been over 12 months since your last two factor authentication update. To maintain the security and integrity of your account, we kindly ask that you complete this update at your earliest convenience. Please note that accounts with outdated 2FA credentials will be temporarily locked starting September 10th. Okay, today is the 9th; this all happened yesterday on the 8th. To prevent unauthorized access. And then there's a link, Update 2FA Now, and it said if you have any questions or require assistance, our support team is available to help.

Steve Gibson [02:01:20]:
You may contact us through this link. So note that one of the giveaways of the phishing email is the cutoff date, just two days from the notice date.

Leo Laporte [02:01:33]:
Urgency.

Steve Gibson [02:01:33]:
Creating a sense of urgency is one of the ways to get recipients to forget their own safety protocols. And get this: the domain from which the email had been sent, which appeared to be NPM themselves, was npmjs.help.

Leo Laporte [02:01:57]:
Gee, that sounds right.

Steve Gibson [02:01:58]:
It had just been registered three days before, on September 5th. So here's another example of an instance where checking the registration duration of anything we're replying to or relying upon is such a simple thing to do and really represents powerful protection. We keep seeing that attacking domains are very fresh, they've just been registered. I look forward to the day when our browsers and our email clients start putting a big red flag in front of us saying, whoa, just so you know, this domain is three days old. Does that make sense to you?
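
That kind of check is easy to prototype today. Here is a minimal sketch, assuming the public rdap.org bootstrap service (which simply redirects to the appropriate registry's RDAP server) and a purely illustrative 30-day threshold; a real browser or email client integration would obviously need caching and rate limiting on top of this.

    // Look up a domain's registration date via RDAP and flag young domains (TypeScript, Node 18+ or browser).
    async function domainAgeDays(domain: string): Promise<number | undefined> {
      const res = await fetch(`https://rdap.org/domain/${domain}`);
      if (!res.ok) return undefined;
      const data = await res.json();
      // RDAP responses carry an "events" array; the "registration" event marks creation.
      const reg = (data.events ?? []).find(
        (e: { eventAction: string; eventDate: string }) => e.eventAction === "registration",
      );
      if (!reg) return undefined;
      return (Date.now() - new Date(reg.eventDate).getTime()) / 86_400_000; // ms per day
    }

    // Example: warn on anything registered within the last 30 days.
    domainAgeDays("example.com").then((age) => {
      if (age !== undefined && age < 30) {
        console.warn(`Caution: domain registered only ${age.toFixed(1)} days ago`);
      }
    });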

Leo Laporte [02:02:42]:
Yeah, that's what NextDNS does. And I turned it off because there were sites that I wanted to access that were brand new. And then I realized maybe I should turn that back on again.

Steve Gibson [02:02:51]:
Yeah, I think, you know, just clicking through those makes sense, and, you know, pausing to ask whether there's something there makes sense. So the fate of many hundreds of millions of users of this handful of 18 npm packages critically, think about that, critically depends upon the package maintainers not falling for basic phishing attacks. Think about how similar this is to the certificate authority model, which, you know, really struggles and works at maintaining its security. However, there we have a relatively small number of trusted root authorities that all know that their existence, that their trust, relies upon not making a mistake. Here we've got the guy in Nebraska who's maintaining the random one line, you know, error-ex package, and oops, up pops a note and he doesn't want to get locked out, so he clicks the update 2FA link and ends up turning his credentials over, loses control of his repository, malware gets injected, and 2 billion downloads a week.

Leo Laporte [02:04:21]:
So, you know, and this is the problem in general, is that we've got all of these automated systems to make our life easier. You know, when you want to install something using Docker, it downloads a manifest and you just watch it as it downloads stuff, loads stuff, all this stuff spews by, and it's, oh, this is great. Boy, that was easy.

Steve Gibson [02:04:47]:
Guess I know as I started off saying, without ever intending to and with only ever having the most altruistic and best of intentions, we have slowly over time built not just a house of cards, but a massive kingdom castle out of cards. It's a system that we cannot stop using because over time we've become utterly dependent upon it. Yet it's security. Which is to say it's frankly shameful. Lack of security. I mean actual security really ought to be keeping anyone who's using it up at night. You know, everyone has only the best of intentions. Of that there's no doubt.

Steve Gibson [02:05:29]:
But, you know, an old familiar saying might apply here: the road to hell is paved with good intentions. And unfortunately we have a system that, you know, everybody is using, and it's just not secure. One piece of listener feedback, from Bill Allen. He said: The mention of Byte Magazine in Security Now 1041 caused me to remember how I became introduced to SpinRite in the first place. 1988 was the year I upgraded my dual floppy drive IBM XT clone PC to a 32 megabyte MFM RLL MiniScribe drive. Yep, remember that? I had one of those too, he said. I was desperate to maintain it, optimize it and otherwise keep it alive. I was also already a subscriber to Byte Magazine. I now remember reading that review article about SpinRite and right away contacted GRC to get a copy.

Steve Gibson [02:06:35]:
I know I started with SpinRite 1, so that seems about the right time frame. Anyways, that was a welcome trip down memory lane. I'm still using SpinRite today with version 6.1 and have introduced it over the years to new generations of technicians. There's a little sadness in all this though, as it also reminded me of the sudden demise of Byte Magazine in 1998. I remember being rather devastated since it was my primary source of tech news and views at the time, he said. Here's a bit on that from Tom R. Halfhill, Byte Magazine senior editor from 1992 through 1998, and he includes a link in his note. Signing off: Best regards, Bill Allen, Crowley, Texas, SpinRite 1, SN1. So the link that Bill provided is very interesting to anyone who loved Byte Magazine.

Steve Gibson [02:07:33]:
The page explains the sequence of events surrounding the end of Byte Magazine in detail, which amounts to CMP purchasing all of McGraw-Hill's publications and just not caring about Byte Magazine. But for example, it mentions our old friend Jerry Pournelle, writing: After the 1998 shutdown, the Byte website continued to draw about 600,000 page views a month, even without ever being updated. Obviously many people still wanted the kind of information Byte provided. This unrelenting traffic prompted CMP to revive Byte as a web only publication. In 1999, CMP convinced longtime Byte columnist Jerry Pournelle to resume his Chaos Manor column on the new byte.com website, lending some credibility to the effort. However, Pournelle left byte.com in 2006. The underfunded website lacked Byte Magazine's breadth and depth of technical content, and it vanished in 2009. So anyway, there's much more there, including a detailed FAQ created by, as Bill noted, Tom Halfhill, then the senior editor of Byte Magazine's print edition at the time of its demise.

Steve Gibson [02:09:03]:
I invite anyone who might be curious to follow the link in Bill's feedback. And thank you, Bill. Although Byte Magazine might not still be around and able to help, I'm delighted that SpinRite still is. And I have some big plans for its future. Yeah, I remember scrolling, Leo, there's an FAQ there.

Steve Gibson [02:09:22]:
Like, what happened about the layoffs? Apparently everybody got fired two days before it was shut down with like no notice.

Leo Laporte [02:09:30]:
Yeah, it's sad.

Steve Gibson [02:09:32]:
And one techie was allowed to stay and he, he, he, he resigned in protest because it's like, I'm not going to stay here when all my friends have just been laid off.

Leo Laporte [02:09:40]:
And byte.com was actually pretty awful, I remember.

Steve Gibson [02:09:44]:
I think that's the case. I imagine that Jerry just did it because he had the habit and they probably paid him something for a while.

Leo Laporte [02:09:50]:
Yeah, exactly. Yeah, yeah. I think they suckered him a little bit. Yeah.

Steve Gibson [02:09:54]:
Yeah. Okay, we're going to talk about letters of marque after our last sponsor.

Leo Laporte [02:09:59]:
Good.

Steve Gibson [02:10:00]:
And look at what happens when the government might be giving our private companies the permission to attack the people who are attacking them.

Leo Laporte [02:10:10]:
To be privateers.

Steve Gibson [02:10:11]:
It's even worse than that, Leo. It goes way beyond that.

Leo Laporte [02:10:16]:
Oh, I can't wait. This is fascinating. We'll get to that in a second. But I want to mention our sponsor for this segment of security now. Express VPN. ExpressVPN is. How can I put this? Like tinted windows for your Internet connection. You can see out, but they can't see in.

Leo Laporte [02:10:37]:
It seems like you'd want the same kind of privacy online as you might have in your limousine. ExpressVPN is the only VPN I use and trust. And you better believe when I go online, especially when I'm traveling in airports or coffee shops in other countries, ExpressVPN is my go to. Everyone needs ExpressVPN because all your traffic flows through your ISP's servers. Internet service providers, including mobile network providers, know every single website you visit. They have the keys. Right. And in the US, ISPs are legally allowed to sell that information to advertisers, so you better believe they collect it.

Leo Laporte [02:11:15]:
Well, not when you're using ExpressVPN. Because, yes, you're going through your ISP, but what they see is encrypted. ExpressVPN reroutes 100% of your traffic from your computer through secure encrypted servers and then out onto the public Internet. All your ISP sees is encrypted gobbledygook. They don't know what you're seeing, what you're using. But then now you're going to say, well, why do you use ExpressVPN? Why is ExpressVPN the best VPN? Because it hides your IP address, making it extremely difficult for third parties to track your online activity. When you emerge on the public Internet, it's not your IP address they see, it's ExpressVPN's. Plus, I have to say, ExpressVPN, unlike many other VPNs, really invests in their infrastructure. They rotate their IP addresses so it's not obviously a VPN IP address, it just looks like a normal IP address. They do so many things to protect your privacy, and they make it really easy.

Leo Laporte [02:12:10]:
And that's important, too. It's easy to use. You just fire up the app. You can click one button to get protected. It works on every device. You've got phones, laptops, tablets. They even sell routers. With ExpressVPN built in, you can put it on many routers.

Leo Laporte [02:12:24]:
So you can stay private on the go. You can even stay private at home. And it's rated number one by the top tech reviewers like CNET and The Verge. Protect your online privacy today. Visit expressvpn.com/securitynow, that's expressvpn.com/securitynow, to find out how you can get up to four extra months free. expressvpn.com/securitynow. We thank them so much for supporting Steve and the work he's doing on Security Now. Steve.

Steve Gibson [02:12:57]:
Okay, so one of the interesting and somewhat delicate topics we've touched on from time to time is the question of whether, and if so, when, it might be okay for good guys to do things that are not technically legal, but with good intent and for a hopefully good cause and outcome. In other words, making the world a better place, even if the means to do so would mean breaking a few rules along the way. An early instance of this was way back in the Internet worms era, with the likes of Code Red and Nimda. In those cases, compromised servers were actively scanning for other servers that had not yet been compromised, and the source IPs of those scans were not being and could not be spoofed. As a consequence, security firms who were running honeypots were collecting a comprehensive list of worm compromised servers, since compromised servers were reaching out at random to see whether or not an as yet uncompromised server might reside at some given IP address. So the question then became, would it be okay for the good guys to use the same now well known flaw in Microsoft's IIS server, which was enabling the worms in the first place, to reach out and proactively and remotely disinfect that machine? A bad worm had infected it. Why couldn't someone who knew where an infected machine was located by its IP address on the Internet reverse that and disinfect it? I was participating on the conference call with Washington when that idea was floated to the proper person at the Department of Justice. At the time, she made it quite clear that doing so would be against the law, plain and simple.

Steve Gibson [02:15:01]:
And in listening to her carefully, it was clearly not a wink-wink; she was not saying no while hoping that we would go ahead and do it anyway. This was not one of those let's-not-ask-for-permission, we'll-ask-for-forgiveness instances. And since then, there have been many other instances where it is so, so tempting to allow good guys to remotely fix problems that they almost certainly could fix. How many consumer routers have been found to be vulnerable? How many random application packages could be patched with the knowledge of a problem before that problem's public release? When Plex found a critical remotely exploitable vulnerability in its publicly exposed media server, it could have proactively reached out and fixed it before bad guys were able to use that flaw to install a keystroke logger onto a LastPass developer's machine at home and give LastPass the biggest black eye of its life. In most cases, anything a bad guy can do remotely, a good guy could remotely patch to close the back door long before bad guys are given the information they need in the first place. But it doesn't happen, because it's just as illegal for good guys to hack a network, even if the intention is to help that network's owner, as it is for bad guys to hack the same network. And that's the way things have been since the beginning of all this global networking business.

Steve Gibson [02:16:42]:
We're talking about this today because under our current political administration, things may be changing. Anyone who has been following US news will likely have heard that President Trump has decided to rename the U.S. Department of Defense the Department of War. That certainly reflects a change in attitude somewhere. And I was put in mind of all this when I saw a story in CyberScoop carrying the headline "Google Previews Cyber Disruption Unit as US Government and Industry Weigh Going Heavier on Offense." The teaser at the top of their story notes there are still impediments to overcome before companies and agencies can get more broadly aggressive in cyberspace, both legal and commercial impediments. I'll say, like all those pesky laws we were just talking about. Since this could change everything we know about the status quo, I want to share what CyberScoop wrote. They said: Google says it is starting a cyber disruption unit, a development that arrives in a potentially shifting U.S. landscape toward more offense-oriented approaches in cyberspace.

Steve Gibson [02:18:05]:
But the contours of that larger shift are still unclear, as is whether or not, and to what extent, it's even possible. While there's still some momentum in policymaking and industry circles to put a greater emphasis on more aggressive strategies and tactics to respond to cyber attacks, there are also major barriers. Sandra Joyce is the vice president of Google's Threat Intelligence Group, you know, that's TIG, which, as we noted earlier, has recently been extorted to fire those two researchers. She said at a conference last Tuesday that more details of the disruption unit would be forthcoming in future months, but that the company was looking for, quote, legal and ethical disruption options as part of the unit's work. She said this at the CCPL, the Center for Cybersecurity Policy and Law, event, and we'll be talking about this a lot, where she called for partners in the project: quote, what we're doing in the Google Threat Intelligence Group is intelligence-led, proactive identification of opportunities where we can actually take down some type of campaign or operation.

Steve Gibson [02:19:25]:
We have to get from a reactive position to a proactive one if we're going to make a difference right now, unquote. CyberScoop wrote: The boundaries in the cyber domain between actions considered cyber offense and those meant to deter cyber attacks are often unclear. The trade-off between active defense versus hacking back is a common dividing line. On the less aggressive end, active defense can include tactics like setting up honeypots designed to lure and trick attackers. At the more extreme end, hacking back would typically involve actions that attempt to deliberately destroy an attacker's systems or networks. Disruption operations might fall between the two, like Microsoft taking down botnet infrastructure in court, or the Justice Department seizing stolen cryptocurrency from hackers. Trump administration officials and some in Congress have been advocating for the US government to go on offense in cyberspace, saying that foreign hackers and criminals are not suffering sufficient consequences. Much-criticized legislation to authorize private sector hacking back has long stalled in Congress, but some have recently pushed a version of the idea where the President would issue letters of marque, like those for early U.S. privateers, to companies, authorizing them to legally conduct offensive cyber operations currently forbidden under US law. Whoa.

Steve Gibson [02:21:15]:
So this would not be getting a get-out-of-jail-free card after the fact. This would be a preemptive pardon for anything illegal that might be done in the security interests of the United States. So at this point, I was curious about these letters of marque, so I asked our AI Oracle about them and learned the following. It said: A letter of marque is an old legal instrument from the age of sail. It was essentially a government license authorizing a private ship owner, a privateer, to arm their vessel and attack the shipping of an enemy nation during wartime. Marque means seizure. The letter granted the holder the right to capture enemy vessels and cargo. The captured ships, called prizes, would then be brought back to port, condemned in a prize court, and sold, with profits shared between the privateer and the government.

Steve Gibson [02:22:15]:
The system blurred the line between the Navy and piracy. Without a letter, you were a pirate. With one, you were a lawful privateer. In the US context, the US Constitution, Article 1, Section 8, explicitly gives Congress the power to declare war, grant letters of marque and reprisal, and make rules concerning captures on land and water. So legally, only Congress, not the President, can authorize letters of marque. That means the phrase "presidential letter of marque" is technically a misnomer. The President cannot independently issue them. Congress would have to approve.

Steve Gibson [02:23:04]:
Now, the reason ChatGPT offered that was that, not knowing any better and following from CyberScoop's article, which said that, quote, some have recently pushed a version of the idea where the President would issue letters of marque, my questioning prompt to ChatGPT was, what is a presidential letter of marque? As we learned, there is no such thing. On the other hand, our experience during President Trump's second term suggests that this president would not let that stop him. Finishing up with ChatGPT's reply, because it's interesting, it said: The last US letters of marque were issued during the War of 1812. After that, the US Navy became strong enough that privateering was no longer needed. International law, the 1856 Declaration of Paris, abolished privateering among most major powers. The US never signed, but has honored the ban in practice. In modern discussions, you'll sometimes hear about reviving letters of marque in the context of cybersecurity, for example, allowing private companies to take offensive action against foreign hackers. But that's purely theoretical and would require an act of Congress.

Steve Gibson [02:24:26]:
So to answer directly, a presidential letter of marque would be a government license to privately wage war on behalf of the United States. But under U.S. law, the president alone has no authority to issue one. It would require congressional authorization, except of course that our current president has shown himself to be quite willing to retest many of the nation's long-standing protocols and assumptions. Our Supreme Court, as a consequence, has been unusually busy. Returning to CyberScoop's reporting, they wrote: Experts say that the private sector has some catching up to do if there's to be a worthy field of firms able to focus on offense. John Keefe, a former National Security Council official from 2022 to 2024 and a National Security Agency, you know, NSA, official before that, said there had been government talks about a narrow letters of marque approach, quote, with the private sector companies that we thought had the capabilities. The concept was centered on ransomware, Russia and rules of the road for those companies to operate.

Steve Gibson [02:25:46]:
Speaking, like others in this story, at Tuesday's conference, Keefe said, quote, it wasn't going to be the Wild West, unquote. John McCaffrey, chief information security officer at the defense tech company Anduril Industries, said that the companies with an emphasis on offense largely have only one customer, and that's governments. He said, quote, it's a really tough business to be in. If you develop an exploit, you get to sell to one person legally, and then it gets burned and you're back again. And that actually is what we talked about recently, about the whole government market for zero days. CyberScoop said: By their nature, offensive cyber operations in the federal government are very time and manpower intensive, said Brandon Wales, a former top official at the Cybersecurity and Infrastructure Security Agency, CISA, and now Vice President of Cybersecurity at SentinelOne. Private sector companies could make their mark by innovating ways to speed up and expand the number of those operations, he said. Overall, among the options for companies that could do more offensive work, Andrew McClure, managing director at ForgePoint Capital, said the industry doesn't exist yet, but I think it's coming.

Steve Gibson [02:27:13]:
Brandon Wales, now at SentinelOne, said that Congress would have to clarify what companies are able to do legally as well. But that's just the industry side. There's plenty more to weigh when stepping up offense. Megan Stifel, chief strategy officer for the Institute for Security and Technology, said: However we start, we need to make sure that we have the ability to measure impact. Is this working, and how do we know? If there was a consensus at the conference, it's that the United States, be it the government or the private sector, needs to be doing more to deter adversaries in cyberspace by going after them more in cyberspace. One knock on the idea has been that the United States can least afford to get into a cyber shooting match, since it's more reliant on tech than other nations, and an escalation would hurt the US the most by presenting more vulnerable targets for enemies. But Dmitri Alperovitch, chairman of the Silverado Policy Accelerator, said that that idea was wrong for a couple of reasons, among them that other nations have become just as reliant on tech too.

Steve Gibson [02:28:33]:
And quote, the very idea that in this current bleak state of affairs engaging in cyber offense is escalatory, I propose to you, is laughable, he said. After all, what are our adversaries going to escalate to in response? Ransom more of our hospitals? Penetrate more of our water and electric utilities? Steal even more of our intellectual property and financial assets? Alperovitch continued: Not only is engaging in thoughtful and careful cyber offense not escalatory, but not doing so would be. Okay, so this was just one article in CyberScoop, but the consequences of this recent conference captured the attention of the entire industry. Three days ago in Lawfare, the headline was "Google Sharpens Its Cyber Knife." Four days ago, the publication Today's General Counsel's headline was "Google Is Forming a Cyber Disruption Unit." Also four days ago, OODAloop.com: "The Perils of Precedent: Could Google's Disruption Unit Invite Retaliation?" Five days ago in SC Media: "Google to Launch Cyber Disruption Unit." Six days ago, the Digital Watch Observatory's headline was "Disruption Unit Planned by Google to Boost Proactive Cyber Defense." Seven days ago in Homeland Security Today: "Google Previews Cyber Disruption Unit as US Debates Stronger Offensive Measures." Even Tom's Hardware, back on August 28th, carried the headline "Google Is Getting Ready to Hack Back as US Considers Shifting From Cyber Defense to Offense." So from the perspective of someone keeping up on cybersecurity news here in the US, it does feel very much as if we are under continual assault from what we're told are aggressive and hostile state-sponsored hackers operating out of Russia, China and North Korea. We know that hospitals and schools are being hacked, having their networks taken down and their services, whether it be health care or education, impacted and interrupted.

Steve Gibson [02:31:01]:
And we know that our U.S. corporations now live under the constant threat that some well-meaning but momentarily inattentive employee may click on a link they receive in an email, which results in a network compromise, the exfiltration of the corporation's data, exposure to extortion demands, and public humiliation followed by shareholder lawsuits. So no way do I, or I'm sure anyone, think that public or private entities in the US should indiscriminately attack Chinese, Russian or North Korean institutions or enterprises. That's not us, and I was assuming that was not what anyone was talking about. It doesn't sound like Google is. But there is an area of this that makes me feel somewhat queasy, which is the use of the term retaliation. That term is being bandied about in Washington policy circles. It's one thing for Google to disrupt an adversarial attacker's illegal operation which is in the process of attacking us. I'd be inclined to call that very proactive defense. But it's another thing entirely to use the threat of wholesale unfocused reprisal as a deterrent, which is also being discussed. The conference that recently brought all this to a head was held by the Center for Cybersecurity Policy and Law, which as I mentioned is the CCPL.

Steve Gibson [02:32:40]:
Exactly two weeks ago, on August 26, the conference announcement said, quote, CCPL will convene cybersecurity leaders from government, industry and policy disciplines to delve into core questions raised in the recent CCPL report, quote, "To Hack Back or Not Hack Back. That Is the Question. Or Is It?" Okay, so I've linked to this seven-page PDF in the show notes, and for anyone who's interested in this whole topic, believe me, this report will not bore you. It's quite chilling. Under its section "Why are we talking about this now?" the report writes: The arrival of a new administration and the growing complexity of the cyber threat landscape have reignited discussions around the use of offensive cyber operations. The White House has suggested that such tactics could be a valuable part of the U.S. national security toolkit, particularly to counter cyber threats from China. Proponents highlight major incidents, including the Salt Typhoon and Volt Typhoon campaigns and the recent breach of the U.S. Department of the Treasury, as clear indications that stronger deterrence measures are necessary to combat cybercriminals and state-sponsored threat actors.

Steve Gibson [02:34:13]:
Though not a new debate, they write, some senior officials and agencies are signaling renewed interest in expanding offensive cyber capabilities, including potential involvement by the private sector. The US Cyber Command, US CYBERCOM, has emphasized the need for more proactive actions, especially in defending critical infrastructure. The goal is to use offensive cyber tools not just in retaliation, but also as a deterrent to prevent future attacks. Okay, now I hope it's clear to everyone that this really changes the game. Maybe it's a good thing, I can't judge that. Perhaps officials in China, Russia and North Korea have been laughing at the U.S. and at our quaint Constitution, which so often ties our hands and prevents ad hoc retaliation during a fit of pique. If that's been the case historically, I would at least imagine that they are likely laughing less loudly with President Trump sitting in the Oval Office, where fits of pique appear to be the order of the day.

Steve Gibson [02:35:33]:
If having President Trump's finger on the button gives them pause, it's not clear to me that's a bad thing. But to be very clear, what this policy exploration paper is examining is not some Google disruption of a specific targeted foreign criminal enterprise. It's exploring a significant escalation in US cyber posture. The "Why are we talking about this now?" section of the paper continues, saying: Pentagon Acting Chief Information Officer Katie Arrington stated her role includes removing policy barriers that limit the Department of Defense's, now known as the Department of War's, ability to counter adversaries, stressing the need for enhanced offensive capabilities. Similarly, CIA Director John Ratcliffe has expressed support for developing offensive cyber tools and establishing a comprehensive cyber deterrence strategy. Former National Security Advisor Mike Waltz has also endorsed the use of offensive cyber operations to impose greater costs on threat actors like China. Katie Sutton, the nominee for Assistant Secretary of Defense for Cyber Policy, pledged during her confirmation hearing to review National Security Presidential Memorandum 13. That's actually kind of infamous.

Steve Gibson [02:37:08]:
It's referred to in that seven-page document, which governs the DoD's authority to conduct offensive cyber operations. Originally issued under the Trump administration in 2018 and revised by the Biden administration in 2022, NSPM 13 provides, quote, well-defined authorities to the Secretary of Defense, now actually known as the Secretary of War, to conduct time-sensitive military operations in cyberspace, according to a 2020 speech given by Paul Ney, the former DoD general counsel. Congress is also revisiting the role of offensive cyber operations. Although the bipartisan, quote, Active Cyber Defense Certainty Act, introduced in 2019 by Representative Tom Graves, failed to pass the 116th Congress, it has helped revive the debate. The bill aimed to amend the Computer Fraud and Abuse Act to grant legal authority for organizations to engage in active cyber defense, including offensive measures to protect their networks. Okay, now none of us are on the inside in the way these government officials are. So it's not possible to fairly armchair quarterback what they want to do, to judge how much more freedom they need and what they would do with what they were given.

Steve Gibson [02:38:45]:
And it does appear that even if President Trump were to issue letters of marque, they would only be serving as a bridge to where it certainly does appear the country's current intelligence and defense agency heads and many legislators want to take the country. Sentiment appears to be moving in a cyber-aggressive direction. And as for cyber as a deterrent, I'm not sure about that. I don't like the idea of any conflict being escalated, whether cyber or conventional. The US has a massive conventional military that we've been relatively restrained in deploying, and it's likely that the knowledge of its potentially overwhelming strength has served as an effective deterrent to those who have ambitions to exercise more of their own lesser power. The problem is, military parades attempt to demonstrate impressive military hardware, which is visible and can be counted, as can stockpiles of weapons and warheads. It's not clear to me that cyber is at all the same.

Steve Gibson [02:40:03]:
Can having an impressive cyber capability form a deterrent? I don't see how. Having warheads whose permanent destructive potential is well understood, along with a fail-safe system for their deployment, can serve as a powerful deterrent, because they do not need to be used to be appreciated. By comparison, the only way to appreciate a nation's cyber offensive capability is for it to be used against an adversary. You know, that's not the definition of a deterrent. In that sense, cyber capabilities, unfortunately, are more like biological weapons, which the various superpowers all assume each other have, but no one dares to use. They're not something that can be paraded in the streets or counted in silos. They are simply feared, while their existence is adamantly denied. Following that analogy, fear of what such a weapon might do if it were ever to be released does serve as a deterrent. I suppose so.

Steve Gibson [02:41:12]:
Might the various other cyber warfare nations, China, Russia and North Korea, be fearful of the United States' cyber warfare capabilities? I have no idea. As I said at the top, this entire area of offensive cyber war is largely classified, unknown, uncomfortable and unexplored territory whose exploration produces more questions than answers. It is also, as they say, and as I said, far above my pay grade. I am much more comfortable exploring browser cookies, certificate revocations, and the mechanics of other tangible technologies. But the fact is, conferences like the one that was just held two weeks ago today, during which Google announced their formation of a disruption unit, are occurring. And as all the other headlines clearly showed, it's big news. So as uncomfortable as it may be, and as many questions as we may be left with, I think we should at least be aware of what's percolating out there on the cyber warfare front. It may well change the way the world is organized.

Steve Gibson [02:42:26]:
So.

Leo Laporte [02:42:29]:
It's an interesting question. I mean, when you compare it to biological warfare, it's very clear we have a long tradition, with the Geneva Conventions and other agreements between nations, of staying away from stuff that could really escalate into something nightmarish.

Steve Gibson [02:42:47]:
Right.

Leo Laporte [02:42:48]:
Do you feel that cyber warfare is that risky?

Steve Gibson [02:42:53]:
There's a little bit of a problem with containment, I think. I mean, okay, so say for example, that the US has the ability to shut off the electrical power for Beijing.

Leo Laporte [02:43:08]:
Right.

Steve Gibson [02:43:10]:
It's... It would be a huge embarrassment if it actually happened. Deaths would occur, right, if, like, all power, I mean all.

Leo Laporte [02:43:19]:
It would be an act of war.

Steve Gibson [02:43:20]:
It would be an act of war. Yes. And I mean, so it's extremely escalatory by nature. So maybe you do something smaller. You shut off the power for a small province somewhere, that's like, you...

Leo Laporte [02:43:36]:
Know, almost do a demonstration to see, to show that we have the capability. Don't, don't see. I, I think that I, it's an interesting.

Steve Gibson [02:43:45]:
Or maybe we attack a lesser country.

Leo Laporte [02:43:52]:
Well, we did that. When the Ukraine war began, we engaged in offensive cyber warfare on their behalf. We know we did that. And I suspect that's one of the things the Ukraine war has been all along: a proxy for these superpowers to demonstrate capabilities in an attempt to deter each other.

Steve Gibson [02:44:16]:
Right. Like.

Leo Laporte [02:44:17]:
Right.

Steve Gibson [02:44:17]:
We really don't want you to go any further, Vladimir. You know, we're not happy with you getting closer to the NATO borders. So...

Leo Laporte [02:44:24]:
Right. You know, and the risk is, if you do something large scale... Look, the Chinese are probably much more able to disrupt our grid than we are able to disrupt their grid. Right?

Steve Gibson [02:44:35]:
That's my worry. And so we only have a US-centric view, and I talk about that all the time. We know how much we're being attacked. I don't know. Well, and I don't think that... I mean, they're attacking us indiscriminately. Right? They're state-sponsored, and they're ransoming our enterprises, they're attacking our hospitals and our school systems. I mean, I hope we're not doing that.

Steve Gibson [02:45:04]:
That's mean. That's disgusting.

Leo Laporte [02:45:06]:
Yeah, that's the other thing is, you know, we often believe that we in the United States have a higher calling.

Steve Gibson [02:45:14]:
And don't want to engage in... pesky Constitution.

Leo Laporte [02:45:17]:
Yeah. But at the same time, I can understand the urge, and I do feel you're right about where we stand right now. I mean, just nicknaming the Department of Defense the Department of War kind of says it all. We are very much in a saber-rattling situation. And I'm sure there are people in the government who are saying, let's go, let's go, why are we not doing it, regardless of the consequences? I think the sensible thing would be to demonstrate the capability as a deterrent, just as you hope with...

Steve Gibson [02:45:47]:
Then how do you.

Leo Laporte [02:45:49]:
Well, you do things that are small scale and targeted, and you do them in a way that is clearly a demonstration. So you don't... You know, it would be enough to put on Vladimir's screen, hey, we're watching you, so knock it off. That would harm no one, but it would be an effective demonstration of our capabilities. I think there are ways.

Steve Gibson [02:46:16]:
Russia is still on the Internet. It would be nice if we just denied, you know, just... like Russia.

Leo Laporte [02:46:21]:
Well, that's... I don't think you want to do something so dramatic. I think what you want to do, and I think there are ample opportunities to do this in cyber warfare, is demonstrate a capability and then say, so what? That's effective, I would think. And it just... As you say.

Steve Gibson [02:46:36]:
Well, a beautiful example is what we did with Stuxnet. The world went, oh, my goodness.

Leo Laporte [02:46:42]:
Right.

Steve Gibson [02:46:42]:
You know.

Leo Laporte [02:46:42]:
Well, Israel has very clearly shown that it has a broad range of offensive.

Steve Gibson [02:46:48]:
Pagers exploding and, you know, being targeted by your bodyguards.

Leo Laporte [02:46:55]:
But even Israel is reluctant, cell phones too, to kind of engage in wholesale...

Steve Gibson [02:47:00]:
To show all their cards. Yeah, right.

Leo Laporte [02:47:03]:
Because it's dangerous, it escalates, it gets out of control. And we are, and this is the real point, we are as vulnerable as anybody, maybe more so. So it would eventually come back to us. Collateral... This paper you referred to, which is very interesting, lists a number of risks, including collateral damage and risk of retaliation. And I think those are very serious concerns. I hope that the people who are in charge of this pay attention to this. I think the worst thing to do would be to give private companies this capability.

Leo Laporte [02:47:39]:
Just as we don't give private companies nuclear weapons and tanks, we shouldn't here. This is something the government should be responsible for; it should develop this capability and not give letters of marque if...

Steve Gibson [02:47:53]:
Effort were put into securing what we have, the way we talk about being able to do all the time. I mean, all we have to do is, you know, fix the security of our border devices.

Leo Laporte [02:48:09]:
Well, we have issues with our grid because of the way we've set up the grid; there is no national grid.

Steve Gibson [02:48:14]:
Yep, yep.

Leo Laporte [02:48:15]:
And so it's privately held by a vast number of smaller companies with differing priorities. And we...

Steve Gibson [02:48:25]:
Yeah, and we've also talked about how there is load sharing across grid boundaries, across sub-grid boundaries, and if you take out a chunk of it, there is a cascading effect.

Leo Laporte [02:48:36]:
We've seen it happen. Yes, we've seen it happen. So I think we are vulnerable, and I think we should be very... You're right. That's the NSA's mission. It's a twofold mission: one is to protect us, and the other is to develop offensive weapons, which they are doing. I mean, we know they do it from the Snowden revelations.

Leo Laporte [02:48:53]:
We know they do it and I hope they're doing their job, but I hope we don't engage in outright cyber warfare because that would not end well for anybody.

Steve Gibson [02:49:04]:
Well, as you know, I have been approached, and I said no.

Leo Laporte [02:49:08]:
Yeah, you're like Good Will Hunting. You say, no, I'm not going to work for the.

Steve Gibson [02:49:13]:
I just, I.

Leo Laporte [02:49:16]:
It's a really interesting subject, and I do hope sanity and cooler heads prevail, because the risks are high, just as they are with nuclear and biological weapons. It's a dangerous, dangerous game.

Steve Gibson [02:49:32]:
Yeah. I worry that there's a sense that nothing explodes in the same way that a bomb does. You know, it seems like, oh, well, my aunt got hacked and she's fine. It's like, okay, careful.

Leo Laporte [02:49:52]:
I suspect that in the next five years, and we'll be here to report it, there will be an incident created by a nation state that will be dramatic and very provocative and perhaps cause great loss of life. That will be considered an act of war, just as 9/11 was. And I fear the result of that. I think we're going to see that in the next few years. We'll be here to talk about it.

Steve Gibson [02:50:19]:
During the podcast life.

Leo Laporte [02:50:21]:
Yeah, during the podcast lifetime. I really do. The only way to win is not to play the game.

Steve Gibson [02:50:27]:
Yes.

Leo Laporte [02:50:29]:
Steve Gibson, always fascinating. I really appreciate the time you spend putting this together every week. Steve works really hard, I don't know if you all know that, to put together an amazing couple of hours. And I'm glad you listen. Now, here's the deal.

Steve Gibson [02:50:47]:
I do know from the feedback, Leo. I know from the feedback.

Leo Laporte [02:50:50]:
Yeah.

Steve Gibson [02:50:51]:
That how much this. This.

Leo Laporte [02:50:52]:
It's important. It matters to us.

Steve Gibson [02:50:54]:
It's important. So I get that. I thanked everybody.

Leo Laporte [02:50:57]:
It's important work. I think the whole reason TWiT exists, from day one, was that you and I and the rest of the TWiT team felt this was important work, and we care very much about it. So that's why I encourage people to support your work. Go to GRC.com, pick up a copy of SpinRite, the world's best mass storage maintenance, performance-enhancing and recovery utility. It even works on Kindles, boys and girls. GRC.com. That's Steve's bread and butter. But while you're there, there's a lot of free stuff you could check out.

Leo Laporte [02:51:27]:
SpinRite, ShieldsUP. He's very... Not just SpinRite and ShieldsUP. He's very generous. I'm looking forward to the DNS Benchmark. There's a current version out now; he's working on a new one. You can also get...

Leo Laporte [02:51:38]:
If you want to make comments, there's a couple of things. He's got a great forum. GRC.com, is it /forum? I can't remember.

Steve Gibson [02:51:47]:
Forums.grc... forums.grc.com. Okay.

Leo Laporte [02:51:51]:
Plural forums.

Steve Gibson [02:51:52]:
Yeah. Very active and really good people there.

Leo Laporte [02:51:54]:
Really good. Yeah, I know that. You can also send him email if you validate your address by going to grc.com/email. At that point, you can also sign up for his newsletters. They're unchecked by default, but if you check them, and I would recommend it, you can get the weekly show notes, which I got yesterday, so usually the day before the show, including the Picture of the Week. You have time to comment on it. There also is a second mailing list, which he uses...

Leo Laporte [02:52:22]:
He's only used it once, very infrequently, but I suspect we'll be getting an email soon for the DNS Benchmark Pro. Anyway, sign up. You're going to want that email. You can also get the show from us. Oh, I didn't mention, Steve's got two, three, maybe four, if you count them all, unique versions of the show. He has a 16 kilobit audio version, which really, unless you have no bandwidth, you shouldn't get.

Leo Laporte [02:52:47]:
But if you really are bandwidth impaired, get it. He also has a 64 kilobit version that sounds fine. That's fine. He also has the show notes. As I said, he works very hard. They are very complete.

Leo Laporte [02:52:59]:
They're... It's like a novel every week. And he also has transcripts written by Elaine Farris, so you can read along as you listen, or you can use the transcripts to search. I think everybody who listens to the show should have a complete collection. You should have 1,042 episodes. You should have all the transcripts, you should have all the show notes. You should bind them and put them on your bookshelf. You never know when you're going to need them.

Leo Laporte [02:53:21]:
And when you're in retirement, as I am soon to be, I hope, you could just put it on loop and you'll never run out of Security Now for the rest of your life. We have copies of the show. We have two unique versions. We have 128 kilobit audio. Don't ask, it's a long story. We also have video, which is worth it just to see Steve's beautiful mustache. So that's at GRC... I'm sorry, twit.tv/sn.

Leo Laporte [02:53:47]:
There's also a YouTube channel that has the video. That's really most useful if you want to share a clip. If you wanted to share a clip with your senator or your member of Congress of what we just talked about, cyber warfare, that'd be a great example. Just send them a YouTube clip. Everybody can see it, even a member of Congress. And then, you know, you can express your opinions this way or share important information with friends and family and coworkers. There's also, of course, the best way to get the show: subscribe. It's an RSS feed.

Leo Laporte [02:54:17]:
Subscribe in your favorite podcast client. You'll get it automatically as soon as it's available, audio or video or both. And if you do subscribe, do me a favor: leave a five-star review for Steve so that others can discover this amazing resource that Steve works so hard to put out. You also support us, of course, if you're a member of the club; we'd love to have you: twit.tv/clubtwit. That pays a quarter of Steve's salary.

Leo Laporte [02:54:42]:
It pays 25% of our operating expenses. It's a very important resource. We are seeing, as we fully expected with the shaky economy, advertisers kind of pull back a little bit. And that means we're going to have a financial crunch towards the end of the year. We don't want to cut back. We really want to keep going. Your help makes all the difference. And I know that may mean you're having a financial crunch, too, so if you can't afford it, don't worry.

Leo Laporte [02:55:08]:
We will always offer this show for free. But if you can, please support us by going to twit.tv/clubtwit. I think I've said everything that I need to say, except thank you, Steve, for another great episode. And we will see you next week on Security Now.

Steve Gibson [02:55:23]:
My pleasure. And I will work to get the date right on the show notes next...

Leo Laporte [02:55:27]:
That's okay. That's okay. Security now.
