Transcripts

Security Now 1065 transcript

Please be advised this transcript is AI-generated and may not be word for word. Time codes refer to the approximate times in the ad-supported version of the show.

 

Leo Laporte [00:00:00]:
It's time for Security Now. Steve Gibson is here. We have lots to talk about, uh, a big change to Chrome bringing something called device-bound session credentials to your browser. Steve's going to talk about how you can prove you are who you say you are when it comes to your code signing. And bad news about more than 200 Chrome extensions that were spying on more than 34 million people. That and more coming up next on Security Now.

TWiT.tv [00:00:32]:
Podcasts you love from people you trust.

Leo Laporte [00:00:36]:
This is TWiT. This is Security Now with Steve Gibson, episode 1065, recorded Tuesday, February 17th, 2026. Attestation. It's time for Security Now, the show where we cover the latest in security, privacy, how things work, sci-fi, and whatever else this guy here is up to. Mr. Steve Gibson, welcome.

Steve Gibson [00:01:06]:
I do try to keep us mostly on track, though, you know, the world is not monotonic. So, you know, lots of things are going on.

Leo Laporte [00:01:17]:
You're a, a polyglot, a poly— you, you know everything. So it's, uh, it's nice to talk about all these things.

Steve Gibson [00:01:25]:
Certainly don't know everything. I do— there are things I know a lot about and things that I'm interested in learning more about. So, but yeah, I'm definitely curious. Um, just from my first moments of awareness, I wanted to know how things work. That's what I want to know, how things work.

Leo Laporte [00:01:44]:
Yeah. Yeah, I agree.

Steve Gibson [00:01:46]:
And so I lost my fear of, you know, looking inside to go, oh look, that little cam goes this way and that pushes that lever over here and that causes that to drop down. And, you know, I was very good at the game, the board game Mousetrap, for that reason back, back in our youth. Um, okay, the elephant in the room is the 28-page security research paper that was recently published after I put the show this week together.

Leo Laporte [00:02:19]:
I, I felt so bad because I sent it to you and, and I said, oh Steve, I know you're done.

Steve Gibson [00:02:24]:
You and about 50 of our listeners. I mean, because we got it. I've been very impressed with how in touch our, our listener community is, because it was like, oh Steve, oh my God, okay, you know, what, what does this mean? So to do it justice, I will answer that question next week.

Leo Laporte [00:02:42]:
Okay, good.

Steve Gibson [00:02:43]:
The good news is no hair on fire. It's not the end of the world.

Leo Laporte [00:02:49]:
Um, it, it's about password managers.

Steve Gibson [00:02:52]:
Yes, sorry. Uh, so, uh, ETH Zurich— the, the researchers there who we've spoken of many times as a consequence of their work— and some Italian guys got together and did a deep dive into the consequences of server-side (you know, i.e., cloud is now the new term) attacks on three popular password managers, you know, browser-based password managers: Dashlane, uh, LastPass, and Bitwarden. LastPass, of course, a previous sponsor and favorite of ours until they screwed up. And actually, that was on the server side, so that's kind of interesting, um, and scared us all. And Bitwarden, a current sponsor of the TWiT network. Um, one of the— this was everybody.

Leo Laporte [00:03:50]:
It was 1Password, it was Dashlane, it was everybody, which I— yes, that was interesting.

Steve Gibson [00:03:55]:
Although they did focus on those three, and I thought it was interesting. First of all, Dashlane and Bitwarden both responded, Bitwarden with a thanks for the analysis. And Bitwarden commented that as a consequence of the fact that they are an open-source system from one end to the other, the job of the security researchers was far more enabled, because it wasn't necessary for anyone to reverse engineer their stuff. You know, they're wide open. Um, okay, so again, as I started off saying, no hair on fire. Um, I'll give us a complete readout of it next week, where we look in detail at what it was that was found. Um, both Dashlane and Bitwarden have either already responded to the issues or are doing so— again, none of which were— I mean, these were like worst case: if a bad guy completely took over your server infrastructure, what could be learned? And to give you some feel for it, um, there was an instance, I think it was with Dashlane, where they were deliberately supporting older crypto standards for the sake of backward compatibility.

Steve Gibson [00:05:17]:
So if you could— if you took over the server infrastructure, forced a, a protocol downgrade to use the older— the oldest supported crypto, and the user had a weak password, then it might be possible to decrypt their vault. So again, that's a lot of ifs.

Leo Laporte [00:05:41]:
That's a lot of ifs.

Steve Gibson [00:05:42]:
And that's my point, is that these were all like that, you know. I mean, definitely useful. Um, the researchers commented that they were somewhat surprised that this hadn't been done before. It's like, you know, here we are running around all using our password managers, just sort of thinking, well, seems great. But, you know, this is really the role of independent research, which an open-source system like Bitwarden makes far more possible. So next week I'll have the whole update. But I do want to thank our listeners, and Leo, you, for bringing it to my attention yesterday. It's next week's topic. Good, good.

Leo Laporte [00:06:26]:
Yeah, because I want to know, I mean, you know, I know Bitwarden's a sponsor, but we want to know, as is 1Password.

Steve Gibson [00:06:32]:
Yeah, well, and, and, you know, we were bullish on 1Password until— and it's interesting too, because this has an echo. I'm feeling like it was 1Password not updating.

Leo Laporte [00:06:46]:
No, LastPass. You're talking about LastPass.

Steve Gibson [00:06:47]:
I'm sorry.

Leo Laporte [00:06:48]:
I'm sorry.

Steve Gibson [00:06:48]:
I'm sorry.

Leo Laporte [00:06:48]:
Yes.

Steve Gibson [00:06:49]:
It was LastPass not updating the iteration level of their PBKDF2— which, you know, is the password-based key derivation function— which got them into trouble. And it's not laziness so much as a fear about breaking something. I mean, I, I think if, if a— if there's a lot of legacy code and the people who wrote it are gone, and new guys are coming along going, oh yeah, you know, we don't want to be responsible for breaking something. So there's kind of a, like, leave it alone if it's not broke. But unfortunately, the way crypto standards are going, you do need to keep rolling forward because the attacks are getting stronger. Anyway, we'll look at that completely, but no one needs to, like, fear that this means they have to go back to a paper pad for writing their passwords down. No. Today's topic for Podcast 1065 is attestation.
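For context on the PBKDF2 iteration issue just mentioned (a sketch, not from the show): the iteration count is PBKDF2's tunable work factor, and each extra iteration multiplies what an attacker pays per password guess. OWASP currently recommends 600,000+ iterations for PBKDF2-HMAC-SHA256, while older LastPass vaults reportedly used far lower counts. A minimal standard-library sketch, with a hypothetical `derive_vault_key` helper:

```python
import hashlib
import os

def derive_vault_key(password: str, salt: bytes, iterations: int) -> bytes:
    """Stretch a master password into a 256-bit vault key with
    PBKDF2-HMAC-SHA256. The iteration count is the work factor an
    attacker must repeat for every single password guess."""
    return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)

salt = os.urandom(16)
# Same password and salt; only the work factor differs.
legacy = derive_vault_key("correct horse", salt, 5_000)    # old, too-low setting
modern = derive_vault_key("correct horse", salt, 600_000)  # current OWASP guidance
```

Note the derived keys differ, which is exactly why vaults must be re-encrypted when the iteration count is raised: the change is not transparent, which helps explain the "don't break anything" reluctance Steve describes.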

Steve Gibson [00:07:59]:
I want to share an adventure I've just survived, which I will get to at the end of the podcast. Oh boy, uh, really, really interesting what's going on in the industry, uh, and understandable. Uh, so I will get to all that. We're going to talk about websites, uh, websites placing high demands upon limited CPU resources. I realized after we talked about this last week, Leo, what happened with AI.com and why that graphic you showed showed that Cloudflare was just all fine, but the host was unresponsive out at the end. Right? What happened? Because I realized that we've talked about this before, but it just didn't hit me until I was thinking about it later.

Steve Gibson [00:08:52]:
Also, in a worrisome move, Microsoft appears to be backing away from its commitment to security. Uh, okay. Uh, also, what's Windows 11 26H1 and where do we get it? Um, Chrome 145 is released, and it brings something known as device-bound session credentials finally out to the mainstream world. We talked about this, I think it was last April, but we're gonna, uh, you know, circle around again because now it's here. Also, I had a blurb, and I heard you, you guys, you've been talking about this in a number of different, uh, places, Leo. More, more countries moving to ban underage social media use. Yeah, and Discord to require proof of adulthood for adult content. Uh, and there's been a little bit of overreaction to that, so we'll tamp that down.

Steve Gibson [00:09:54]:
We have the return of Roskomnadzor, which, you know, no podcast would be complete without that. Also, might you still be using WinRAR 7.12? I was.

Leo Laporte [00:10:09]:
What?

Steve Gibson [00:10:10]:
Yeah, it caught me. So we're going to have to make sure nobody is. Also, we now have proof that Paragon's Graphite smartphone spyware can definitely spy on all instant messaging apps. A researcher discovered 30 malicious Chrome extensions, and a different project found 287 Chrome extensions which were spying on 37.4 million of their users. So this is really a problem that we're going to talk about. The first malicious Outlook add-in has stolen 4,000 of its users' credentials. I've got some thoughts on AI vibe coding, and then I'm going to go through how I just survived obtaining a new code signing certificate. And of course we have a fun and interesting picture of the week.

Steve Gibson [00:11:11]:
So yeah, I think we got a good one here for February.

Leo Laporte [00:11:14]:
As usual, as usual, Security Now. You've been waiting all week. It's finally here. But while Steve takes a sip from that giant mug of coffee, I am going to talk about our sponsor for this segment of Security Now. It's in my hand right here. This is— yes, you recognized it. This is not an external hard drive, although it kind of looks like one. It's about the size of a USB hard drive. But I guess the giveaway is that, uh, that Thinkst Canary logo on the front and the, uh, the fact that the only connector on it is an Ethernet port, and that is because this is my Thinkst Canary honeypot.

Leo Laporte [00:11:51]:
This is gold. If you have a network, if you know bad guys are trying to penetrate your network— and I think any business probably should be thinking that, because they are— you really have to ask yourself, how would I know if somebody successfully got in? How would I know if a hacker or even a malicious insider was wandering around inside my network looking for data about our customers to exfiltrate, looking for places they could put time bombs for ransomware? How would I know that? It might terrify you to think about. I mean, every time we see a breach, we see these numbers. Well, they found out that somebody broke in 2 months ago. The average time between being penetrated and the time a company notices they've been breached is 91 days. 3 months. That's 3 months too long. That's why you get the Thinkst Canary.

Leo Laporte [00:12:52]:
These are honeypots, easy to deploy. You can do it right on the, uh, on the console on your, on your web page there. And it can be almost anything. I mean, everything from a Windows server to a SCADA device to a Linux server with an SSH server. You pick. You can also create lure files. They look like, you know, document files— like word processing document files or spreadsheet files— or they can even be things like WireGuard configuration files, except they're not. A bad guy says, oh, look, there's their WireGuard configuration file, and there's passwords in there, or oh, look, that spreadsheet on their Google Drive says payroll information.

Leo Laporte [00:13:34]:
Man, I bet that's got some juicy stuff. As soon as they open it, you get an alert. As soon as they try to brute force your fake internal SSH server, you get an alert. No false positives, just the alerts that matter. And you get it any way you want it: SMS, email, syslog, webhooks. There's an API. All you gotta do is choose a profile for your Thinkst Canary device.

Leo Laporte [00:13:59]:
And it's so easy to do that you could change it every day if you feel like it. I do. I often just play with it just to see what else it could be. And these profiles are good. I mean, this thing is beautifully designed. It has the actual MAC address of the company it's emulating, so a hacker can't look at it and say, well, I know that's phony. There is no way to tell.

Leo Laporte [00:14:17]:
Mine is usually a Synology NAS. It has the DSM7 login page. Actually, that's good, to have a login page, because you get more information about what the bad guy knows when you see what email address and password they use, right? Uh, just choose a profile for your Thinkst Canary device, register it with your hosted console. You're going to get monitoring, you're getting notifications, and you just sit back and you relax. An attacker who's breached your network, a malicious insider, any adversary will make themselves known. They can't help it, because it doesn't look vulnerable. It looks valuable. It looks like what they're looking for.

Leo Laporte [00:14:51]:
As soon as they access your Thinkst Canary, you got 'em. Now, I think these are fantastic. And how many you get depends really on how big your company is, how big your network is. You should certainly have one for every network segment. A big bank might have hundreds; with branch offices, might have them everywhere. A small business like ours, maybe a handful. I'll give you an example. Go to canary.tools/twit.

Leo Laporte [00:15:14]:
For just $7,500 a year, you're going to get 5 Thinkst Canaries. You'll also get your own hosted console. You'll get upgrades, you'll get support, you'll get maintenance. And if you use the code TWIT in the how did you hear about us box, you're going to get 10% off the price, and not just for the first year, but forever. For life, for as long as you have your Thinkst Canaries. If you're at all unsure, let me tell you this: you can always return your Thinkst Canaries with their 2-month money-back guarantee for a full refund. I have to tell you, during all the years TWiT has partnered with Thinkst Canary, almost a decade now, their refund guarantee has not ever been claimed. Never.

Leo Laporte [00:15:53]:
Not once. It's a good sign. Visit canary.tools/twit, enter the code TWIT in the How Did You Hear About Us box for 10% off the Thinkst Canary. This thing— it's a doozy. You got to get one. Canary.tools/twit. We're actually going to see them at RSA next month. I'm looking forward to seeing the Canary team, do a little interview with them.

Leo Laporte [00:16:19]:
All right, let's talk about your picture of the week, Steve.

Steve Gibson [00:16:22]:
So I gave this picture, which didn't have a caption, a caption. Uh, placing unconditional trust in technology can lead to mistakes.

Leo Laporte [00:16:34]:
All right, I'm going to scroll up now and I'm going to see it for the first time along with you.

Steve Gibson [00:16:44]:
Oh, that's good.

Leo Laporte [00:16:45]:
I never thought of that.

Steve Gibson [00:16:48]:
So, so we have a picture of a security camera which, mounted on the ceiling, was originally pivoted around, as you'd expect, to be surveying the room that it's monitoring so that it knows what's going on. Apparently, the people who populate that room decided, you know, we don't really want to have this camera looking at us all the time. And it's clear how this picture came about. Someone got up on a chair or a ladder or something and took a photo of the room from the vantage point of the camera, printed it out on an 8.5 by 11 sheet of paper, stuck the paper on the wall behind the camera, and then swung the camera around so that it's looking at the paper. And of course, as I said, placing unconditional trust in technology can lead to mistakes. So the people in the room are no longer being surveilled. They can be doing anything they want to. And meanwhile, the security people in some room with lots of monitors are looking at that going, when is this guy going to come back from the bathroom?

Steve Gibson [00:18:11]:
Why is his desk— how long is his lunch? You know, and so anyway, I got a kick out of the picture. It's just very funny. Again, uh, you know, there are all kinds of instances, right, where we adopt technology to save us some trouble of some sort, and it turns out that people who don't want to be encumbered by it come up with a simple workaround. So, you know, like sticking bubble gum on the camera lens or, or, you know, something easy— easy to mess with the technology. So, um, as I said at the top of the show, last week we noted the fact that during Sunday's Super Bowl, the company with the very expensive $70 million domain name ai.com had been DDoSed by their own advertisement during the Super Bowl. And Leo, you showed us that Cloudflare screen that indicated that all was well with the CDN delivering traffic to the backend hosting server, and all it said was that the hosting server was not responding. Um, and I, I think it was, uh, Delahanty.

Leo Laporte [00:19:31]:
It was Patrick Delahanty. Yes.

Steve Gibson [00:19:33]:
Yeah, Patrick Delahanty. You shared with us that his, I guess, modest website was being inundated with bot traffic, which was really causing him a problem. Like, you know, he was being DDoSed. The point I wanted to follow up with, and we've talked about this before, as I said, is that modern websites, both large and small, are now almost never statically generating their content. Well, actually, most of GRC's still is. I mentioned also last week— this all kind of came about because we were talking about how I, you know, hand author lightweight HTML and CSS. Um, and it's not that there isn't dynamic content at GRC. Shields Up, uh, the DNS spoofability test, and other things— you know, the CPU is involved in generating those pages.

Steve Gibson [00:20:35]:
But of course it's all in assembly language, so there's like zero overhead, I mean, associated with even GRC's dynamically generated pages. But the modern way— modern, you know, like supposed to be better— modern way to create a website is with a CMS, a content management system, where the web server doesn't actually have static pages. It runs server-side scripting of some sort: PHP, Ruby, JavaScript with Node, maybe Java, C#, you know, .NET, maybe Python. But the point is that one of those content engines is producing the HTML on the fly, which is sent to the browser, backed up by some backend database. And so queries of this database describe the content, which is then interpreted by the script and used to generate the HTML that then goes out. And so the point is that while these approaches turn web servers into very flexible application delivery platforms, that power and flexibility to dynamically deliver any page content comes at a steep price in processor load and database load. So I have no doubt that, um, you know, as we saw in that chart, Cloudflare was faithfully delivering HTTP queries to whatever backend server infrastructure ai.com had built out at that point. But whatever it was, it was unable to scale as it needed to, uh, to handle the massive demand spike which was created by a Super Bowl ad. I don't think the site went down technically.

Steve Gibson [00:22:41]:
It was just probably, you know, the.

Leo Laporte [00:22:43]:
Per-Page.

Steve Gibson [00:22:46]:
Processing cost was so high that there just wasn't enough processor available to keep up with the demand. So, you know, mostly it was just embarrassing, right? And it was certainly not an auspicious launch for a new venture. And you could argue that they probably lost some people who responded during the Super Bowl commercial. And then thought, well, I don't know what's wrong with these people, but they don't seem like they got their artificial intelligence working very well. So anyway, um, the, uh, um, the consequences of this high-cost web page delivery are actually being felt a lot. Uh, by pure coincidence, I happened to stumble on last Wednesday's Linux Mint blog, which among other things addressed problems with their forums. The, the guy wrote, we'd like to apologize to our forum users for how slow and unreliable the forums were last month. The volume of traffic we receive is extremely high, and it's mostly coming from AIs, bots, scripts, and web crawlers.

Steve Gibson [00:24:02]:
It got to the point where our server could not cope and people weren't able to use the forums. In addition to the Sucuri web application firewall, it took us a while to come up with an efficient way to filter bad traffic— meaning, you know, non-human users, I guess, is what he's calling bad. If you're getting 403 errors from the forums right now, please make sure your browser is up to date. I thought that was interesting. He said we upgraded the server to give it 10x the CPU capacity and twice the bandwidth. So I, I checked them out. Linux Mint's forums use the free phpBB, uh, and I don't know whether they've spent time, uh, speeding up their implementation. There are many tricks you can use to reduce the overhead of a PHP-based site. There's an in-memory tool called OPcache, which is able to take the burden off the backend PHP interpreter.

Steve Gibson [00:25:13]:
And also Redis is a key-value store that many forums are able to enlist. I use both, because the forums at grc.com— I'm using XenForo, and that's a PHP-based system. And I looked at their user count. They were talking about 6,000 people. Well, we have 1,000 typically roaming around at any given time, and my CPU load is nothing like that. It's down in the single-digit percentages. So, you know, there are ways, if you are focused on improving efficiency, to do so.

Steve Gibson [00:25:54]:
I don't know what is going on with, with Patrick Leo, but it was bots that he said were causing trouble for him. So the takeaway is to remember that connection bandwidth is almost certainly no longer the limiting factor that it once was, and it, it can be practically impossible to change platforms Once one's committed, that is, give some thought to the page delivery overhead and performance of a system that you're considering switching to. It's— I'm sure anybody who's ever switched to a different platform knows what an incredible pain it is. So there's huge anti-switching inertia, but if you have a system which is inherently heavy in overhead, then switching is going to be a problem. And all you could do is scale up in CPU resources, and it can be expensive if you then need load balancing in front of, of a bunch of servers. Then there's an additional burden from, from that. So anyway, it really pays to, to keep efficiency in mind, and it's not just bandwidth anymore. With all of our pages, so many of them being delivered dynamically that the overhead of the delivery system really matters.

Leo Laporte [00:27:26]:
Um, yeah, modern websites are programs, really. They're not static websites, right?

Steve Gibson [00:27:32]:
That's absolutely true. And it's good you said modern because.

Leo Laporte [00:27:35]:
I'm delivering almost— you're static, you're HTML, I'm sure, plain old HTML, right?

Steve Gibson [00:27:41]:
Well, but Shields Up is dynamic, the DNS spoofability test too. So I've got dynamic pages, but you know me, they're all written in assembly language.

Leo Laporte [00:27:50]:
Right. So my blog is also static. It has a program running in the background that generates the HTML.

Steve Gibson [00:27:57]:
And that's a very good way to do that. Yes.

Leo Laporte [00:28:00]:
So you still get the benefit. Yeah.

Steve Gibson [00:28:02]:
Yes. At GRC, I have 3 freeware pages, and you're able to ask how you'd want them sorted: by popularity, by age, and by something else I don't remember. And every night at midnight, I statically regenerate those 3 pages so that they're delivered, you know, fast, from finished HTML. So you're only going through that generation process once a day, because it's not the kind of thing that needs to be changed every second. And there's no need to generate them per person, as is often the case with a modern website. Um, okay, so, um, I want to share an editorial which appeared in the Seriously Risky Business publication, uh, which was unfortunately titled Microsoft forgoes its secure future. So I'm just going to share what they wrote, and then I'm going to share some observations afterwards.
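Steve's nightly regeneration scheme described above can be sketched like this (hypothetical data and file names; in production a scheduled task would run it at midnight): render each sort order once per day into a finished HTML file, so request-time delivery is just a static-file read.

```python
from pathlib import Path

# Hypothetical stand-in for the freeware catalog behind the pages.
FREEWARE = [
    {"name": "ToolA", "downloads": 9_000, "year": 2001},
    {"name": "ToolB", "downloads": 150_000, "year": 1999},
    {"name": "ToolC", "downloads": 42_000, "year": 2010},
]

SORT_ORDERS = {
    "by-popularity": lambda item: -item["downloads"],  # most downloaded first
    "by-age": lambda item: item["year"],               # oldest first
}

def regenerate(out_dir: Path) -> None:
    """Run once per day: write one finished HTML file per sort order,
    so the web server never renders anything at request time."""
    out_dir.mkdir(parents=True, exist_ok=True)
    for name, key in SORT_ORDERS.items():
        rows = "".join(f"<li>{it['name']}</li>" for it in sorted(FREEWARE, key=key))
        (out_dir / f"{name}.html").write_text(f"<ul>{rows}</ul>")
```

The design point is that rendering cost now scales with how often the content changes (once a day) rather than with traffic, which is the same property Leo's statically generated blog gets from its background program.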

Steve Gibson [00:29:13]:
But I, I thought what they wrote was very insightful. They said: for a brief time, Microsoft appeared to be making security a priority. As with all good things, though, it appears that period has come to an end, with personnel changes at the organization signaling a shift in priorities. We fear Microsoft's goal now is not to make secure products so much as to sell security products. Of course, it's not the first time we've touched on this, but some recent changes, as we'll see. They wrote: last week, CEO Satya Nadella announced that Microsoft's executive vice president of security, Charlie Bell, had been replaced by, um, Hayete Gallot, who was most recently president of customer experience at Google Cloud. Charlie Bell is stepping back from leading Microsoft's security organization to become an individual contributing engineer. Now that Bell is gone, it appears the guise of security first has been tossed aside, and we fear the company may slip back into being a security disaster.

Steve Gibson [00:30:33]:
Bell has a great reputation and joined Microsoft to make a positive impact on its security. Despite this, the history of his tenure at Microsoft shows that the company itself only prioritized security when it was forced to by government pressure. Bell joined Microsoft from AWS to lead a new security organization in 2021. At the time of his hiring, they wrote, we had consistently, for months on end, shown example after example of Microsoft security, as they put it, clangers. Those rolling security debacles— and of course we talked about them all here on the podcast— were a symptom of senior leadership prioritizing profit over security. You know, things like not logging unless you paid extra, that kind of thing. At the time, they wrote, we predicted that Bell would struggle to make a difference. We were right.

Steve Gibson [00:31:36]:
Not even an exceptional manager can change much if the CEO and executive team are not really interested. A 2022 profile of Bell in The Information reported that Microsoft's old guard managers pushed back on Bell's suggestions for improving their responsiveness to security vulnerabilities, believing he was setting too high a bar for stopping attacks on its products. The company continued to pay lip service to security, although it did launch a lackluster security uplift program, the Secure Future Initiative, in late 2023. Microsoft's devil-may-care approach to security, they wrote, came back to bite it after separate compromises by Chinese and then Russian state hackers were discovered. The security lapses that led to these breaches were frankly unbelievable. In April 2024, a Cyber Safety Review Board, the CSRB, report into the Chinese breach, which had compromised the email accounts of senior U.S. policymakers, found a cascade of security failures. It wasn't until this kick in the pants that Microsoft truly embraced security.

Steve Gibson [00:33:01]:
The following month, CEO Satya Nadella told staff to prioritize security above all else. And that, quote, if you're faced with a trade-off between security and another priority, your answer is clear: do security, unquote. What followed was a short halcyon period where Bell was able to kick some goals, but the Trump administration has since disbanded the CSRB and signaled that it is not interested in strong regulation. The pressure is off. Microsoft execs can grab a coffee and relax. Which brings us back to the recent change in security leadership, and in particular Nadella's messaging in his public announcement of Gallot's appointment. It sends strong warning bells that security at Microsoft is falling by the wayside. Nadella had an opportunity to highlight Gallot's work experience in security roles.

Steve Gibson [00:34:07]:
Instead, he focused on her, quote, critical roles in building two of our biggest franchises and, quote, leading our go-to-market efforts. Much of Nadella's announcement was about selling more security products. He said that the company has, quote, great momentum in security, including strong Purview adoption and continued customer growth. Purview is a product of theirs. Entirely missing was any language about the importance of actual security to the company, or a call for people to get behind the critically important security work that Gallot will lead. If it walks like a sales target and— I'm sorry, if it talks like a sales target and walks like a sales target, it ain't security. It's a recipe for security sales. Okay, so that's the end of their editorial.

Steve Gibson [00:35:04]:
I wanted to share this to highlight a lesson we've all learned throughout the past 20-plus years of our observation of real-world security deployment. The lesson I believe we've all learned is not only that security is hard, but that it's always much harder than we expect it to be. If it wasn't so difficult, we'd have much more of it than the sad little bit of security we actually have out in the world. The U.S. wouldn't have Chinese and North Koreans crawling around in our networks, nor telco executives actually saying we're not sure we can get rid of them all.

Leo Laporte [00:35:50]:
What?

Steve Gibson [00:35:51]:
My point here is that since we always need all of the security we can possibly get, any sign of Microsoft slacking off whatsoever on the security front should be taken very seriously. What's worse, a reduction in delivered security is not something that can or will be immediately apparent, right? It's only the inevitable consequences of a relaxed security posture that will wind up being felt. As for why Microsoft might make this shift, one of the problems is that since it's not possible to prove a negative, no one really receives any credit for security breaches that don't occur because they were prevented. In the case of Microsoft, the successful influence and efforts of Charlie Bell, their now-previous executive vice president of security, may easily have gone underappreciated. You know, it's, look at that, I guess security isn't as big a problem as we thought. Those other problems must have just been one-offs, right? So let's hope for the best. Um, one quickie and then we'll take another break, Leo.

Steve Gibson [00:37:13]:
I suppose I should at least mention— I know that Paul was talking about this last week— I should at least mention that this spring, because listeners have already been asking, Microsoft will be introducing what they're now terming a scoped release of Windows 11. Its scope is limited to use with the new Qualcomm Snapdragon X2 next-generation ARM system-on-chips, where Windows 11 26H1 will come pre-installed on those machines. It only runs on them, and it will not be available in any other form for general use or upgrading. The latest general Windows 11 release will remain 25H2, and this oddball 26H1's naming appears to have ruffled many feathers— there's lots of dialogue out on the net saying, what? Come on, Microsoft, give this a different name, it's really confusing. Anyway, despite its name, it is not an update for 25H2, so everybody else should just ignore it. We can't have it. We need to wait for 26H2. And Leo, they just— Microsoft just cannot stick with anything.

Steve Gibson [00:38:40]:
It's just— I mean, I guess I understand you don't know how the world is going to evolve over time. That's the nature of it. But still, you know, it's not like what they do doesn't matter. A lot of people are paying attention and trying to figure it out. So, I mean, there's, you know, a high price for them changing their mind all the time.

Leo Laporte [00:39:04]:
Sad to say. Yeah.

Steve Gibson [00:39:05]:
Okay. Break time. I'm going to rehydrate and then we're going to look at Chrome 145 and its new support for device-bound session credentials.

Leo Laporte [00:39:16]:
Well, there you go. Stick around. You don't want to miss that excitement. No, baby. Noob. But first, a word from our sponsor, DeleteMe. Have you ever wondered how much of your personal data is out there on the internet for everyone to see? It's a depressing fact. I mean, your name, your contact info, yeah, of course, but even your Social Security number, your home address, information about your family members, and, if you're at a business, information about your workers, your coworkers, your managers. Why are they there? Because they're all being compiled by data brokers whose sole and utter business is to steal— well, borrow, I don't know— take your personal information and sell it online.

Leo Laporte [00:40:10]:
I mean, in the early days, I guess it was kind of benign because they're selling it to marketers, I guess. It ain't benign anymore. Anyone on the web can buy your private details, and that can lead to some nasty side effects like doxxing, identity theft, phishing, harassment. So what are you going to do? Well, you could protect your privacy. You can use DeleteMe. Now, I am very aware— Steve and I both are very aware of this because there was a data breach some time ago, and all the data was put online in a searchable database. And so Steve and I, we did it on the show, looked up our information in this breach, and there was our Social Security number. There was all our private information.

Leo Laporte [00:41:03]:
Uh, it was from a data broker, right? This breach, they had all that stuff. That's when I found out the worst thing. It is completely legal. If you somehow manage to acquire Steve Gibson's or my Social Security number, it is completely legal for you to sell it to anybody, a foreign government, anybody. So this is why I recommend DeleteMe. In fact, we use DeleteMe at TWiT because of the really very real issue of phishing. Phishing is made much more effective when they know personal information. "Hey, this is Leo, and I'm tied up in a meeting right now, but could you send some Apple gift cards to my son? Here's his address.

Leo Laporte [00:41:52]:
I forgot his birthday." Something like that. And all of that information, if it's real, it makes it more convincing. We get phished like that all the time. So we use DeleteMe. DeleteMe is a subscription service. It removes your personal information, and it does it from hundreds of data brokers. There are more than 500 on DeleteMe's list.

Leo Laporte [00:42:16]:
You know, the state of California recently said we're going to do this, but it covers fewer than 100 brokers, and it's not till this fall. And I mean, there are all these caveats. No, no. With DeleteMe, you sign up, you give DeleteMe just what information you want deleted. That's nice because you control what's deleted and what's not deleted. Their experts take it from there. This is what they do, and they will send you regular personalized privacy reports showing what they found, where they found it, and what they removed. DeleteMe, it's not just a one-time service either. That's really important.

Leo Laporte [00:42:47]:
We just got one of those emails, the reports. It's always working for you, constantly monitoring and removing the personal information you don't want on the internet. You have to do that because these data brokers, even after the data has been removed, will start repopulating it, and they'll change their names to avoid removal. They change, and new ones start up every day, dozens of them, because it's very profitable. To put it simply, DeleteMe does all the hard work of wiping your, your family's, and your company's personal information from data broker websites. Take control of your data. Keep your private life private.

Leo Laporte [00:43:21]:
Sign up for DeleteMe. We've got a special discount for our listeners. You'll get 20% off your individual DeleteMe plan, 20% off when you go to joindeleteme.com/twit and use the promo code TWIT at checkout. Now, this is the only way to get 20% off. And you have to use this address because you can't just Google it; there are other DeleteMes in the world and they don't do the same thing. You want to go to joindeleteme.com/twit and make sure you use that offer code TWIT at checkout. That's joindeleteme.com/twit, offer code TWIT. I'm, I am so glad DeleteMe exists and I'm so glad we found it.

Leo Laporte [00:44:02]:
joindeleteme.com.

Steve Gibson [00:44:06]:
/Twit.

Leo Laporte [00:44:06]:
Now back.

Steve Gibson [00:44:07]:
Okay, so last Tuesday, Google updated the world to Chrome 145. This update repaired, you know, your typical assortment of a few high, mostly medium, and some low severity security issues and continued to move Chrome's support for the latest HTML, CSS, and JavaScript standards forward. Um, perusing those, I'm just astonished by the complexity of today's modern web content interpreters. These browsers are so complicated. It just gets more insane every day. One of the new features that stood out is Chrome 145's support for something known as device-bound session credentials. Think about that phrase for a moment. Device-bound, as in binding to a device, session credential.

Steve Gibson [00:45:13]:
A session credential is just a fancy name for a cookie, and device binding means binding a session credential cookie to the device whose web browser first receives that cookie from a remote website. So this innovation arranges, for the first time ever, to prevent anyone who might somehow obtain a session cookie from being able to use it themselves anywhere else. That's huge, and Chrome 145 now supports it. Many years ago, before servers were fast enough to glibly encrypt all connections all the time, a user's session cookies would be sent in the clear after they had first successfully logged on. This allowed anyone who was able to eavesdrop on internet traffic anywhere to capture those logged-on session cookies and impersonate their rightful owner. Looking back on that, it's just hard to imagine we survived that state of affairs. But of course, that was yesterday's internet too, less mission-critical than it is today. So although things are much better today with all of our connections encrypted all the time, there are still various interception attacks and mechanisms that create vulnerabilities and weaknesses.

Steve Gibson [00:46:43]:
For session credentials, that is. For example, though it's being done only for the best and most justifiable reasons, many enterprises maintain TLS-decrypting middleboxes that decrypt everyone's TLS connections as they cross the enterprise's network edge in order to scan them for malware and other shenanigans and protect the internal network. Everyone's cookies are thus exposed at that point, and if it were possible to briefly impersonate or compromise either end of a connection to observe any browser's reply, the session's logon credential cookies would be exposed. So, you know, it's all we've been able to do so far, but there are problems, and this resolves them. Until now, the browser cookie has been— I guess I would say it's an overworked authentication mechanism. It really was just meant, in its original creation, to allow a web server to create this notion of being logged on, to identify you when you made successive moves around pages on a website. It would be like, oh, there's that guy again, okay, fine.

Steve Gibson [00:48:15]:
And, you know, so you could maintain state. Well, now we use this for banking and international commerce and, you know, super private connections to investment portfolios. Everything is being done with this poor overloaded cookie. It's all we've had. With this innovation of device-bound session credentials, that finally changes. Now, you do need some form of secure hardware, such as a TPM, a Trusted Platform Module, or the Secure Enclave that Apple has on its iOS devices, in order to store the secret part of this credential. But as we know, all modern OS platforms require this already for themselves, so that's really not a problem anymore. And in fact, it may have been what's delayed the arrival of this.
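To make the idea concrete, here's a toy sketch of the proof-of-possession mechanism behind device-bound session credentials. This is my own illustration, not Chrome's or the actual DBSC protocol: a real implementation uses an asymmetric key pair generated inside the TPM, with only the public key registered at login; here a shared HMAC key stands in for that key pair, and the `Device` and `Server` classes are invented for the example.

```python
import hashlib
import hmac
import os
import secrets

class Device:
    """Stand-in for the browser plus its TPM. In real DBSC the private key
    never leaves the secure enclave; only a public key goes to the server.
    A shared HMAC key is a deliberate simplification here."""
    def __init__(self):
        self.key = os.urandom(32)  # would live inside the TPM

    def sign(self, challenge):
        # Prove possession of the device-bound key.
        return hmac.new(self.key, challenge, hashlib.sha256).digest()

class Server:
    def __init__(self):
        self.sessions = {}  # cookie -> verification key registered at login

    def login(self, verification_key):
        cookie = secrets.token_hex(16)  # the short-lived session cookie
        self.sessions[cookie] = verification_key
        return cookie

    def challenge(self, cookie):
        # Holding the cookie is no longer enough: refreshing the session
        # requires signing a fresh random challenge.
        return os.urandom(16) if cookie in self.sessions else None

    def verify(self, cookie, challenge, signature):
        key = self.sessions.get(cookie)
        if key is None:
            return False
        expected = hmac.new(key, challenge, hashlib.sha256).digest()
        return hmac.compare_digest(expected, signature)

device, server = Device(), Server()
cookie = server.login(device.key)
ch = server.challenge(cookie)
assert server.verify(cookie, ch, device.sign(ch))      # the real device passes
assert not server.verify(cookie, ch, os.urandom(32))   # a stolen cookie alone fails
```

The point the sketch makes is exactly the one above: an attacker who exfiltrates the cookie but not the TPM-held key cannot answer the server's challenge, so the credential is useless off the device.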

Steve Gibson [00:49:20]:
That's all I'm going to say for now. If anyone's curious, we described the operation of this in full detail during episode 1021 last April 18th on the podcast. So it did take longer than expected to arrive, but we have it now with Chrome 145. And I said on the podcast then that Mozilla had implemented it in Firefox and that it was also in Safari. I did not follow up to see what the status is today. But it just got released in Chrome 145. So even if it takes a while to actually filter out into the world, it's clearly going to happen. It does require, as I explained last April, significant support effort from the web server.

Steve Gibson [00:50:13]:
It's not your grandparents' cookies; it's a whole different technology to pull this off. But it's eventually going to become a widespread standard because it significantly increases the integrity of cookie authentication.

Leo Laporte [00:50:35]:
It can't be used to fingerprint you though, right? Because it is unique to you. Uh, they can't request that ID. They probably— it's like a public key thing.

Steve Gibson [00:50:48]:
Oh yeah, yeah, they would just match.

Leo Laporte [00:50:50]:
It up so they can't— yeah, correct. Yeah. Okay, well, that would— that's good.

Steve Gibson [00:50:56]:
That's, that's interesting. Well, if this comes back up as a topic, it'd be nice to make sure that there isn't some way. But I can't imagine in this day and age that we would have implemented it that way. I mean, it's industry-wide. It's an industry standard.

Leo Laporte [00:51:12]:
Oh, okay.

Steve Gibson [00:51:13]:
It's not just— yes, it's not just Google. And yeah, they couldn't have gotten away with it. But I agree with you, Google would love to have a mechanism like that to track us around. Okay, so in catching up on the last week, I noted that the governments of Kazakhstan, Moldova, and Romania are considering adding their names to the growing list of countries that are enacting age restrictions barring the creation of new social media accounts by children. I also saw some commentary somewhere that I appreciated. It noted that the newer legislation was deliberately eliminating— and I really thought this was good, and you may not agree, Leo, but I can understand it from a practicality standpoint— any opportunity for parental override or exception, where, for example, a child who was at least 13 but not yet 16 could appeal to their parents to allow them to create an account. And the person commenting clearly understood that parents would be hard-pressed not to succumb to the argument, "But Mom, Susie's parents let her use Instagram and she's younger than me."

Leo Laporte [00:52:43]:
So you shouldn't be a parent.

Steve Gibson [00:52:46]:
I mean, well, that's another topic entirely.

Leo Laporte [00:52:49]:
We could do— we really need the government to enforce this.

Steve Gibson [00:52:52]:
Do a whole podcast titled You Shouldn't Be a Parent. I know, that would be a good show. So anyway, the world does seem to be moving in that direction. Um, and I've got another point about that, although for some reason I've got a little blurb here first. It's quick: the return of Roskomnadzor. And you know, what would a Security Now podcast be without an update on the most recent machinations of Russia's Roskomnadzor internet watchdog? It turns out that part of the infrastructure that supports Russia's sovereign RUnet is its own domain name system, right? They've got their own DNS called NSDI, and Roskomnadzor controls what's listed and what's not. Though access to YouTube and WhatsApp has been throttled since last July. Remember we talked about that? Remember, it's like they only allowed a tiny bit of YouTube data, and it was like, what can you do with that much data? It's like you can't even get a video off the ground.

Leo Laporte [00:54:09]:
Anyway, you could see all the things you're not able to see, that's what you could do, right? You can't watch that, or that, right?

Steve Gibson [00:54:16]:
Or that, right? Anyway, so they've gone beyond throttling now. Now those two domains, YouTube and WhatsApp, along with Facebook and Instagram, have been entirely removed from Russia's DNS, following the Russian government designating Meta as an extremist organization after it refused to censor content relating to Russia's war with Ukraine. In addition to YouTube and these three Meta properties, you know, Facebook, Instagram, and WhatsApp, Roskomnadzor also blocked access to the Tor Project, Windscribe's VPN, APKMirror, and the BBC, as well as several other news sites. So, you know, they're tightening their grip on the internet, and Russian citizens are having to come up with workarounds or just go with what the Russian state tells them is happening in the world.

Leo Laporte [00:55:15]:
So what you're saying is Roskomnadzor is treating every Russian as under 13.

Steve Gibson [00:55:22]:
Yes, yes, you're not, you're not mature enough to understand.

Leo Laporte [00:55:24]:
You are not ready for these things.

Steve Gibson [00:55:27]:
Yes, and I would argue that Russia is probably not the best parent. So back to the why you shouldn't be a parent podcast. Okay, so Discord. Some of our listeners wrote to ask whether I'd seen that Discord, perhaps as part of reprofiling itself in advance of a $15 billion IPO, would be switching all accounts to underage by default unless shown evidence to the contrary. Now, since that's partially true, I wanted to share the full story. In Discord's own clarification, because of course, you know, this created quite an upheaval, this is what they wrote. They said, we've seen some questions about our age assurance update and we want to share more clarity.

Steve Gibson [00:56:24]:
We know how important these changes are to our community. Here's what we want you to know: Discord is not requiring everyone to complete a face scan or upload an ID to use Discord. The vast majority of people can continue using Discord exactly as they do today without ever being asked to confirm their age. You need to be an adult to access age-restricted experiences such as age-restricted servers and channels or to modify certain safety settings. For the majority of adult users, we will be able to confirm your age group using information we already have. We use age prediction to determine with high confidence when a user is an adult. This allows many adults to access age-appropriate features without completing an explicit age check. When additional confirmation is required, we offer multiple privacy-forward options through trusted partners.

Steve Gibson [00:57:33]:
Notice that, you know, they trust them. Whether we trust them is another thing. And they enumerated that facial scans never leave your device; Discord and our vendor partners never receive them. IDs are used to get your age only and then are deleted, and Discord only receives your age. That's it. Your identity is never associated with your account.

Steve Gibson [00:58:00]:
Okay, so for the time being, this is probably the best we can hope for. You know, it will eventually be nice to have our devices able to assert an age range on our behalf, but we don't appear to be even close to having any universal solution or even a standard for that yet. You know, the most recent meeting of the World Wide Web Consortium was, well, what do we want to achieve with this? It was like, oh God, okay, well, we're not there yet, obviously. I'm sure we will be in time, since it's very clear. I mean, there could not be more pressure on getting this to happen. I know that Steven Ahrensward is hard at work on this. Her whole focus is addressing this issue, and she tends to get results, and is very much an activist and active in all these sorts of things. For me, it's just no, I can't do committees.

Steve Gibson [00:59:02]:
Um, so are privacy purists losing some of their precious, if entirely illusory and fictitious, privacy? Yeah, you know, that's going to happen. But even that will be better in the future once stronger privacy-protecting standards are in place. And Leo, I know you were talking about Discord and this recently.

Leo Laporte [00:59:28]:
Yeah, because we use Discord.

Steve Gibson [00:59:30]:
Right. But are you flagged as an adult content server? No.

Leo Laporte [00:59:34]:
No. So we probably wouldn't have to worry.

Steve Gibson [00:59:37]:
Right. And that's my point: it's probably only, excuse me, servers that are offering explicit content, and cases where Discord has not been able to obtain high confidence that a user is already an adult. Then they would say, okay, you know, sorry, the people you're talking to haven't convinced us, nor has your language or your grammar or whatever they're using as signals, so you need to prove to us that you're an adult. And I should mention that we'd lose.

Leo Laporte [01:00:14]:
Members if that started happening. I mean, it's a real problem.

Steve Gibson [01:00:17]:
Yeah, and it wouldn't happen because you're not flagging your server as adult. So I think, you know, there isn't a problem in the best case. Now, it is, however, the case that we have seen evidence of IDs not being deleted when they should have been. And so, you know, that's a concern, right? This K-ID is a third-party service that some providers are using in order to obtain this age verification, right? And I don't want to accuse them wrongly, but somebody had a breach, and we talked about it on the podcast about six months back, where a ton of identification, personally identifiable information, like the works, was obtained in a breach of one of these third-party age verification services that, for no reason anyone had or could explain, had not deleted this information from their servers. It's like, guys, you know, we're giving this to you on the condition that you delete it. You have to. On the other hand, we all just heard about what happened with the Ring doorbell, you know, the magically deleted video then somehow coming back. So yeah, you know, what does that mean?

Leo Laporte [01:01:45]:
The Google Nest doorbell.

Steve Gibson [01:01:46]:
Yeah, yeah, yeah.

Leo Laporte [01:01:48]:
And this, in this Nancy Guthrie case.

Steve Gibson [01:01:50]:
Yeah. Okay, so when I saw that GTIG, Google's Threat Intelligence Group, had identified widespread active exploitation of the critical vulnerability in WinRAR, which we talked about last summer, although I was certain I had updated my copy then, I double-checked, and yikes, I was still using 7.12, which contained the vulnerability. It was the last version that did; 7.13 fixed it. I'm now using 7.20, but I decided that given that the threat has moved from theoretical to now real and live, I ought to remind all WinRAR users to be certain they've updated. Here's what Google's Threat Intelligence Group just posted.

Steve Gibson [01:02:55]:
They said the Google Threat Intelligence Group, GTIG, has identified widespread active exploitation of the critical vulnerability CVE-2025-8088 in WinRAR, a popular file archiver tool for Windows, which is being abused to establish initial access and deliver diverse payloads. Discovered and patched in July last summer, 2025, it continues to be exploited as an N-day by government-backed threat actors linked to Russia and China, as well as financially motivated threat actors, across disparate operations, meaning lots of people are in on the act. The consistent exploitation method, a path traversal flaw allowing files to be dropped into the Windows Startup folder for persistence, underscores a defensive gap in fundamental application security and user awareness. In this blog post, we provide details on CVE-2025-8088 and the typical exploit chain, highlight exploitation by financially motivated and state-sponsored espionage actors, and provide indicators of compromise to help defenders detect and hunt for the activity described in this post. To protect against this threat, we urge organizations and users to keep software fully up to date, blah blah blah. Okay, so anyway, 8088 is a high-severity path traversal vulnerability in WinRAR that attackers exploit by leveraging alternate data streams. They're able to craft malicious RAR archives which, when opened by a vulnerable version of WinRAR, can write files to arbitrary locations on the system. Exploitation of this vulnerability in the wild began as early as July 18th, 2025, and the vulnerability was addressed by RAR Labs with the release of WinRAR version 7.13 shortly after, on July 30th.
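To illustrate the general class of bug, not WinRAR's actual code, here's a toy sketch of the kind of entry-name check an archive extractor needs: rejecting names that are absolute, that climb out of the extraction directory, or that use the NTFS alternate data stream colon syntax the exploit leverages. The function name is invented for the example.

```python
import ntpath
import posixpath

def is_safe_entry(name):
    """Reject archive entry names that could escape the extraction
    directory: absolute paths, drive letters, upward traversal, or the
    colon syntax used by NTFS alternate data streams (file.txt:evil.exe)."""
    name = name.replace("\\", "/")          # normalize Windows separators
    if ":" in name:                          # drive letter or alternate data stream
        return False
    if posixpath.isabs(name) or ntpath.isabs(name):
        return False                         # absolute path
    normalized = posixpath.normpath(name)    # collapse "a/../../b" style tricks
    return normalized != ".." and not normalized.startswith("../")

assert is_safe_entry("docs/readme.txt")
assert not is_safe_entry("../../Start Menu/Programs/Startup/evil.exe")
assert not is_safe_entry("C:\\Users\\victim\\evil.exe")
assert not is_safe_entry("report.pdf:payload.exe")    # alternate data stream
```

The Startup-folder persistence Google describes works precisely because a check like this was bypassed: an entry name that traverses upward lands the dropped file somewhere Windows will execute at the next logon.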

Steve Gibson [01:05:20]:
So that's enough said. And actually, now that I'm reading this, I wanted to check the version I have on this machine, because when I put this together on the weekend, I checked my other machine, a Win 10 machine, and that's where I found 7.12. This is probably the one that I updated immediately, and I forgot to do it on the other one. So then I thought back, okay, have I downloaded any RAR archives in the last six months, like, from anywhere? I don't think so. That's not my normal mode of getting things. But again, anyone wanting more information and details, I've included the link to Google's coverage in full in the show notes. Um, and you can just go to win-rar.com/download.html. That will get you the current version for whatever platform you're using.

Steve Gibson [01:06:23]:
Make sure you're using that. If you do discover a version of WinRAR before 7.13, as I did, you can know, at least for what it's worth (it's not much comfort), that we're in good company. Stairwell Security just wrote this: Stairwell recently identified a significant and concerning trend across our customer base. Get this: over 80% of monitored environments contain vulnerable versions of WinRAR affected by CVE-2025-8088. This finding underscores a persistent challenge in enterprise security when widely deployed trusted software quietly falls out of date and becomes a high-value target for attackers. And then they talk about how Google identified the exploitation, blah blah blah. In 80% of the environments that they monitor, they discovered versions of WinRAR earlier than 7.13. Everything previous to that has the vulnerability.

Steve Gibson [01:07:45]:
So yikes. Uh, and Leo, speaking of yikes, we're at an hour in. Yeah, I think it's time for me to have.

Leo Laporte [01:07:52]:
A little caffeination. Well, have at it, Mr. Steve, while I tell you about Meter, the company building better networks, our sponsor for this segment of Security Now. Meter was founded by two network engineers who feel your pain. If you're a network engineer, well, they know the headaches: legacy providers with inflexible pricing, IT resource constraints stretching you thin. Everybody's got that, right? Complex deployments across fragmented tools. Look, as a network engineer, you're mission critical to the business, but you're working with infrastructure that wasn't built for today's demands.

Leo Laporte [01:08:33]:
No, no one knows that better than you. That's why businesses are switching to Meter. M-E-T-E-R. Meter delivers full-stack networking infrastructure, wired, wireless, even cellular, that's built for performance and scalability, because they build it themselves. Meter realized if we're going to be effective, we've got to do the whole stack. Meter designs the hardware, writes the firmware, builds the software, manages deployments. They of course provide support afterwards too. Meter offers everything, even ISP procurement. They'll help you find the best ISP for your needs.

Leo Laporte [01:09:15]:
They'll help you with security. They can do routing, switching. They do wireless, they do firewalls, cellular, power, DNS security, VPN, SD-WAN, and multi-site workflows, all in a single solution. Meter's single integrated networking stack scales beautifully, from major hospitals on down. And you know, if you've ever been in a hospital, you know how bad the internet is. Well, it makes sense. They've got MRI machines. They've got all kinds of equipment.

Leo Laporte [01:09:46]:
It blocks signals. You can't have signals in certain areas, but Meter can help. Then branch offices: you know, you've got a great setup at the headquarters, maybe, but does that branch office have a great setup? And does it integrate with yours so that they can work as if they're in the main office? And you know what happens: a lot of companies acquire branch offices, they acquire warehouses, or they build warehouses. Suddenly the problem is exponential. Large campuses, they even do data centers. In fact, they did Reddit's data center. The assistant director of technology for the Webb School of Knoxville, this is what he said.

Leo Laporte [01:10:29]:
We had more than 20 athletic games on campus between our two facilities at the same time. Each game was streamed via wired and wireless connections, and the event went off without a hitch. Can you imagine that? He said, quote, we could never have done this before Meter redesigned our network. Let them redesign your network. With Meter, you get one partner for all your connectivity needs, from the first site survey to ongoing support, without the complexity of managing multiple providers and multiple tools. And you know how it is: if you've got multiple providers, they're going to blame each other. It's not our fault.

Leo Laporte [01:11:06]:
It must be the router. Not the router's fault. Must be the ISP. Not the ISP's fault. Must be that security appliance. And then nobody gets it fixed. Well, Meter doesn't work that way, because they're the whole stack. Meter's integrated networking stack is designed to take the burden off your IT team and give you deep control and visibility, reimagining what it means for businesses to get and stay online. And after all, that's what's changed in the last 20 years. If you're not online, if you don't have effective internet access, you've got problems.

Leo Laporte [01:11:40]:
Meter's built for the bandwidth demands of today and tomorrow. We thank Meter so much for sponsoring Security Now. Go to meter.com/securitynow to book a demo. That's meter.com/securitynow to book that demo. Let them show you what they can do for you. You will be blown away. meter.com/securitynow. Now back.

Steve Gibson [01:12:08]:
To Steve-arino. Okay, we've talked about the Graphite spyware before. Uh, you know, it's one of the Israeli companies, in this case Paragon Solutions. It's one of the more capable systems, but it's one thing to hear about it and another thing to see it. They made the mistake of exposing details of their Graphite spyware control panel. The panel was exposed in photos from a demo day recently in the Czech Republic. The photos, which were immediately taken down, as in whoopsie, didn't mean to show those, revealed Graphite's ability to extract messages from instant messaging clients, including WhatsApp, Signal, Telegram, Line, Snapchat, TikTok, and more. We already know that WhatsApp and Signal are truly secure, and that Telegram, well, it probably is, mostly because its encryption is so random and scrambled that no one has yet, as far as we know, been able to make heads or tails of it, even though, when we talked about this about a year ago, some researchers really tried.

Steve Gibson [01:13:27]:
They're like, what? We're not sure what this is doing. Anyway, the point is, as we've always observed, there is no threat from anyone monitoring these users' communications from the outside. The threat is that once spyware arranges to gain a foothold inside a smartphone, it doesn't need to untangle Telegram's mess of crypto or fight with Moxie's double ratchet in Signal. All it needs to do is pretend to be the device's user, examine the decrypted state, you know, the decrypted data that's presented on the device's screen, and send that back to its central headquarters. So those leaked photos conclusively demonstrated that once a smartphone has been, you know, lubed up with Paragon's Graphite, none of its secrets will be safe from spying eyes. And as we know, this is the battle that Apple is in. I mean, and they really take this seriously. They've gone to, you know, every extreme imaginable just to keep this cat-and-mouse battle going, trying to harden and then re-harden and over-harden and super-harden their hardware platforms to keep the bad guys from getting into their devices.

Steve Gibson [01:14:57]:
It's just amazing how this battle has continued. If it weren't so difficult to apply, a useful security caution might be: beware anything that's too popular. We often see that bad guys are very quick and unfortunately clever about jumping onto anything for which there's a large demand. For example, fake charitable contribution sites invariably pop up following any natural disaster in the hope of cashing in on people's compassion, you know, for the plights of others. So I suppose we shouldn't be surprised to learn that some cretin has created a family of 30 malicious AI assistant browser extensions for Chrome. Of course, why wouldn't someone do that? AI is all the rage at the moment, and people are going to be looking around for AI this or that. So last Thursday, LayerX reported on their discovery, which they've named AI Frame, with the headline, fake AI assistant extensions targeting 260,000 Chrome users via injected iframes.

Steve Gibson [01:16:27]:
They wrote: As generative AI tools like ChatGPT, Claude, Gemini, and Grok become part of everyday workflows, attackers are increasingly exploiting their popularity to distribute malicious browser extensions. In this research, we uncovered a coordinated campaign of Chrome extensions posing as AI assistants for summarization, chat, writing, and Gmail assistance. While these tools appear legitimate on the surface, they hide a dangerous architecture. Instead of implementing core functionality locally, they embed remote server-controlled interfaces inside extension-controlled surfaces and act as privileged proxies, granting remote infrastructure access to sensitive browser capabilities. So basically, you install this and then you've created a tunnel from the bad guys' backend server infrastructure into your browser. Not what anybody wants. They said: across 30 different Chrome extensions published under different names and extension IDs and affecting over 260,000 users, we observed the same underlying codebase, permissions, and backend infrastructure, meaning they're all from the same guy, group, whatever.

Steve Gibson [01:17:59]:
Critically, because a significant portion of each extension's functionality is delivered through remotely hosted components, their runtime behavior is determined by external server-side changes rather than by code reviewed at install time in the Chrome Web Store. And we should just pause to say there is something so wrong with the fact that this is even possible. The fact that, that the.

Leo Laporte [01:18:30]:
Chrome Web.

Steve Gibson [01:18:31]:
Store could be allowing extensions to then later change their own behavior by changing what's happening on the server side. So this entire— the security of this whole aspect of the ecosystem is badly broken. They said the campaign consists of multiple Chrome extensions that appear independent, each with different names, branding, and extension IDs. In reality, all identified extensions share the same internal structure: the same JavaScript logic, the same permissions, and the same backend infrastructure. Across 30 extensions impacting more than 260,000 users, the activity represents a single coordinated operation rather than separate tools. Notably, several of the extensions in this campaign were featured by the, by the Chrome Web Store. It's a featured extension.

Steve Gibson [01:19:30]:
By the Chrome Web Store, increasing their perceived legitimacy and exposure. The technique, commonly known as extension spraying, is used to evade takedowns and reputation-based defenses. When one extension is removed, others remain available or are quickly republished under new identities. Although the extensions impersonate different AI assistants, Claude, ChatGPT, Gemini, Grok, and generic AI Gmail tools, they all serve as entry points into the same backend-controlled system. By leveraging the trust users place in well-known AI names, you know, brand names such as Claude, ChatGPT, Gemini, and Grok, attackers are able to distribute extensions that fundamentally break the browser security model. The use of full-screen remote iframes combined with privileged API bridges transforms these extensions into general-purpose access brokers capable of harvesting data, monitoring user behavior, and evolving silently over time. While framed as productivity tools, their architecture is incompatible with reasonable expectations of privacy and transparency, which I would say is putting it mildly.

Steve Gibson [01:20:59]:
As generative AI continues to gain popularity, defenders should expect similar campaigns to proliferate. Extensions that delegate core functionality to remote mutable infrastructure should be treated not as convenience tools, but as potential surveillance platforms. Amen. So yeah, more than a quarter million instances of browser extension downloads and installations which front for this single malicious campaign. We know that web browser extensions are super popular and arguably necessary. After all, we couldn't be using the password manager of our choice today without them. But their diversity and popularity have overwhelmed Google's ability to examine and manage them, such that today's web browser ecosystem creates serious vulnerabilities. And there's really no solution today except to just say, be prudent.

Steve Gibson [01:22:02]:
Only install from, like, really well-known brands, you know, that have been around a long time. And next, that's not even the worst. Would you believe— that was 30 extensions. Now we have 287 Chrome extensions found to be spying on 37.4 million users. Chrome browser extensions. The researcher in this case, uh, posted on Substack. Great research, despite the fact that they chose as their handle the Q Continuum. They wrote, we built an automated scanning pipeline that runs Chrome inside a Docker container. This is great research.

Steve Gibson [01:23:03]:
Routes all traffic through a man-in-the-middle proxy and watches for outbound requests that correlate with the length of the URLs we feed it. That's very clever. So they feed the browser URLs of different lengths. And then, although they're unable to see the detail, they look at the length of the traffic which is passing to a remote server and check whether it correlates with the length of the URL; if it does, then it is almost certainly that URL, encrypted. So they say, using a leakage metric, we flagged 287 Chrome extensions that exfiltrate browsing history. Meaning, you install one of these extensions and, just because it's sitting there in your pile of extensions, every single URL you visit in Chrome is sent back to the extension's publisher. A complete breach of your privacy. They said those extensions collectively have 37.4 million installations, roughly 1% of the global Chrome user base. Just this group, 1%. The actors behind the leaks span the spectrum: SimilarWeb, Curly Doggo, Off the Docks, Chinese actors, many smaller obscure data brokers, and a mysterious Big Star Labs that appears to be an extended arm of SimilarWeb.
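The leakage metric the researchers describe can be sketched as a simple correlation test. This is a toy illustration of the idea, not their actual pipeline; the function names and the 0.9 threshold are my own assumptions.

```python
# Toy illustration of the length-correlation test: if the size of an
# extension's outbound requests tracks the length of the URLs the browser
# visits, the URL itself is almost certainly being sent. Encryption hides
# the bytes, but not (approximately) their length.

def pearson(xs, ys):
    """Pearson correlation coefficient; returns 0.0 if either series is constant."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs) ** 0.5
    vy = sum((y - my) ** 2 for y in ys) ** 0.5
    if vx == 0 or vy == 0:
        return 0.0
    return cov / (vx * vy)

def flag_exfiltration(url_lengths, payload_sizes, threshold=0.9):
    """Flag traffic whose outbound payload sizes correlate strongly
    with the lengths of the URLs the browser was fed."""
    return pearson(url_lengths, payload_sizes) >= threshold

# Feed the browser URLs of deliberately varied lengths...
url_lengths = [20, 55, 90, 140, 210]
# ...then observe each extension's outbound request sizes.
leaky = [l + 312 for l in url_lengths]  # fixed overhead + URL: leaking
benign = [512] * 5                      # constant-size telemetry ping
print(flag_exfiltration(url_lengths, leaky))   # True
print(flag_exfiltration(url_lengths, benign))  # False
```

The clever part is that this works entirely from outside the TLS tunnel: the observer never needs to decrypt anything, only to vary the input lengths and watch the output lengths.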

Steve Gibson [01:24:46]:
They said the problem isn't new. In 2014, research by Weisbacher et al. on malicious browser extensions demonstrated this. In 2018, Heaton showed that the popular Stylish theme manager was silently sending browser URLs to a remote server. These past reports caught our eye and motivated us to dig into this issue today. So fast forward to 2025: the Chrome Web Store now hosts roughly 240,000 extensions, right? So just shy of a quarter million browser extensions, many of them, they wrote, with hundreds of thousands of users. How can Google possibly know what they're all doing? They wrote, we knew that we needed a scalable, repeatable method to measure whether an extension was actually leaking data in the wild.

Steve Gibson [01:25:45]:
It was shown in the past that Chrome extensions are used to exfiltrate user browser history that is then collected by data brokers such as SimilarWeb and Alexa. We try to prove in this report that SimilarWeb is very much still active and collecting data. Why does it matter? They write, there's a moral aspect to the whole issue. Imagine that you build your business model on data exfiltration via innocent-looking extensions and then sell that data to big corporates. Well, that's how SimilarWeb is getting part of its data. That should remind us that whatever software you're using for free, if it's not open source, you should assume you are the product. The second aspect is that it puts users in danger, and potentially this could be used for corporate exfiltration.

Steve Gibson [01:26:54]:
Even if only browsed URLs are exfiltrated, they typically contain personal identifiers. That way, bad actors who would pay for the raw traffic collected can try to target individuals. So anyway, they go on, uh, at length. I just wanted to put this again on everyone's map. Again, I don't know how to solve the problem. We want extensions that are powerful. Our extensions need to be powerful to be, for example, a password manager, you know. I fill out a form and, uh, Bitwarden sees the contents that I put in the form and says, oh, I checked your domain, I don't have this in my library for you.

Steve Gibson [01:27:49]:
Would you like me to add this to your, you know, password manager collection? And you just say, yeah, I do. I want that. And that's done. So super convenient, but consider what that means this extension can do: it sees you entering the plain text password and your username, and it knows where you are, the whole URL. That's what these extensions have access to. And now we have an ecosystem in the Chrome Web Store of 240,000 of these extensions. Obviously, many of them are spying on their users.

Steve Gibson [01:28:30]:
In this case, these guys found 287 that have been downloaded by 37.4 million users, representing around 1% of the Chrome user base, sending everywhere they go home. Yikes. The folks at Koi Security titled their write-up of a new attack "Agree to Steal: The First Malicious Outlook Add-in Leads to 4,000 Stolen Credentials." And here's another fundamental problem that we have in the industry. I had this on my radar for a while, and then another instance of this came up. Generically, these are known as domain recovery attacks. They can be quite serious, and they reveal an aspect of internet security that is important and has largely been overlooked. So I'll first share the beginning of what Koi wrote last Wednesday.

Steve Gibson [01:29:35]:
They posted, this is the first known malicious Microsoft Outlook add-in detected in the wild. But the developer who built the add-in is not the attacker. In 2022, a developer built a meeting scheduling tool called AgreeTo and published it to the Microsoft Office Add-in Store. It worked, people liked it, then the developer moved on and the project died. However, the add-in stayed listed in Microsoft's store. The URL it pointed to, hosted on the vercel.app domain, became claimable, and an attacker claimed it. After making it theirs, they deployed a phishing kit, and Microsoft's own infrastructure started serving it inside Outlook's sidebar. By gaining access to the attacker's exfiltration channel, we, Koi Security, were able to recover the full scope of the operation: over 4,000 stolen Microsoft account credentials, credit card numbers, banking security answers.

Steve Gibson [01:31:01]:
The attacker was actively testing stolen credentials yesterday. Yeah, they posted this— what is it, on Thursday? No, they posted it last Wednesday. So last Tuesday they saw this happening. They said the infrastructure is live as you read this. This is the story of how a dead side project became a phishing weapon. They said, first off, Office add-ins are not installable code; they're URLs. A developer submits a manifest to Microsoft, an XML file that says load this URL into an iframe inside Outlook, whereupon of course we say, what could possibly go wrong? They said Microsoft reviews the manifest, signs it, and lists the add-in in their store. But the actual content, the, you know, the UI, the logic, everything the user interacts with, is fetched live from the developer's server every time the add-in opens. Okay, so just to pause here, that really sounds like an architecture that is asking for trouble. What could Microsoft possibly have been thinking to implement Office add-ins like this? And it appears the trouble is what they got.
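The safeguard this design lacks is easy to sketch. In the spirit of subresource integrity, pinning a hash of the reviewed content at approval time would cause any later server-side swap to fail verification. This is a hypothetical sketch in Python; nothing like it exists in the Office add-in pipeline today, and the content strings are illustrative.

```python
# Sketch of the missing safeguard: if the store pinned a SHA-256 of the
# add-in content at review time, live-served content could be checked
# against what was actually reviewed before being rendered.
import hashlib

def pin(content: bytes) -> str:
    """Record the SHA-256 of the content at review time."""
    return hashlib.sha256(content).hexdigest()

def verify_bundle(content: bytes, pinned_sha256: str) -> bool:
    """True only if the live-served content matches the reviewed content."""
    return hashlib.sha256(content).hexdigest() == pinned_sha256

reviewed = b"<html>the scheduling UI that was actually reviewed</html>"
pinned = pin(reviewed)

print(verify_bundle(reviewed, pinned))                      # True
print(verify_bundle(b"<html>phishing kit</html>", pinned))  # False
```

The trade-off, of course, is that a pinned hash also blocks legitimate updates, which is exactly why stores that allow live-served content need some other mechanism, such as re-review on change.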

Steve Gibson [01:32:38]:
Koi continues, saying, note the read-write item permission in the manifest that grants the add-in the ability to read and modify the user's emails. It was appropriate for a meeting scheduler. It's less appropriate for whoever controls that URL today. There's no static bundle to audit, no hash to verify. Whatever the domain outlook-one, O-N-E, dot vercel, V-E-R-C-E-L, dot app serves right now is what runs inside Outlook. If the developer pushes a bad update, it's immediately live. If someone else takes control of that URL, they control what every user of that add-in sees inside Outlook's trusted sidebar, with full read and write access to their email. Microsoft blessed this manifest once in December of 2022.

Steve Gibson [01:33:45]:
They never check what the URL serves again. AgreeTo was a real product: an open-source meeting scheduling tool with a Chrome extension, 1,000 users, a 4.71-star rating, 21 positive reviews, and an Outlook add-in published to Microsoft's store in December of 2022. The developer maintained an active GitHub repo, a full TypeScript monorepo with Microsoft Graph API integration, Google Calendar support, and Stripe billing. This was somebody building a business. Then it stopped. Development stopped. The last Chrome extension update shipped in May of 2023. The developer's domain, agreeto.app, expired.

Steve Gibson [01:34:40]:
Google eventually removed the dead Chrome extension in February 2025, but the Outlook add-in stayed listed in Microsoft's Office Store, still pointing to a Vercel URL that no longer belonged to anyone. At some point after the developer abandoned the project, their Vercel deployment was deleted. The subdomain outlook-one.vercel.app became claimable, and the attacker grabbed it. They deployed a 4-page phishing kit: a fake Microsoft sign-in page, a password collection page, an exfiltration script, and a redirect. That's all it took. They didn't submit anything to Microsoft. They weren't required to pass any review. They didn't create a store listing.

Steve Gibson [01:35:32]:
The listing already existed. Microsoft reviewed, Microsoft signed, Microsoft distributed. The attacker just claimed an orphaned URL and Microsoft's infrastructure did the rest. So their description continues with all the details, but everyone gets the idea: very poor design on Microsoft's part. I can understand Microsoft not wishing to re-vet and re-verify every change that an add-in developer might make, but they should have some mechanism for preventing abandoned and dangling URLs from being taken over and repurposed. That's just dumb. In general, the design of the internet creates this problem, right? We've all encountered abandoned domains that have been acquired, typically by low-end advertisers who snap up web domains that have expired and then host their own content that nobody wants, in the hope of generating revenue from advertisers who will pay for any traffic from anyone, and they're not discriminating.
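That "some mechanism" could start as something as simple as a periodic hygiene check: does every hostname a published manifest points at still resolve? A minimal sketch, assuming hypothetical hostnames (the real check would need more, since platform subdomains like vercel.app often still resolve via wildcard DNS even when the deployment is gone, so you'd also have to look for the platform's "deployment not found" response):

```python
# Minimal sketch of a dangling-reference check a store operator could run
# over every hostname referenced by a signed, published manifest. A name
# that no longer resolves at all (NXDOMAIN) is a takeover candidate.
# The hostnames below are illustrative, not real manifest entries.
import socket

def is_dangling(hostname: str) -> bool:
    """True if the hostname no longer resolves to any address."""
    try:
        socket.getaddrinfo(hostname, 443)
        return False
    except socket.gaierror:
        return True

for host in ("localhost", "abandoned-addin-backend.invalid"):
    print(host, "DANGLING" if is_dangling(host) else "resolves")
```

The `.invalid` top-level domain is reserved and guaranteed never to resolve, which makes it a safe stand-in here for an abandoned backend.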

Steve Gibson [01:36:46]:
But when domains that are used to host important content are abandoned, things can quickly take a turn for the worse. Years ago, we examined an instance where the domain of an important and super popular web browser JavaScript library had changed hands. Suddenly, an incredible number of web browsers were pulling a critical library from someone else. Should be enough to keep one up at night. And Leo, we're at an hour and a half. Um, we got some listener feedback. Let's take a break and then, uh.

Leo Laporte [01:37:25]:
We will plow into some feedback. No reason to stay up at night. Take a nap and I'll be back in a minute. It's our show today, brought to you by Zscaler. Brought to you by the world's largest cloud security platform. That should get your attention. We talk a lot about AI in all of our shows, of course, the potential rewards of AI in your business, I think too great to ignore. No business can afford not to at least explore AI, but the risks are there too.

Leo Laporte [01:37:59]:
I mean, there's the issue of loss of sensitive data, even attacks against enterprise-managed AI. And of course, the bad guys love it. Generative AI increases opportunities for these threat actors. It helps them to rapidly create impeccable phishing lures, write malicious code. They use AI to automate data extraction at speed. I mean, there's so many, so many things to worry about. Your employees may even be using AI right now without your knowledge. And the problem is, even if used carefully, it's possible to accidentally leak proprietary information.

Leo Laporte [01:38:38]:
For instance, there were 1.3 million instances of Social Security numbers leaked to AI applications last year. ChatGPT and Microsoft Copilot saw nearly 3.2 million data violations. So I think we can agree it's time to rethink your organization's safe use of public and private AI. That's what Chad Pallett did. He's the acting CISO at BioIVT. They use Zscaler. He uses it. He says Zscaler helped them reduce their cyber premiums, reduce them by 50%, and double their coverage.

Leo Laporte [01:39:16]:
So that's like a 4— I don't know, 50% times— I don't know, it's like a big improvement, right? And really improved their controls too. Take a look at this video we got from Chad. With Zscaler, as long as you've got.

Steve Gibson [01:39:29]:
Internet, you're good to go. A big part of the reason that we moved to a consolidated solution away from SD-WAN and VPN is to eliminate that lateral opportunity that people had and that opportunity for misdirection or open access to the network. It also was an opportunity for us.

Leo Laporte [01:39:48]:
To maintain and provide our remote users with a cafe-style environment. Thank you, Chad. With Zscaler Zero Trust plus AI, you can safely adopt generative AI and private AI to boost productivity across your business, and you don't have to worry about it. Their Zero Trust architecture plus AI helps you reduce the risks of AI-related data loss. It also protects against those AI attacks to guarantee greater productivity and compliance. Learn more at zscaler.com/security. That's zscaler.com/security. We thank them so much for their.

Steve Gibson [01:40:26]:
Support of Security Now. Back to Steve. Okay, so, uh, I got an email from Walt Stoneburner who said, Steve, thank you for pointing out that quality-tested code that adheres to functional specs is important for production level code. There's a big difference between throwing something together that seems to work but that you don't understand and experienced craftsmanship. It's not that we don't love coding, it's just a pleasant benefit. It's that we're aiming for correctness, speed, size, cost, maintainability, clarity, extensibility, expressiveness, modularity, portability, and a host of other factors that vibe coding does not do. Walt in Ashburn. Okay, so lots of our listeners are writing in saying, Steve, you know, what do you think about all of this code generation by, by AI? Um, and I've continued to think about it.

Steve Gibson [01:41:28]:
So one thing I want to say, and I know that Leo and I were talking about this, I think it was before we began recording today, is that I want to always acknowledge that wherever we are today is not where we're going to be tomorrow. It's not where we were yesterday. And I don't see any sign of this slowing down. I'm happy that so much resource— I mean, I'm happy for the hype, because the hype means that a ton of resources are being put into something which I think has great potential. Okay, that said, where we are at the moment— I've been thinking about this vibe coding, and I think that the most unnerving aspect of vibe coding for me, a lifelong coder, is the idea that a bunch of code has been cast which may do what I want and expect, but it also may not. There's every chance that in some subtle way it might misbehave. In some of the feedback I've received and shared in recent weeks, the tasks were relatively straightforward, so, you know, the various strange errors Claude Code made were obvious to its user.

Steve Gibson [01:42:50]:
You know, like that book author's name appearing twice in its field. He didn't know why, but he pointed it out to Claude and it said, oh yeah, sure enough, and then it fixed it. But this should give any true coder some pause to wonder what other far more subtle errors might be lurking in there that haven't been seen and pointed out to the code bot. And we would expect that there would be an exponentiating effect in error as projects grew in size, creating many more possible interactions and places where subtle errors might hide. And this is nothing against AI. We've talked about this with, you know, projects of any size: any time complexity gets larger, there's far more opportunity for mistakes. Okay, so, but having said that, then I challenge myself and say, okay, hold on there a second, Gibson.

Steve Gibson [01:43:53]:
When you use a library authored by some third party, you didn't write that library. You don't know everything about its innards. You're taking on faith the fact that it operates correctly. And that's true. But the difference is that when I use code from a third party, I'm able to assume that its non-AI author took pride in creating it, deliberately writing the code, knowing what it did, and testing each and every one of its functions; I'm able to assume the library's correctness. This suggests that a unit testing approach to professional AI code generation might be the solution: break the large project down into small pieces, then design and apply unit tests to verify the correct operation of each piece, you know, under every edge case and possible condition. And this echoes some of the formal code correctness verification that programmers have been applying by hand for years.
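The discipline being described can be shown with a trivial stand-in. The `clamp` function here is hypothetical, a proxy for whatever small unit an AI just generated; the point is that each piece gets explicit edge-case tests before being composed into anything larger.

```python
# Unit-test discipline applied to one small piece: test the interior
# case, both boundaries, both out-of-range sides, and the degenerate
# range, before this function is trusted inside a larger project.
def clamp(value, low, high):
    """Constrain value to the inclusive range [low, high]."""
    if low > high:
        raise ValueError("low must not exceed high")
    return max(low, min(value, high))

def test_clamp():
    assert clamp(5, 0, 10) == 5     # interior value passes through
    assert clamp(0, 0, 10) == 0     # exactly on the lower boundary
    assert clamp(10, 0, 10) == 10   # exactly on the upper boundary
    assert clamp(-3, 0, 10) == 0    # below range clamps up
    assert clamp(99, 0, 10) == 10   # above range clamps down
    assert clamp(7, 4, 4) == 4      # degenerate one-point range

test_clamp()
print("all clamp edge cases pass")
```

A library built from pieces that have each survived tests like these is one whose correctness you can begin to assume, whoever, or whatever, wrote them.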

Steve Gibson [01:45:13]:
It's considered the only way to know for sure from a testing standpoint. So perhaps AI can similarly be asked to build large projects from smaller, carefully tested pieces. There's one thing that worries me: when some aspect of my code is not doing what I expect, I'm able to quickly and easily zero in on the trouble and fix it because I wrote it in the first place. You know, it's my code, so I understand how it works and what it's supposed to be doing. But what happens when a non-coder detects that something is not working? Last year, when we began looking into Microsoft's early use of Copilot for fixing bugs, remember that instance where Copilot was shown a bug in some code where a parser was running off the end of the stack that it was parsing? Rather than fixing the underlying error, because a stack underflow should not have been possible at all, Copilot added some glue, an explicit test to prevent the pointer underflow. Okay, technically this repaired the problem by explicitly preventing the condition that revealed the bug. This is reminiscent of the old joke— excuse me— about the guy who goes to the doctor with a complaint.

Steve Gibson [01:46:53]:
He explains to his doctor that, you know, his left shoulder hurts whenever he raises his arm in a certain way, and his doctor says, no problem, just don't raise your arm like that. Of course, the joke is that the symptom was suppressed, but the underlying problem was not addressed. In the case of the early Copilot experiment, an experienced Microsoft coder was overseeing the Copilot testing and questioned whether Copilot's fix might be masking a subtler underlying problem. So I'll suggest that it's going to be very interesting to watch this whole vibe coding era play out. And I also think that we're at 1%, if that, of where we're going to be. I mean, I was among the first people to say very early on that code should be something that AI could master. And, you know, we're seeing very, very encouraging early results. Um, but again, I said it last week, I author the code that I produce.

Steve Gibson [01:48:03]:
There's no way I'm going to be selling code under my name that an AI produced. That's just— that isn't for me. Um, Denny Vandemeil said, hello Steve, longtime listener of Security Now and user of your web products and software. For many years I've held the position that free VPN services are scary in general. Then I stumbled across Cloudflare's free tier of their Warp VPN for most devices. As you know, Cloudflare's DNS IP address is 1.1.1.1. Cleverly, they bought the TLD ONE, and their free tier VPN is located at one.one.one.one. He said it works well and can be installed on Apple macOS, iOS, Android, Windows, and Linux.

Steve Gibson [01:49:00]:
Signed, Denny. And so I just wanted to thank Denny. I'd forgotten about Cloudflare's, uh, free Warp VPN offering. So I am glad for the reminder. Okay, so that's our feedback from our listeners. I want to talk about attestation and what it's about and the surprising and unplanned adventure that I had last week. Uh, why don't we just do our, our last, uh, uh, advertiser break, and then I won't break in the middle of this. Okay.

Steve Gibson [01:49:35]:
Unplanned adventures. It lets me clear my throat too.

Leo Laporte [01:49:37]:
If I got something. Yeah, unplanned adventures are never good, I think. You always want to plan them ahead of time. This episode of Security Now brought to you by Hoxhunt. Oh, we're going to talk about this on our talk at Zero Trust World. What are you calling it?

Steve Gibson [01:49:56]:
The problem is inside the house.

Leo Laporte [01:49:58]:
The call is coming from inside the house. It's your users, right? That is often the biggest problem with security. Well, Hoxhunt is about helping your users help you. That's a good way to put it. As a security leader, you are paid— well paid, I hope, never enough, I'm sure— to protect your company against cyber attacks. But it's getting harder. Well, I mean, there are more cyber attacks than ever. And if you are faced with this idea of, you know, keeping your employees from clicking malicious links.

Leo Laporte [01:50:33]:
You got to be terrified by the AI-generated emails you're getting, the phishing emails. They're getting better and better. I was fooled. I was fooled. I shouldn't have been, but I was a couple of weeks ago. We talked about it on the show. Every morning as Lisa goes through her email, I go through my email. We compare phishing emails that we're getting.

Leo Laporte [01:50:55]:
You know, it just terrifies me, you know, to know that our employees are getting the same email and they could click one, one wrong click and you're, you're dead. Problem is legacy one-size-fits-all awareness programs really don't stand a chance. At most, they're sending out 4 kind of generic trainings a year. Most employees hate them. They loathe them. They ignore them. And then of course, uh, you're sending out the tests, but when somebody actually clicks, they're forced into an embarrassing training program that feels like a punishment. That's no way to learn.

Leo Laporte [01:51:30]:
That's why more and more organizations are trying Hoxhunt. Hoxhunt goes beyond security awareness and changes behaviors by rewarding good clicks and coaching away the bad. Whenever an employee suspects an email might be a scam, Hoxhunt will tell them instantly, providing a dopamine rush that gets your people to click, to learn, to protect your company. And you'll love it. As an admin, Hoxhunt makes it easy to automatically deliver phishing simulations in any way you want— email, Slack, Teams. You can use AI to mimic the latest real-world attacks. The simulations are personalized to each employee based on department, location, and more, while instant micro trainings solidify understanding and drive lasting safe behaviors because they're little, they're short, and they're fun. You can trigger gamified security awareness training that awards employees with stars and badges.

Leo Laporte [01:52:26]:
Now, I know that sounds dopey. I got to tell you, though, from my own experience, it's not. You feel good. I got a star. I got it. You should go see the demo at the website. They'll surround it. Tada! You did it! That boosts completion rates, that ensures compliance.

Leo Laporte [01:52:47]:
People learn when they're having a good time, when they're enjoying themselves, not when they're being punished with trainings that are boring and useless. Plus, you could choose from a huge library of customizable training packages so you can really fit it to your needs. You can even generate your own with AI. Hoxhunt— they've got everything you need to run effective security training in one platform, meaning it's easy to measurably reduce your human cyber risk at scale. You don't have to take my word for it. There are over 3,000 user reviews on G2 that make Hoxhunt the top-rated security training platform for the enterprise, including easiest to use, best results. It's also recognized as a customer's choice by Gartner, and thousands of companies like Qualcomm, AES, and Nokia use it to train millions of employees all over the globe.

Leo Laporte [01:53:37]:
Visit hoxhunt.com/securitynow to learn why modern secure companies are making the switch to Hoxhunt. Do it right now. hoxhunt.com/securitynow. H-O-X-H-U-N-T, like fox hunt with an H. hoxhunt.com/securitynow. It really works. It's great. And we thank them so much for supporting Steve.

Steve Gibson [01:54:03]:
Now. Let's talk about attestation. As I've noted and warned, the month of March 2026, which is now a mere 2 weeks away, will see major changes in the identity certificate issuing industry. A few weeks ago, near the end of January— actually, it was Monday, January 26th— being a customer of DigiCert, as I have been for a long time, I received a courtesy piece of email with the subject, Important Reminder: TLS/SSL Certificate Lifetimes Changing February 24th, 2026. They said, hello, we're writing to remind you that starting February 24th, 2026, TLS/SSL certificates issued through DigiCert CertCentral will have a maximum validity of 199 days, down from 397. Okay, so close to 200 versus close to 400: they're basically cutting certificate life in half. This change to shorter certificate lifetimes is an industry-wide requirement mandated by new CA/Browser Forum baseline requirements.
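What the 397-to-199-day change means in practice is simply more frequent reissues. Here's a back-of-the-envelope sketch; the renew-30-days-before-expiry margin is my own assumed policy, not anything DigiCert or the CA/Browser Forum prescribes.

```python
# How many reissues does each lifetime regime force over roughly
# 3 years, if you renew 30 days before each expiry?
from datetime import date, timedelta

def renewal_dates(start, lifetime_days, margin_days, years):
    """Dates a certificate must be reissued, renewing `margin_days`
    before each expiry, over roughly `years` years."""
    horizon = start + timedelta(days=365 * years)
    renewals, issued = [], start
    while True:
        renew_on = issued + timedelta(days=lifetime_days - margin_days)
        if renew_on >= horizon:
            break
        renewals.append(renew_on)
        issued = renew_on
    return renewals

start = date(2026, 2, 24)
old = renewal_dates(start, 397, 30, 3)
new = renewal_dates(start, 199, 30, 3)
print(f"Reissues over 3 years: {len(old)} at 397 days vs {len(new)} at 199 days")
```

Under these assumptions, the 199-day regime triples the renewal workload, which is exactly why shorter lifetimes effectively force everyone toward automated issuance.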

Steve Gibson [01:55:19]:
While shorter lifetimes may require adjustments, yeah, they also reduce risk, blah, blah, blah, right? Justifying all of this. So basically they explain that they're cutting the lifetime of their certs in half and how this affects their customers. Everyone who's been following this podcast knows only too well the reasoning behind my feelings about this ridiculous and extremely inconvenient shortening of certificate lifetime. And that's doubly so for code signing certificates, which, unlike web server certificates, can only be stored in HSM hardware, making them completely impervious to remote theft. In this case, DigiCert is alerting their customers and giving us a 1-month reminder of the upcoming reduction: for web-server-authenticating TLS certificates, maximum certificate lifetime will be dropping from 1 year plus some margin down to just 6 months plus some margin. One of the consequences of the industry's shortening certificate lifetimes is the need to decouple certificate issuance from certificate qualification. In the bygone days when certificates lasted for 5 or 10 years, as they once did, the act of proving you were who you claimed to be would be part of the certificate renewal process. In applying for or renewing a certificate, you would need to do whatever the CA asked you to do to prove that you were you.

Steve Gibson [01:57:03]:
You know, but now that process has also been significantly fouled up. 'Cause, right, you don't want to have to do that with, you know, every time you need to renew a certificate. DigiCert's email says, for example, on February 24th, meaning a week, a little more than a week from now, OV, Organization Validation reuse periods, will be shortened from 825 days to 397 days. On the same date, on February 24th, domain validation reuse periods will be shortened from 397 to 199. In other words, it will now be necessary to revalidate one's organization annually rather than only every 2 and a quarter years. It used to be 825 days every 2 and a quarter years. Now you got to do it every year. Like reprove who you are, who your organization is.

Steve Gibson [01:58:07]:
Now, given that Let's Encrypt only offers domain validation certificates, not organization validation— all they're saying is what you need to connect to a server reliably, which I think makes total sense, and thus it doesn't incur any of this nonsense— I have a difficult time understanding how the CAs are not putting themselves out of business with these kinds of practices. I suppose they plan to survive on all of the other various types of certificates which they issue and manage, you know, such as for signing documents and such. And they'll continue to offer TLS web certificates sort of as a loss leader. You know, it's like, well, they just want to offer a full suite of products, so they will continue to offer certificates because they already do. In order to obtain the best price possible, I previously purchased TLS certification from DigiCert into 2028. In preparation for this March, GRC recently jumped through the various organization validation hoops, and at the start of last week I reissued GRC's, you know, grc.com, our TLS domain certificates, well in advance of DigiCert's February 24th deadline, for a full year, you know, a year plus, 397 days, because I didn't want anything to happen that might get in the way of that. Because with certification now having become so involved, you've got to be standing by the phone when it rings and jump through all kinds of hoops. There's no telling when or why that process might fail or stall. I've been surprised in the past, so I wanted to give myself time to fix anything that might fail before that deadline.

Steve Gibson [02:00:06]:
Now, the process, as it turns out this time, proceeded without a hitch. So just because I can, and because DigiCert has no problem with reissuing certificates, next Monday morning, the 23rd, the day before the drop-dead date, the last possible day to obtain a 397-day certificate, I'm going to do that again. So the first part of this should serve as a heads-up reminder to anyone who might similarly have better things to do right at this moment than figure out how to switch their certificates over to Let's Encrypt: whoever their CA is, there will be an end-of-the-month halving of standard TLS certificate lifetime, from more than a year to just over half a year. And of course, I will be moving over to Let's Encrypt and switching to domain validation instead of organization validation certs as soon as it makes sense. I've already pre-purchased certificates from DigiCert through 2028, so I might as well use them, and then I'll switch to Let's Encrypt. Okay. So that's the current status of the TLS web server certificate side.
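For anyone tracking their own deadlines, the new periods are just date arithmetic. Here's a quick sketch; the period lengths are the ones from the DigiCert email quoted above, and the issuance date is the February 23rd, 2026 example from the discussion, not anything prescriptive:

```python
from datetime import date, timedelta

# Period lengths (in days) discussed above.
CERT_LIFETIME = 397   # last day to obtain this length is February 23rd
OV_REUSE_NEW = 397    # organization validation reuse, down from 825
DV_REUSE_NEW = 199    # domain validation reuse, down from 397

issued = date(2026, 2, 23)
print("certificate expires: ", issued + timedelta(days=CERT_LIFETIME))
print("OV revalidation due: ", issued + timedelta(days=OV_REUSE_NEW))
print("DV revalidation due: ", issued + timedelta(days=DV_REUSE_NEW))
```

Running this shows, for example, that a 397-day certificate issued on the last eligible day carries you into late March 2027, while a 199-day domain validation reuse window runs out in early September 2026.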

Steve Gibson [02:01:30]:
But my primary focus today is on another class of certificates I've recently discussed, specifically code signing. As I noted recently, the maximum lifetime of code signing certificates is also being cut, in this case to a third, from a convenient 3 years down to just 1 year. Anyone who examines any of the software that's available from GRC will find that it's all signed with a DigiCert certificate. Sadly, that will no longer be true after this August, when my current code signing certificate reaches the end of its 3-year life. That's the EV certificate I'm currently signing with. I would prefer to remain with DigiCert. You know, why not? They've been good to me. But the recent changes at DigiCert have overcome my own change inertia, which is big, for a code signing certificate authority.

Steve Gibson [02:02:36]:
So long as there's any practical alternative, I will not countenance renting the privilege of signing my own code. I can't imagine using a cloud-based provider who places a limit on the number of signatures I'm able to make and charges per signature for any overage. And even when signing my code with my own customer-provided HSM, which is what I've been doing for the past 3 years with DigiCert, the least expensive code signing plan where the user provides their own hardware is advertised as $50 per month. But that's disingenuous as hell, since it's not possible to purchase it in monthly increments. It's only available with an auto-renewing annual commitment paid in advance. So that's $600 a year, and even that $600 per year presumably is subject to change at the next annual billing cycle, since there's no longer any way to pre-purchase future years to get a price commitment. So while I'm bitterly disappointed in DigiCert, to whom I felt a well-deserved loyalty for many years, I don't mean to single them out. The entire code signing certificate industry appears to be headed in the same direction, and it's not pretty.

Steve Gibson [02:04:07]:
As I was looking around, I discovered that a number of other CAs are now reselling DigiCert with exactly the same pricing structure. I mean, it is DigiCert for all intents and purposes; you just go to them through a different domain and website. Scouting around, I found that IdenTrust will still sell a no-strings-attached 3-year code signing certificate for $538, which, when placed into my own HSM, I'm able to use. So that's $179 a year, basically 30% of the cost of remaining with DigiCert. And that's assuming that DigiCert doesn't choose to further raise their prices before the next 3 years have passed. IdenTrust is well known, so IdenTrust it was for me. And thus began the new adventure of obtaining a code signing certificate in 2026.

Steve Gibson [02:05:15]:
Our illustrious CA/Browser Forum has added a surprising hoop through which anyone wishing to obtain a code signing certificate must now jump. The CA/Browser Forum requires the issuing certificate authority to obtain an attestation letter from an independent, legally licensed attorney or CPA, a certified public accountant. This third-party individual must attest to having firsthand knowledge of the legitimacy of the corporation and its officers. Okay, now, since Gibson Research Corporation has been a taxpaying California corporation in good standing for 37 years, with a stable business location, a DNS domain name, and a well-known presence, I initially doubted the need for this attestation letter, which I've never needed before nor been asked to provide, and IdenTrust's documentation was unclear about it. So one week ago today, last Tuesday, I created an account with IdenTrust and received a link to download a PDF packet of documents. I filled them out, omitting the clearly separate final 3 pages that contain the attestation letter details.

Steve Gibson [02:06:41]:
I sent this off to IdenTrust in Utah via Federal Express overnight. Last Wednesday, the next morning, at 11:32 AM, I received email confirmation of the forms having been received. And 35 minutes later, at 12:07 on Wednesday, I received notice with the subject "Action Required: Code Signing Application Attestation Letter Required." Oh, great. So the famous Merriam-Webster dictionary defines attestation as "an act or instance of attesting something, such as a proving of the existence of something through evidence, or an official verification of something as true or authentic." So apparently I needed to provide IdenTrust with an attestation letter. My lifelong personal and corporate attorney retired from practice a few years ago, and I'm sure he allowed his license to lapse. But I've been using the same CPA tax accountant firm for more than 40 years, since 1984.

Steve Gibson [02:08:00]:
So I asked my California-licensed CPA if I could trouble him to use his license, because he's got to fill out this form in which, you know, basically he's having to justify his own existence as a California-licensed CPA to IdenTrust. I asked him if he would attest to Gibson Research Corporation's identity. He didn't hesitate to say yes. So Wednesday afternoon, I emailed IdenTrust's 3-page attestation letter document to him. The CA/Browser Forum requires either a digital signature using the attesting individual's personal certificate, which is not something that my CPA had, or what they termed a wet-signed original. My CPA filled out the PDF and signed it in nice blue ink, so it was very clearly not from some printer. Thursday morning, I dropped by his office and picked it up, then swung by FedEx to send the originally signed attestation letter to IdenTrust.

Steve Gibson [02:09:13]:
Late the next morning, last Friday, I received notice that my identity had been established, and a few hours later, a code signing certificate was issued. So, success. My reason for sharing all this is to establish the proper and full context for understanding what has happened to us, to the entire PC industry, in response to the threat of malware. This is the nature of the cost and the burden that malware has inflicted upon the world. I dislike what I've just had to go through to obtain the privilege of adding a cryptographic signature to my code as the only available means of proving my identity as my code's signer. But as long as our systems are subject to abuse from malicious software, I understand the need to have some unspoofable means of determining the source of any software we allow to run on our computers. As we've seen, all of the PC desktop and mobile platforms that are able to run third-party applications, with the notable exception of Linux, check and verify the cryptographic signature of any code they're being asked to run before they let their processors near it.
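The per-bit sensitivity of that verification comes from the cryptographic hash underneath every code signature: the signature covers a digest of the file, so flipping even one bit of the code invalidates it. Here's a toy Python illustration of just the hashing half; the stand-in bytes are made up, and real code signing wraps this digest in a public-key signature plus a timestamp:

```python
import hashlib

# Stand-in for compiled program bytes; any content works for this demo.
code = bytearray(b"hypothetical compiled program bytes")
signed_digest = hashlib.sha256(code).hexdigest()  # what a signature covers

code[0] ^= 0x01                                   # flip a single bit
current_digest = hashlib.sha256(code).hexdigest()

# Verification recomputes the digest and compares; one flipped bit fails it.
print("verifies:", current_digest == signed_digest)
```

This prints "verifies: False", which is exactly the property the platforms rely on: any post-signing tampering, however small, is detectable.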

Steve Gibson [02:10:47]:
So I understand the need for this, and I have no better idea. But what really rubs me the wrong way is the apparent profiteering by the industry's certificate authorities. I get it that the CA/Browser Forum's increasingly stringent policies have increased the verification burden upon those CAs, and thus the cost of offering this service. But even that is one-time and non-recurring. Once any new CA has figured out who I and Gibson Research Corporation are, that's not going to ever change, just as it never did for DigiCert. I must have been grandfathered in, because I was never asked to do all this by DigiCert. These requirements were already in place when I obtained my most recent EV code signing certificate, and as I said, I never needed to go through any of this, presumably because I had already established a long, multi-year relationship with them. Looking over the current baseline requirements (that's what they're called in this document), which dictate the behavior of all certificate authorities that issue code signing certificates, it became clear that the standing and authenticity of my own CPA had also just been thoroughly researched.

Steve Gibson [02:12:24]:
Today, you know, I'm calling this podcast Attestation because I want to share what I just learned about the extent of what this attestation means. It's a bit eye-opening. The document which governs the conduct of the world's certificate authorities is titled "Baseline Requirements for the Issuance and Management of Publicly Trusted Code Signing Certificates," version 3.8.0. Now, everyone should keep in mind that these requirements are applicable to anyone and everyone who wishes to create code that will be signed and widely accepted by any platform. Trusted code must be signed and timestamped by a code signing certificate that was unexpired, as we know, at the time of the signing. Near the top of the baseline requirements is a section of definitions. Under "attestation letter," the document says: "A letter attesting that subject information is correct, written by an accountant, lawyer, government official, or other reliable third party customarily relied upon for such information."

Steve Gibson [02:13:49]:
Section 3.2.2.1, authentication of organization identity for non-EV. Remember that I'm not going for EV again, because that's just throwing money away at this point. I'm seeing some language on the internet that says that Windows' SmartScreen filter gives you immediate benefit if you're using an EV cert. I think that may just be inertia from years past, because Microsoft is reportedly, and we've talked about this, no longer giving EV any extra weight whatsoever. So all of this is the minimal verification and certification for a code signing certificate. I don't even want to think about what would be required to establish extended validation with a new certificate authority. Section 3.2.2.1 says that prior to issuing a code signing certificate to an organizational applicant, the CA must verify the subject's legal identity, including any DBA (you know, doing business as) proposed for inclusion in a certificate, in accordance with Section 3.2.2.1.1, under identity, and 3.2.2.1.2, under DBA/trade name.

Steve Gibson [02:15:17]:
The CA must also obtain, whenever applicable, a specific registration identifier assigned to the applicant by a government agency in the jurisdiction of the applicant's legal creation, existence, or recognition. That was point 1. Point 2: verify the subject's address in accordance with Section 3.2.2.1.1, under identity. Third: verify the certificate requester's authority to request a code signing certificate, and the authenticity of the certificate request, using a reliable method of communication (that's in all caps, so it's an official term, Reliable Method of Communication) in accordance with Section 3.2.5, Validation of Authority. And finally, point 4: if the subject's, or the subject's affiliates', parent companies', or subsidiary companies', date of formation is less than 3 years prior to the date of the certificate request (thank goodness mine's 37 years), verify the identity of the certificate requester. The method used to verify the identity of the certificate requester shall be per Section 3.2.3.1, individual identity verification.

Steve Gibson [02:16:42]:
Okay, so if the corporate entity is less than 3 years old, then the identity of the requester is also verified. There were several references to Section 3.2.2.1.1, under identity, so that definitely comes into play. It says that if the subject identity information is to include the name or address of an organization (and it has to), the CA shall verify the identity and address of the organization, and that the address is the applicant's address of existence or operation. The CA shall verify the identity and address of the applicant using documentation provided by, or through communication with, at least one of the following: a government agency in the jurisdiction of the applicant's legal creation, existence, or recognition; a third-party database that's periodically updated and considered a reliable data source; a site visit by the CA or a third party who's acting as an agent for the CA; or an attestation letter. Thank goodness. The CA may use the same documentation or communication described in 1 through 4 above to verify the applicant's identity and address. Alternatively, the CA may verify the address of the applicant, but not the identity of the applicant, using a utility bill, bank statement, credit card statement, government-issued tax document, or other form of identification that the CA determines to be reliable. I should note that it has become very difficult for individuals to obtain code signing certificates.

Steve Gibson [02:18:25]:
It's not impossible. There is something known as an IV certificate, an individual validation certificate, but not all CAs offer them. Only a couple do. So how do individuals confirm their identity? The baseline requirements assert that a principal individual associated with the business identity must be validated (that is, I, who represent Gibson Research Corporation as its president and CEO, must be validated) in a face-to-face setting. The CA may rely upon a face-to-face validation of the principal individual performed by the registration agency, provided that the CA has evaluated the validation procedure and concluded that it satisfies the requirements of the guidelines for face-to-face validation procedures. Okay, and I'm going to skip a few paragraphs of this, like, mind-numbing boilerplate. A personal statement has to be provided and signed, providing a full name and/or names by which a person is or has been previously known, a residential address at which he or she can be located, a date of birth, and an affirmation that all information contained in the certificate request is true and correct. Also required: a current signed government-issued identification document that includes a photo of the individual and is signed by the individual, such as a passport, driver's license, personal identification card, concealed weapons permit, or military ID.

Steve Gibson [02:20:02]:
Also, at least two pieces of secondary documentary evidence to establish his or her identity that include the name of the individual, one of which must be from a financial institution. Acceptable financial institution documents include a major credit card, provided that it contains an expiration date and has not expired; a debit card from a regulated financial institution, provided that it contains an expiration date and has not expired; a mortgage statement from a recognizable lender that is less than 6 months old; or a bank statement from a regulated financial institution that is less than 6 months old. There are acceptable non-financial documents too, and it goes on like that. I mean, wow. And then, a third-party validator performing the face-to-face validation must attest to the signing of the personal statement and the identity of the signer, and identify the original vetting documents used to perform the identification. In addition, the third-party validator must attest on a copy of the current signed government-issued photo identification document that it is a full, true, and accurate reproduction of the original. Now, of course, the certificate authority doesn't know who this supposed third-party validator is, right? So the baseline requirements state, about the third-party validator, that the CA must independently verify that the third-party validator is a legally qualified Latin notary, which is a special, like, high-end type of notary whose statements aren't questioned. Do they speak in Latin? No, it's weird.

Steve Gibson [02:21:50]:
I didn't know what it was either, so I did some research, and it is like a super special class of notary (or a regular notary or legal equivalent in the applicant's jurisdiction, or a lawyer or accountant in the jurisdiction of the individual's residency), and that the third-party validator actually did perform the services and did attest to the signatures of the individual. And that leads me to the final piece I want to share of this far longer and detailed document, most of which I'm going to skip, under verification of attestation. The baseline requirements say the CA must confirm the authenticity of the attestation and vetting documents, and then elaborate acceptable methods of establishing the foregoing requirements for vetting documents. First, the CA must verify the professional status of the third-party validator, meaning my CPA, by directly contacting the authority responsible for registering or licensing such third-party validators in the applicable jurisdiction. Second, the third-party validator must submit a statement to the certificate authority which attests that they obtained the vetting documents submitted to the CA for the individual during a face-to-face meeting with the individual. In my case, that happened between me and my longstanding CPA last Thursday. And finally, third, the CA must confirm the authenticity of the vetting documents received from the third-party validator. The CA, the certificate authority, must make a telephone call to the third-party validator and obtain confirmation from them or their assistant that they performed the face-to-face validation. The CA may rely upon self-reported information obtained from the third-party validator for the sole purpose of performing this verification process.

Steve Gibson [02:24:08]:
Oh, whoa. Now, if all of that leaves you feeling somewhat dizzy, you're not alone. I almost feel guilty, Leo, that I was able to pass through that verification gauntlet.

Leo Laporte [02:24:23]:
You're one of the few, the proud.

Steve Gibson [02:24:26]:
That's right. I'm somewhat surprised that I was accepted by IdenTrust without first agreeing to a full-body cavity search. Although, amazingly, I'm pretty sure that I would need a new CPA if that happened. So, okay, stepping back from all of those gory details for a moment, think about what all this means, why this was done, and what it does and does not achieve in return for all this effort. Our industry is desperately trying to get control of the malware scourge. Among other things, we're seeing attacks at every stage of the software creation process. Source code repositories are being attacked and poisoned.

Steve Gibson [02:25:20]:
Malicious libraries are given off-by-one-character names in the hope that a developer will introduce a typo at just the right place to invoke the typosquatted library to devastating effect. Even AI has been used to invoke a malicious library as a result of a weaponized hallucination. And you know, the most frustrating part of this, in the context of today's discussion of code signing, is that any of these or similar supply chain attacks would result in compiled code that is then code-signed in good faith by its publisher and accepted by any commercial OS platform, despite inadvertently incorporating that infiltrated malware. In other words, it's not as if blessing code with a signature is able to confer any assurance about the behavior of the code that's been signed. It's still got bugs. It might even be malicious. The only thing signing is able to do is assert that not a single bit of the signed code has been altered since its signing, as well as the identity of the signer as it was known to the certificate authority that issued the signer's certificate.
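The off-by-one-character trick lends itself to a simple defensive check that a build pipeline could run before any dependency is installed. Here's a minimal sketch; the allowlist, the function name, and the similarity threshold are all made up for illustration, not any real tool's API:

```python
import difflib

# Hypothetical allowlist of dependencies a project actually uses.
TRUSTED = {"requests", "numpy", "cryptography"}

def looks_typosquatted(name: str, cutoff: float = 0.85) -> bool:
    """True if name is close to, but not an exact match for, a trusted name."""
    if name in TRUSTED:
        return False
    # get_close_matches returns trusted names whose similarity ratio
    # meets the cutoff; any hit means "suspiciously close but not equal".
    return bool(difflib.get_close_matches(name, TRUSTED, n=1, cutoff=cutoff))

print(looks_typosquatted("requests"))   # exact trusted name: not flagged
print(looks_typosquatted("reqeusts"))   # one transposition away: flagged
```

Real-world scanners are far more sophisticated (keyboard-adjacency models, download-count asymmetry, and so on), but the core idea is the same: names that are nearly, but not exactly, a popular package deserve a second look.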

Steve Gibson [02:26:54]:
But that said, we're certainly far better off occupying a world where entities who are not interested in deliberately creating malware are able to sign their code and have their unspoofable signatures recognized by the guardians of the platforms we're all using. So what's the point of all this seemingly over-the-top attestation? Well, with the world's major commercial platforms having become completely unwilling to run any software that's unsigned, Linux excepted, the bad guys must somehow arrange to get their malware signed, right? One avenue we've seen is to attack the software supply chain in the hope of being incorporated into otherwise legitimate software under the code signing signature of some unsuspecting developer. The other, much more powerful solution that's available to the bad guys is the direct full-frontal approach of obtaining their own legitimate code signing certificate from one of the many trusted certificate authorities. The blockade that now prevents the major commercial OS platforms from executing any code that has not been signed has created huge pressure to spoof, or just make up, synthesize, a corporate identity in order to trick certificate authorities into issuing valid code signing certificates to explicitly malicious parties. Fraudulent code signing certificates are a real problem. This explains why, today, it's the reputation of the signing certificate that matters, not just its existence. The CA/Browser Forum understands that what they just put me through was inconvenient as all heck and a pain in the butt, but what choice do we have? They cannot simply take the word of anyone who may be able to recite, you know, "a Boy Scout is trustworthy, loyal, helpful, friendly, courteous, kind, obedient, cheerful, thrifty, brave, clean, and reverent." No, that doesn't cut it.

Steve Gibson [02:29:31]:
They clearly need another trust anchor, and that anchor is a licensed attorney or CPA who will be willing to put their own reputation and license on the line to substantiate and attest to the identity of the code signing certificate applicant. Given what I just went through, anyone who may have forgotten, or may have been putting off obtaining a 3-year code signing certificate, has about 10 days from today to get that done. So if you want a certificate good for 3 years, you can get one from IdenTrust, and I was very impressed with how quickly they moved. If you're attempting to establish your or your company's identity with a new certificate authority like IdenTrust, take the need for an attestation letter from an attorney or CPA to heart. It may save you, as it would have for me, another couple of days that you might not have remaining, because you want to squeak in, you know, into February. And I would expect the code signing certificate authorities to be a bit busy as these last days of February expire and 3-year certificate availability winds down. And remember, if you want to avoid cloud-based pay-as-you-go or limited-quantity code signing, having your own signing hardware is now a requirement. And if you get that done now, you'll be able to use it, whatever you do, for 3 years.

Steve Gibson [02:31:17]:
I'm glad I'm doing that. I want to get this new certificate from IdenTrust, since my current certificate with DigiCert lasts through August, at which time I will not be getting another one from them. My plan is to dual-sign my software so that the world has a chance to see this new certificate, but also sees that it's co-signed with the, well, now 2-and-a-half-year-old, DigiCert code signing certificate. And then the DigiCert certificate will drop off after it expires.

Leo Laporte [02:31:57]:
So, boy, I mean, it doesn't feel very robust, I have to say.

Steve Gibson [02:32:00]:
No, it's not. I mean, you're right. You could get somebody to fake a CPA or fake an attorney.

Leo Laporte [02:32:10]:
And, you know, I mean, there's got to be a better way to do this. There just has to be. It just feels like they're not improving it. They're just kind of layering stuff on.

Steve Gibson [02:32:19]:
Well, and the price. I mean, on one hand, okay, they clearly had to jump through some hoops, but boy, are they making it expensive just to produce code.

Leo Laporte [02:32:34]:
Yeah, and I feel like that's the point, is to make money off of you producing code.

Steve Gibson [02:32:36]:
Unfortunately. I mean, I like DigiCert, but as I looked around, I found that GlobalSign and, like, 4 other CAs were doing the same.

Leo Laporte [02:32:45]:
I've used GlobalSign for, uh, S/MIME certs. Yeah, yeah, there are others.

Steve Gibson [02:32:54]:
Yeah.

Leo Laporte [02:32:54]:
And a lot of this is not a solution to this, to any of this, the code signing. ACME.

Steve Gibson [02:33:00]:
No, none of this can be used for code signing, because ACME is explicitly saying, I control this domain. Code signing is saying, I am this identity.
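Steve's distinction is visible right in ACME's plumbing. In RFC 8555's HTTP-01 challenge, all the applicant ever proves is the ability to publish a CA-chosen token, bound to its account key, on the domain; nothing attests to who is behind it. Here's a sketch of building that key authorization string, with a made-up token and placeholder account key values:

```python
import base64
import hashlib
import json

def b64url(data: bytes) -> str:
    # ACME uses base64url encoding without padding.
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

# Made-up account public key in JWK form (the x and y values are placeholders).
account_jwk = {"crv": "P-256", "kty": "EC", "x": "fake-x", "y": "fake-y"}

# RFC 7638 thumbprint: SHA-256 over the JWK serialized with sorted keys
# and no whitespace.
thumbprint = b64url(hashlib.sha256(
    json.dumps(account_jwk, separators=(",", ":"), sort_keys=True).encode()
).digest())

token = "evaGxfADs6pSRb2LAv9IZ"  # random challenge token the CA would issue
key_authorization = f"{token}.{thumbprint}"

# The server must answer GET /.well-known/acme-challenge/<token> with this:
print(key_authorization)
```

Proving you can serve that string from a given domain says "I control this domain" and nothing more, which is exactly why ACME's automation can't substitute for code signing's "I am this identity."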

Leo Laporte [02:33:11]:
So it's, I am me. Exactly. Yeah, authentication is so hard. Maybe, you know, Sam Altman's got the right idea with the orb, the iris-scanning orb. I mean, he recognizes this is an issue. This is going to be an issue in the new world. How do you prove you are who you say you are?

Steve Gibson [02:33:28]:
Well, and think about it. I mean, a global network just decouples you from identity. We've been heralding that as the great liberation. It frees us. It's, oh my God, you know, we get to be autonomous, and you can be a dog if you want to be. Right. Unfortunately, there are instances where it really does matter. Bad guys will abuse that very anonymity.

Steve Gibson [02:33:59]:
And so it turns out clamping down on it is really hard.

Leo Laporte [02:34:01]:
Yeah. Neal Stephenson writes about this in his book Fall; or, Dodge in Hell. And what he talks about is having kind of a variety of identities you can assume. You have your real identity, which you can prove, but in order to allow anonymity and flexibility and autonomy, you also have other identities that are spawned from your real identity but can't be connected back to it. And I think that we'll end up with something like that. It might be tied to some sort of TPM hardware or something, but we'll end up with something, a chip implanted in your brain at birth or something.

Leo Laporte [02:34:44]:
We've got to solve it.

Steve Gibson [02:34:46]:
It's a big issue. And we're running smack into it with the whole age restriction deal. All of our politicians have suddenly decided, well, we don't know how you're going to solve it, but you guys are smart.

Leo Laporte [02:35:01]:
Nerd harder. Yeah. Yeah. You'll figure it out. What an interesting subject. I feel like authentication is one of the most interesting and thorny problems we have. And it's a necessity. We need to solve it.

Steve Gibson [02:35:12]:
It's why I spent 7 years on SQRL.

Leo Laporte [02:35:15]:
SQRL, squirrel, that was it, right?

Steve Gibson [02:35:16]:
You know, right, it was really worth fixing.

Leo Laporte [02:35:20]:
Yeah, that's the guy. That's the squirrel guy, Steve Gibson. He's at grc.com. You might want to check it out. There's a lot of great stuff at GRC. Of course, the two programs he sells, that's how he makes his living: SpinRite, the world's best mass storage maintenance, recovery, and performance-enhancing utility. That's at grc.com. But there's also his brand-new DNS Benchmark Pro, which is inexpensive, $9.99, and that's there.

Leo Laporte [02:35:47]:
And you should probably own both of them. Check it out. While you're there, you can get your email validated and sign up for his two email mailing lists. Go to grc.com/email. You provide your email address; he, through the magic of something, will attest that you are at that address and not a spammer. And well, then you can send him email from that address. So that's good.

Leo Laporte [02:36:15]:
And then below it, there are two unchecked boxes, one for the newsletter for this show, the weekly show notes, which everybody should definitely subscribe to. That's really great. You'll get 'em a day or two ahead of the show, and you can read along as you listen and so forth.

Steve Gibson [02:36:27]:
We are 17 subscribers shy of 20,000 Security Now subscribers.

Leo Laporte [02:36:36]:
That is, that is really cool. Getting there. And it's a book every week. I mean, this is a free magazine. Really, it is. It's a free magazine, 21 pages of great stuff this week. The other email list is his announcements list, and he doesn't have many announcements.

Steve Gibson [02:36:55]:
In fact, I really basically don't ever use it. But I'm zeroing in on finishing the updates to our e-commerce system to create the notion of single-user and consultant licenses.

Leo Laporte [02:37:09]:
And so that will be happening soon. Cool. And that'll be announced, probably. Let's see, what else? Lots of other things there. A lot of free utilities, a lot of information. It's really a great hang. And the podcast, of course: the versions Steve hosts are unique.

Leo Laporte [02:37:28]:
He has a 16-kilobit audio version, admittedly not the highest fidelity, but it is small, and that's its real virtue. There is a 64-kilobit audio version that sounds fine. It's still smaller than the one we offer, so maybe go there to get that. He also has the show notes for download, so you don't have to subscribe to the newsletter. You can just download them. He has transcripts. Elaine Farris, an actual human person, writes those every week. Takes a couple of days after the show.

Leo Laporte [02:37:53]:
That's how you know it's an actual human person. But those are really well done, and those are available too. Useful for searching, reading along, you know, late night, trying to fall asleep, whatever. They're great.

Steve Gibson [02:38:07]:
You should have them.

Leo Laporte [02:38:08]:
It'll put you right out. All of that at grc.com. We have the show at our site too, twit.tv/sn. Our version is different. We have 128-kilobit audio. Don't ask. We also have video, which is nice if you want to see Steve's mustache.

Leo Laporte [02:38:28]:
There is video at the YouTube channel dedicated to Security Now. That's nice. I was just looking: you have 76,000 subscribers to your YouTube channel, which is not shabby, not at all shabby. That's pretty good. So obviously people subscribe there. Press the subscribe button, it's free, and then, I don't know what you get. I don't really understand how YouTube works. There's a bell, there's a thing. I don't know, you get notifications.

Leo Laporte [02:38:55]:
I don't know. You know, we also have a Twitch channel at twitch.tv/twit. You could do the same thing there and get notifications when we go live with shows, that kind of thing. We do go live every Tuesday right after MacBreak Weekly, supposed to be 1:30. Sometimes a little late. We were late today. I'm sorry. That's 1:30 Pacific, 4:30 Eastern, 21:30 UTC.

Leo Laporte [02:39:15]:
We stream live in the Club Twit Discord. I hope you're a club member. If you're not, go to twit.tv/clubtwit. It's $10 a month. It's not free, but you get ad-free versions of all the shows and lots of extra content. And if you're a Club Twit member, you can also watch this show in the Club Twit Discord. Great place to hang out. Lots of smart people.

Leo Laporte [02:39:34]:
We've been having great conversations in there. You can also watch on YouTube, Twitch, X, Facebook, LinkedIn, and Kick. So we stream on 6, 7 different platforms, uh, every Tuesday. Um, and the best way to get it, of course, subscribe in your favorite podcast client. That way you're going to get it automatically. You can listen whenever you want. You get the audio, get the video, whatever. Thank you, Steve.

Leo Laporte [02:39:58]:
Have a great week, and we will see you next time right here on Security Now. I'll be back.

Steve Gibson [02:40:06]:
Bye.

Leo Laporte [02:40:06]:
Hey, everybody, Leo Laporte here. You know what a great gift would be, whether for the holidays or just any time, a birthday? A membership in Club Twit. If you have a TWiT listener in your family, somebody who enjoys our programming, and you want to give them a nice gift and support what we do, visit twit.tv/clubtwit. They'll really appreciate it, and so will we. Thank you.
