Transcripts

Security Now 981 transcript

Please be advised this transcript is AI-generated and may not be word for word. Time codes refer to the approximate times in the ad-supported version of the show

0:00:00 - Leo Laporte
It's time for Security Now. Steve Gibson is here with some scary news about OpenSSH. An old bug that was fixed is back, and it means a lot of people are going to have to update their OpenSSH. Problems with the speed of Syncthing? Steve has a solution. And then we'll talk about a certificate authority that really seems to have messed up. It looks like they're in deep trouble. It's all coming up next on Security Now. Podcasts you love.

0:00:33 - Steve Gibson
From people you trust. This is TWiT.

0:00:42 - Leo Laporte
This is Security Now with Steve Gibson, episode 981, recorded Tuesday, July 2nd, 2024: The End of Entrust Trust. It's time for Security Now, the show where we cover the latest security and privacy news and keep you up to date on what's happening in the world out there with this guy right here. He's in charge: Steve Gibson. Hello, Steve. I'm in charge. You are, that's right, large and in charge. Just don't ask my wife if she thinks I'm in charge.

0:01:12 - Steve Gibson
So, no, we have today something for everyone. We're going to take a deep dive into this disastrous OpenSSH problem, which really does mean everybody needs to patch their instances of OpenSSH if it's exposed to the Internet, and I'll explain why. And everyone's going to come away with tinnitus. I mean, really, this one will wind up your propeller beanies.

But we also are going to spend most of our time, like the latter half of the podcast, looking at the politics of the whole question of certificate authority self-management, because one of the oldest original certificate authorities in the world, Entrust, has... well, they've gotten themselves in so much trouble that they're no longer going to be trusted by the industry's browsers. So basically, all of their SSL/TLS certificate business is gone in a few months. And this also gives us an occasion to really look at the behind-the-scenes mechanisms by which this can happen. Of course, we've covered CAs falling from grace in the past, but this is a biggie. Also, someone just moved 50 Bitcoin minted back in the Satoshi era. Sadly, it wasn't me, but who was it?

Well, that's interesting. Yeah. Also, how are things going with our intrepid Voyager 1 spacecraft? What features have I just removed from GRC's email system? And, by the way, just shy of 5,000 people have email from me from, like, an hour ago. Nice. And what embarrassingly affordable commercial emailing system am I now prepared to recommend without reservation? Which is what I'm using, and I could not be more impressed with it or its author. Who's a she, and not a he, whom I mistakenly referred to as a he last week? What's recently been happening with Syncthing, and what can you do about it? Why do I use DNS for freeware release management, and how? And then we're going to spend the rest of our time taking a look at the title of today's podcast, 981, for this July 2nd: The End of Entrust Trust. Wow. So I think a really great episode of Security Now is now yours for the taking.

0:04:25 - Leo Laporte
Well, it will be in a moment. I mean, as soon as I get through my ad, it'll be yours for the taking. I'm going to keep it here for a little bit. Wow, very interesting. I can't wait to hear about all of these things. This is going to be a very geeky episode. I like that. I always enjoy it when your shows are a little bit on the propeller-head side. By the way, welcome to a brand new sponsor. We're really thrilled to have BigID on the Security Now show.

They're the leading DSPM solution: Data Security Posture Management, DSPM. If you're an enterprise, you certainly know what this is, or you should, anyway. But they do DSPM differently. DSPM centers around risk management and how organizations like yours need to assess, understand, identify and then, of course, remediate data security risks across their data. And there are lots of reasons to do this, but let me tell you why you need to do it with BigID. BigID seamlessly integrates with your existing tech stack, so you don't have to abandon anything, right? It allows you to coordinate all your security and remediation workflows with BigID. And let me tell you, some of their clients are as big as you can get. I don't know if I can give you some names; if you go to the website, you'll see. BigID will let you uncover dark data, identify and manage risk, remediate the way you want to remediate, scale your data security strategy, and take action on data risks, whether it's annotating, deleting, quarantining or more, based on the data, all while maintaining a full audit trail. They work with ServiceNow. They work with Palo Alto Networks, with Microsoft, with Google, with AWS. They work with everybody. With BigID's advanced AI models, and this is kind of new and it's very cool, you can reduce risk, accelerate time to insight, and gain visibility and control over all your data.

Now, I mentioned that they have some pretty big customers, somebody maybe, I don't know, like the United States Army. Imagine how much data the US Army has, and in many different data stores, a completely heterogeneous environment. BigID equipped the US Army to illuminate their dark data, to accelerate their cloud migration, to minimize redundancy and to automate data retention. The US Army Training and Doctrine Command, quote here, let me read this for you.

The first wow moment with BigID came with just being able to have that single interface that inventories a variety of data holdings, including structured and unstructured data, across emails, zip files, SharePoint, databases and more. To see that mass and to be able to correlate across all of those is completely novel. The US Army Training and Doctrine Command said, I've never seen a capability that brings this together like BigID does. Big enough for the Army? It's going to be big enough for you.

CNBC recognized BigID as one of the top 25 startups for the enterprise. They were named to the Inc. 5000 and the Deloitte 500 two years in a row. They're the leading modern data security vendor in the market today. Aren't you glad I brought this up? You need to get to know these guys. The publisher of Cyber Defense Magazine, here's another testimonial, says BigID embodies three major features we judges look for to become winners: one, understanding tomorrow's threats today.

Two, providing a cost-effective solution. And three, innovating in unexpected ways that can help mitigate cyber risk and get one step ahead of the next breach. You need this. Start protecting your sensitive data wherever your data lives at bigid.com/securitynow. Of course, you can go there and get a free demo to see how BigID can help your organization reduce data risk and accelerate the adoption of generative AI. By the way, one of the things they do, I talked to them the other day, is let you use AI without exfiltrating important information, and protect you against the privacy risk of AI without stopping you from using AI. That is fantastic. That by itself is worth calling BigID.

bigid.com/securitynow. They also have some really useful white papers and reports. There's a new free report that provides valuable insights and key trends on AI adoption challenges and the overall impact of generative AI across organizations. That's just one of many, though, so I really want you to go to bigid.com/securitynow. If you've got a lot of data, and the Army had a lot of data, you need BigID: bigid.com/securitynow. Reduce risk, protect that sensitive data and accelerate AI adoption. It can all be done at the same time. bigid.com/securitynow. Brand new sponsor. We want to welcome them to the show. Had a great conversation with them the other day. Really impressive, the stuff they do. All right, let's get back to Steve Gibson and your first topic of the day.

0:09:45 - Steve Gibson
Well, our picture of the day.

0:09:47 - Leo Laporte
Well, our picture of the week.

0:09:49 - Steve Gibson
Oh yes, I gave this one the caption. Perhaps we deserve to be taken over by the machines.

0:09:56 - Leo Laporte
Oh, dear. Because... oh boy, oh boy, oh boy, I don't know. Describe this for us, will you?

0:10:04 - Steve Gibson
So we have a close-up photo of a standard intersection traffic light, and permanently mounted on the horizontal member which holds the red, yellow and green lights is a sign that's got the left-turn arrow with the big red circle and slash through it, clearly indicating that if a police officer is watching you and you turn left, you'll be seeing some flashing lights behind your car before long. Now the problem here, the reason I'm thinking, okay, maybe machines will be better at this than we are, is that the signal has a left green arrow illuminated, which means, you know, turn left, turn left here.

So I'm not sure what you do. This is one of those things where the automated driving software, imagine it in an electric vehicle, comes here and just quits. It just shuts down and says, okay, I don't know what to do. I'm seeing that I can't turn left, and the signal is saying I must. And not only that, but it's my turn. So I give up. Anyway, frankly, I don't know what a human would do if we came to this. It's like, go straight.

0:11:42 - Leo Laporte
That's what they do. I would do anything but turn left. Wow.

0:11:47 - Steve Gibson
Okay, so the big news of the week we're going to start with, because we're going to talk about industry politics and management, like self-management of the whole public key certificate mess, at the end of the show. But if you survive this first piece, you'll be ready for something as tame as politics. Everyone's buzzing at the moment about this regression flaw. It's a regression because it was fixed in OpenSSH back in 2006, and in a later update it came back; thus there was a regression to an earlier bad problem. It was found by Qualys in, you know, OpenSSH, a widely used and inherently publicly exposed service, and in fact, when I say widely used, we're talking 14 million vulnerable, publicly exposed servers identified by Shodan and Censys. So the developers of OpenSSH have historically been extremely careful, and thank God, because OpenSSH being vulnerable, that would be a big problem. In fact, get a load of this: this is the first vulnerability to be discovered in nearly 20 years. That's an astonishing track record for a chunk of software. But nevertheless, when you hear that OpenSSH has an unauthenticated remote code execution vulnerability that grants its exploiter full root access to the system, with the ability to create a root-level remote shell, affects the default configuration and does not require any interaction from the user over on the server end, that ought to get anyone's attention. Okay, so we have CVE-2024-6387.

So here's what its discoverer, Qualys, had to say. They wrote: the Qualys Threat Research Unit, the TRU, discovered this unauthenticated remote code execution vulnerability in OpenSSH's server, which is SSHD, because, you know, it's a daemon, in glibc-based Linux systems. This bug marks the first OpenSSH vulnerability in nearly two decades, an unauthenticated remote code execution that grants full root access. It affects the default configuration and does not require user interaction, posing a significant exploit risk. In Qualys TRU's analysis, we identified that this vulnerability is a regression of a previously patched vulnerability, CVE-2006-5051, reported, of course, in 2006. A regression in this context means that a flaw, once fixed, has reappeared in a subsequent software release, typically due to changes or updates that inadvertently reintroduce the issue. This incident highlights the crucial role of thorough regression testing to prevent the reintroduction of known vulnerabilities into the environment. This regression was introduced in October of 2020 with OpenSSH 8.5p1. So it's been there for four years, and what this means is any OpenSSH from 8.5p1 on, thus for the last four years, is vulnerable.

Now they say: OpenSSH is a suite of secure network utilities based on the SSH protocol that are essential for secure communication over unsecured networks. It provides robust encryption, secure file transfers and remote server management. OpenSSH is widely used on Unix-like systems, including macOS and Linux, and it supports various encryption technologies and enforces robust access controls. Despite the recent vulnerability, OpenSSH maintains a strong security record, exemplifying a defense-in-depth approach, and it remains a critical tool for maintaining network communication confidentiality and integrity worldwide.

Okay, so what we're dealing with in this instance is a very subtle and very tight race condition between multiple threads of execution. Remember that not long ago we spent some time looking at race conditions closely. Back then I used the example of two threads that both wanted to test and conditionally increment the same variable. A race condition fault could occur if one thread first read the variable and tested it, but before it could write back the updated value, it was preempted. Then another thread came along and changed that shared value without the first thread being aware. Then, when that first thread had its execution resumed, it would place its updated value back into the variable, destroying whatever that second thread had done. That's a bug, and just something as simple as that can lead to the loss of, you know, lots. Okay, so today, as it happens, we're going to see a real-world example of exactly this sort of problem actually occurring.
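To make that shared-variable example concrete, here is a minimal C sketch, purely illustrative and not anything from OpenSSH itself, of two threads doing an unsynchronized read-test-write on a shared counter, plus the mutex version that closes the window:

    #include <pthread.h>
    #include <stdio.h>

    static long counter = 0;                                 /* shared state */
    static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

    /* Racy: read, test and write-back are three separate steps, so a thread
       can be preempted in the middle and silently overwrite another thread's
       update, exactly the scenario described above. */
    static void *racy_worker(void *arg)
    {
        (void)arg;
        for (int i = 0; i < 1000000; i++) {
            long v = counter;          /* read                      */
            if (v < 2000000)           /* test                      */
                counter = v + 1;       /* write back, possibly late */
        }
        return NULL;
    }

    /* Safe: the mutex makes read-test-write one indivisible operation. */
    static void *safe_worker(void *arg)
    {
        (void)arg;
        for (int i = 0; i < 1000000; i++) {
            pthread_mutex_lock(&lock);
            if (counter < 2000000)
                counter++;
            pthread_mutex_unlock(&lock);
        }
        return NULL;
    }

    int main(void)
    {
        pthread_t a, b;
        pthread_create(&a, NULL, racy_worker, NULL);   /* swap in safe_worker to fix it */
        pthread_create(&b, NULL, racy_worker, NULL);
        pthread_join(a, NULL);
        pthread_join(b, NULL);
        printf("counter = %ld (racy runs usually come up short of 2000000)\n", counter);
        return 0;
    }

Compile with -pthread; the racy version typically loses updates, while the locked version reliably reaches 2,000,000.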

Okay, so first, though, I want to share Qualys' note about OpenSSH in general. In their technical report about this they wrote: OpenSSH is one of the most secure software in the world. This vulnerability is one slip-up in an otherwise near-flawless implementation. Its defense-in-depth design and code are a model and an inspiration, and we thank OpenSSH's developers for their exemplary work. Then they explain: this vulnerability is challenging, they write, to exploit due to its remote race condition nature, requiring multiple attempts for a successful attack. This can cause memory corruption and necessitate overcoming address space layout randomization, ASLR. Advancements in deep learning may significantly increase the exploitation rate, potentially providing attackers with a substantial advantage in leveraging such security flaws.

In our experiments, and I should note that they wrote this, but as we'll see later this is one of three sets of experiments, and this was the least worrisome of the three, they wrote: in our experiments it takes around 10,000 tries on average to win this race condition. So, for example, with 10 connections being accepted every 600 seconds, it takes on average about one week to obtain a remote root shell. On the other hand, you obtain a remote root shell, which is not nothing, and, as it turns out, there are ways to optimize this heavily, which we'll get to. Okay. So, of course, again, they say around 10,000 tries on average to win the race condition. That's statistics, right. It could happen on the first try, or never, or anywhere in between. You know, it's like those 50 Bitcoin I mined back in 2011. I got lucky, and it's still possible to get lucky today, though it's vastly less likely than it was back then.

The great concern is the available inventory, the total inventory of currently vulnerable OpenSSH server instances exposed to the Internet. Within the customer base of users of Qualys' own CSAM 3.0 external attack surface management technology, approximately 700,000 external Internet-facing instances are vulnerable, and they explained that this accounts for around 31% of all Internet-facing instances of OpenSSH in their own global customer base. So, just among their own customers, they know of 700,000 vulnerable external Internet-facing instances. Censys and Shodan see all of those, plus an additional 13-plus million more. Okay, so the way to think about this is that both intensely targeted and diffuse, widespread attacks are going to be highly likely.

If a high-value target is running a vulnerable instance of OpenSSH, once this has been fully weaponized, someone can patiently try and retry in a fully automated fashion, knocking at the door until they get in. And what makes this worse, as we'll see, is that the attack is not an obvious flooding-style attack that might set off other alarms. Its nature requires a great deal of waiting. This is why the 10 connections over 600 seconds that Qualys mentioned matters. The way it actually works is that each attack attempt requires a 10-minute timeout, but since 10 can be simultaneously overlapped and running at once against a single target, that brings the average rate down to one completed attack attempt per minute. So on average you're getting one new connection attempted per minute, each of them patiently, quietly knocking on the door until it opens for them. And note that what this means is that a single attacker can be, and almost certainly will be, simultaneously spraying a massive number of overlapping connection attempts across the Internet. It would make no sense for a single attacking machine to just sit around waiting 10 minutes for a single connection to time out. Rather, attackers will be launching as many new attempts at as many different targets as they can during the 10 minutes they must wait to see whether a single connection attempt succeeded on any one machine. So to that end, Qualys wrote: Qualys has developed a working exploit for the regression vulnerability. As part of the disclosure process, we successfully demonstrated the exploit to the OpenSSH team to assist with their understanding and remediation efforts. We do not release our exploits, as we must allow time for patches to be applied. However, even though the exploit is complex, we believe that other independent researchers will be able to replicate our results.

Okay, and then, indeed, they detail exactly where the problem lies. I'm going to share two dense paragraphs of techiness, then I'll pause to clarify what they've said. So they wrote: we discovered a vulnerability, a signal handler race condition, in OpenSSH's server, SSHD. If a client does not authenticate within the login grace time, which is 120 seconds recently, 600 seconds in older OpenSSH versions, then SSHD's SIGALRM handler is called asynchronously. That's key: asynchronously, they said. But this signal handler calls various functions that are not async-signal-safe. For example, it calls syslog to log the fact that somebody never authenticated and it's going to hang up on them. They said this race condition affects SSHD in its default configuration. This vulnerability is exploitable remotely on glibc-based Linux systems, where syslog itself calls async-signal-unsafe functions like malloc and free, which allocate and free dynamically allocated memory. They said an unauthenticated remote code execution as root can result, because it affects SSHD's privileged code, which is not sandboxed and runs with full privileges. We've not investigated any other libc or operating system, but OpenBSD is notably not vulnerable, because its SIGALRM handler calls syslog_r, an async-signal-safe version of syslog that was invented by OpenBSD back in 2001.
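Here is a simplified C sketch of the async-signal-safety issue being described, again just an illustration and not OpenSSH's actual code: the unsafe handler calls functions that may grab the very locks the interrupted code was holding, while the safe pattern only sets a flag and defers the real work to the main loop.

    #include <signal.h>
    #include <stdlib.h>
    #include <string.h>
    #include <syslog.h>
    #include <unistd.h>

    /* UNSAFE: syslog() and exit() are not async-signal-safe; if the alarm fires
       while the main code is inside malloc()/free(), re-entering the allocator
       or its locks from here can corrupt the heap -- the root of this bug class. */
    static void alarm_handler_unsafe(int sig)
    {
        (void)sig;
        syslog(LOG_INFO, "login grace time exceeded");
        exit(1);
    }

    /* SAFE: touch only a volatile sig_atomic_t flag (or use async-signal-safe
       calls such as write()); the main loop does the logging later, outside
       signal context. */
    static volatile sig_atomic_t grace_expired = 0;

    static void alarm_handler_safe(int sig)
    {
        (void)sig;
        grace_expired = 1;
    }

    int main(void)
    {
        (void)alarm_handler_unsafe;            /* kept only for comparison          */
        signal(SIGALRM, alarm_handler_safe);   /* installing the unsafe one instead
                                                  recreates the hazardous pattern   */
        alarm(2);                              /* simulate a 2-second grace time    */

        while (!grace_expired) {               /* "normal" work: allocate and free  */
            char *buf = malloc(128);
            if (buf) { memset(buf, 0, 128); free(buf); }
        }

        syslog(LOG_INFO, "grace time expired; dropping connection");  /* safe here */
        return 0;
    }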

Okay, so what's going on here is that when someone connects to a vulnerable instance of OpenSSH, as part of the connection management, a connection timeout timer is started. That timer was once set to 600 seconds, which is 10 minutes, but in newer builds giving someone 10 minutes to get themselves connected seemed excessive and unnecessary, so it was shortened to 120 seconds, which is two minutes. Unfortunately, at the same time they increased the number of simultaneous connections waiting to complete from 10 to 100, so that really did make things worse. And because the attack inherently needs to anticipate the expiration moment, a shorter expiration allows for faster compromise, since it's the instant of timer expiration when OpenSSH is briefly vulnerable to exploitation. That window of vulnerability is what the attacker anticipates and exploits. So the more often you get those little windows, the worse off you are.

So, upon a new connection, the timer is started to give the new connection ample but limited time to get itself authenticated and going. And if the incoming connection just sits there doing nothing, or trying and failing to properly authenticate, regardless of what's going on and why, when that new-connection timeout timer expires, OpenSSH drops that still-pending connection. Right. All that makes sense. That's the way you'd want things to operate. Unfortunately, as it's doing that, it goes off to do some other things, like make an entry in the system log about this expired connection attempt. So if the wily attacker was doing something on purpose at the precise instant that the connection expiration timer expires, the race condition can be forced to occur. Wow. Yeah.

0:30:12 - Leo Laporte
Modern-day hacks are so subtle and interesting. Yeah, because all the easy ones are gone.

0:30:19 - Steve Gibson
That's a good point. Yeah, the dumb ones. We're not doing dumb problems anymore.

0:30:24 - Leo Laporte
Well, you can see how this could have been a regression, too. That would make it easy to reintroduce.

0:30:29 - Steve Gibson
Yeah, there was actually an #ifdef that got dropped in an update, and that allowed some old code to come back in that had been deliberately def'ed out. So, just as with the two threads in my original shared-variable example, the timer's expiration asynchronously, and that's the key, asynchronously, interrupts. That means it doesn't ask for permission to interrupt; it just yanks control away from OpenSSH in order to start the process of tearing down this connection that never authenticated itself. So if the attacker was able to time it right, so that OpenSSH was actively doing something involving memory allocation at the exact time of the timer's expiration, the memory allocations that would then be performed by the timer-driven logging action would conflict and collide with what the attackers were causing OpenSSH to be doing, and that could result in the attacker obtaining remote code execution under full root privilege and actually getting themselves a remote shell onto that machine. With the massive inventory of 14 million exploitable OpenSSH servers currently available, this is going to be something bad guys will not be able to resist, and unfortunately, as we know, with so many of those forgotten, unattended, not being quickly updated, whatever, there's just no way that attackers will not be working overtime to work out the details of this attack for themselves and get busy. Qualys explained: to exploit this vulnerability remotely (and to the best of our knowledge, the original vulnerability, CVE-2006-5051, which I initially mentioned, was never successfully exploited before), we immediately face three problems. From a theoretical point of view, we must find a useful code path that, if interrupted at the right time by SIGALRM, leaves SSHD in an inconsistent state, and we must then exploit this inconsistent state inside the SIGALRM handler. From a practical point of view, we must find a way to reach this useful code path in SSHD and maximize our chances of interrupting it at the right time. And then, from a timing point of view, we must find a way to further increase our chances of interrupting this useful code path at the right time remotely. So: theoretical, practical and timing.
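Looping back for a moment to the #ifdef regression mentioned at the top of that explanation, here is a purely illustrative C sketch, with made-up names rather than the actual OpenSSH source, of how a refactor that drops a conditional-compilation guard can quietly put an unsafe call back on a signal path while behaving identically in casual testing:

    #include <signal.h>
    #include <stdio.h>
    #include <unistd.h>

    /* Hypothetical stand-in for syslog(); assume it is NOT async-signal-safe. */
    static void log_event(const char *msg)
    {
        printf("log: %s\n", msg);              /* stdio takes locks internally */
    }

    /* The shape of the old fix: the risky logging is compiled out of the
       signal path unless a build flag explicitly says it's safe to keep. */
    static void handler_fixed(int sig)
    {
        (void)sig;
    #ifdef LOG_IS_SIGNAL_SAFE
        log_event("grace time expired");
    #endif
        write(STDERR_FILENO, "timeout\n", 8);  /* write() IS async-signal-safe */
    }

    /* The shape of the regression: a later rewrite drops the #ifdef, so the
       unsafe call silently returns to the handler. Output looks the same,
       which is why regression tests for this sort of thing matter. */
    static void handler_regressed(int sig)
    {
        (void)sig;
        log_event("grace time expired");
        write(STDERR_FILENO, "timeout\n", 8);
    }

    int main(void)
    {
        (void)handler_fixed;                   /* shown for comparison only */
        signal(SIGALRM, handler_regressed);
        alarm(1);
        pause();                               /* wait for the alarm to fire */
        return 0;
    }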

They said, to focus on these three problems without having to immediately fight against all the modern operating system protections, in particular ASLR and NX, which is, you know, execution protection, no-execute, we decided to exploit old OpenSSH versions first, on an x86 system, and then, based on this experience, move to recent versions. So their first experiment was with Debian, the old "woody" version, which they show as Debian 1:3.4p1-1.woody.3. They said this is the first Debian version that has privilege separation enabled by default and that is patched against all the critical vulnerabilities of that era. They wrote, to remotely exploit this version:

We interrupt a call to free, where memory is being released back to the system, with SIGALRM, inside SSHD's public key parsing code. Now that's significant, because it means the attacker is causing OpenSSH to do some public key parsing, probably presenting it with a bogus public key, saying here's my key, use this to authenticate me. So unfortunately, the bad guys have been given lots of clues here as a consequence of this disclosure; they know exactly where to look and what to do. So they said: we interrupt a call to free with SIGALRM while SSHD is in its public key parsing code. That leaves the memory heap in an inconsistent state, and we exploit this inconsistent state during another call to free inside the SIGALRM handler, probably in syslog.

They said, in our experiments it takes around 10,000 tries on average to win this race condition; in other words, with 10 connections, which is the MaxStartups setting, accepted per 600 seconds, which is the login grace time, it takes around one week on average to obtain a remote root shell. But again, even if you couldn't multiplex this and you couldn't be attacking a bazillion servers at once, just one attacker camped out on some highly valuable OpenSSH server whose owner thinks it's secure, because, hey, we use public key certificates, you're never going to guess our password, is just sitting there ticking away, knocking patiently at the door an average of once a minute, because it can be doing it 10 times over 10 minutes, and eventually the door opens. Ten thousand tries is hysterical. Right, but very patiently. You have to be patient, but still. Yep.

0:37:26 - Leo Laporte
That's how subtle this race condition is, right? Yes, yes.

0:37:31 - Steve Gibson
Well, because it also involves the vagaries of Internet timing, right, because you're remote. I mean, the good news is, the further away you are... if you're in Russia with a flaky network and lots of packet delay, or you're in China and there are so many hackers that your packets are just competing with all the other attackers', then that's going to introduce a lot more variation. But still, again, this is where patience pays off. You end up with a remote shell with root privilege on the system that was running that server. So the point is, yeah, around 10,000 tries, but a massive payoff on the other side. Okay.

Then they said, and I won't go through this in detail: on a newer Debian build, where the login grace time had been reduced from its 600 seconds down to 120, in other words from 10 minutes to two minutes, it still took them around 10,000 attempts. But since they only needed to wait two minutes for timer expiration rather than 10 minutes, and on that system they were still only able to do 10 at once, they were able to obtain a remote shell on that newer build in one to two days, down from around a week. And finally, on the most current stable Debian version, 12.5.0, due to the fact that it has reduced the login grace time to 120 seconds but also increased the maximum number of simultaneous login attempts, that so-called MaxStartups value, from 10 to 100, they wrote: in our experiments it takes around 10,000 tries on average to win this race condition, so on this machine, three to four hours with 100 connections accepted per 120 seconds. Ultimately, it takes around six to eight hours on average to obtain a remote root shell, because we can only guess glibc's address correctly half of the time, due to ASLR.
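As a rough sanity check of those numbers, and this is back-of-the-envelope arithmetic, not Qualys' own wording: 100 connections per 120 seconds is about 0.83 attempts per second, so 10,000 attempts take roughly 10,000 divided by 0.83, or about 12,000 seconds, a little over three hours, which matches their three-to-four-hour figure for winning the race. And since ASLR means the glibc address is guessed correctly only about half the time, you roughly double that, which lands right in their quoted six-to-eight-hour range for a root shell.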

And they finish by explaining: this research is still a work in progress. We've targeted virtual machines only, not bare-metal servers, on a mostly stable network link with around 10 milliseconds of packet jitter. We are convinced that various aspects of our exploits can be greatly improved, and we've started to work on AMD64, you know, the 64-bit world, which is much harder because of the stronger ASLR. That's, of course, address space layout randomization, and the reason 64 bits makes things much worse is that you have many more high bits to allow for more randomized places to locate the code.

We noticed a bug report in OpenSSH's public Bugzilla regarding a deadlock in SSHD's SIGALRM handler. We therefore decided to contact OpenSSH's developers immediately to let them know that this deadlock is caused by an exploitable vulnerability. We put our AMD64 work on hold and we started to write this advisory. Okay, so, yikes, we have another new and potentially devastating problem. Everyone running a maintained Linux that's exposing an OpenSSH server to the public Internet, and potentially even major corporations using OpenSSH internally (because, you know, can you trust all your employees?), needs to update their builds to incorporate a fix for this immediately. Until that's done, and unless you must have SSH running, it might be worth blocking its port and shutting it down completely.
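For anyone who truly can't patch right away, one widely circulated interim mitigation, offered here as a hedged sketch rather than official guidance, is to disable the login-grace timeout in sshd_config, which removes the SIGALRM path this exploit depends on, at the cost of letting unauthenticated connections linger:

    # /etc/ssh/sshd_config  (restart sshd after editing)
    # Setting LoginGraceTime to 0 disables the grace-time alarm entirely.
    # Trade-off: idle unauthenticated connections can now pile up against
    # the MaxStartups limit, so this swaps the RCE risk for a DoS risk.
    LoginGraceTime 0

The real fix, of course, is updating to a patched OpenSSH build (9.8p1 at the time of this episode) or your distribution's backported package, which is what is being urged here.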

0:42:13 - Leo Laporte
I think I have SSH running on my Ubiquiti system and my Synology NAS?

0:42:20 - Steve Gibson
Yes, probably. And how about on the Synology box? Yeah, yeah yeah.

0:42:29 - Leo Laporte
So I better check on both of those and my server too, come to think of it.

0:42:34 - Steve Gibson
So yeah, I mean, no, this is a big deal. And as I said at the top, both profiles apply: a high-value target could be located, and notice that nothing prevents 50 different people from trying to get into the same high-value target at once. So being high-value is a vulnerability, and just being present is one, because the bad guys are going to be spraying the Internet, just looking for opportunistic access. If nothing else, even if they don't care what's going on on your server, they want to put a crypto miner there or stick a botnet node there. I mean, they're going to want in on these machines, where now they have a way. This gives them a way into anything that has been brought up with code from the last four years, since 2020, when this regression occurred. And they also know some systems will be getting patched, so there's a rush to weaponize this thing and get into the servers they can.

0:43:56 - Leo Laporte
The way you describe it, it sounds so difficult to implement. But they publish proofs of concept which a script kiddie can implement, right? Yep, yeah.

0:44:08 - Steve Gibson
Yep, it'll end up being, you know, productized, right, and you don't need to know what you're doing. In the same way that we saw with that Windows Wi-Fi bug last week, some guy was offering it for five grand. You can just buy it right now. Wow. Oh well, what a world. Hey, it keeps this podcast going, thank goodness. That's right. We appreciate it.

0:44:33 - Leo Laporte
Keep up the good work, bad guys; you're keeping us busy.

You want me to do an ad now, or do you want to keep going? Up to you. Perfect timing, okay. You're watching Security Now with our genius at work, Steve Gibson. I wouldn't be able to talk about this stuff without him, I tell you. He's the key. Security Now is brought to you by DeleteMe. One of the things we've learned by doing this show over the many years that we have is that it's dangerous out there, right? And we've also learned to pay attention when we get suspicious emails or text messages. It happened to us. Fortunately, our staff listens to this show, and they knew. I mentioned this before: the CEO sent out a text message, it happens all the time in every company, to her underlings saying, hey, quick, I'm in a meeting, I need Amazon gift cards. Just buy a bunch and send them to this address. Fortunately, we have smart people here. I hope you have smart people who work for you. But let me tell you the thing that was the eye-opener here. We didn't lose any money, but the thing that was a clear eye-opener is that they know a lot about us. And how do they know a lot about us? Because of information brokers that collect this information and have no scruples about who they sell it to, whether it's an advertiser, a foreign government or the hacker down the road. That's why you need DeleteMe.

Have you ever searched for your name online and didn't like how much of your personal information was available? If you haven't done it, don't; I can't recommend it. But maintaining privacy is not just a concern for you. It's a concern for your business, and, you know, it's even a concern for your family. DeleteMe has family plans now; an adult has to administer it. With a family plan, you can ensure everyone in the family feels safe online, and, of course, doing this, getting your name off of these databases, reduces the risk from identity theft, cybersecurity threats, harassment and more. We used it for Lisa for years, and it really made a big difference getting that stuff off. What happens is you go there and, if you're on the family plan, by the way, as the administrator you'll have different information sheets and different requirements for each member of your family, tailored to them, and easy-to-use controls so you can manage privacy settings for everybody.

DeleteMe's experts will find and remove your information from hundreds of data brokers. Now, the law requires these data brokers to have those forms that say remove my data. So yeah, you could do it yourself. But here's the problem: the data brokers just start building your dossier all over again the minute you leave. You've got to keep going back, and that's what DeleteMe does. They'll continue to scan and remove your information regularly, and it is everything. I mean, it's property records, it's social media photos, emails, addresses, relatives, phone numbers, income information. All that stuff's online, and you know, they say, well, we just collect this for targeted advertising. Yeah, that and anybody else who wants to get it.

Protect yourself, reclaim your privacy. Visit joindeleteme.com/twit. Use the code TWIT for 20% off. That's joindeleteme.com/twit, and use the offer code TWIT for 20% off. You owe it to yourself, you owe it to your family, you owe it to your company. joindeleteme.com/twit. We thank them so much for the job they did to protect Lisa and for the job they're going to do to protect you. Now back to Steve Gibson, who is protecting us all week long.

0:48:08 - Steve Gibson
So a listener of ours, James Tutton, shot me a note asking whether I may have found my 50 Bitcoin when he saw an article about 50 Bitcoin having been moved from a long-dormant wallet. Now, yeah, I wish that was my 50 Bitcoin, but I've satisfied myself that they're long gone. But I thought our listeners would enjoy hearing about the general topic of ancient Bitcoin movement. The article, which appeared last Thursday at CryptoNews.com, was titled Satoshi-Era Bitcoin Wallet Awakens.

0:48:47 - Leo Laporte
Wow, when did you make your 50 bitcoins?

0:48:51 - Steve Gibson
Uh, it was early in 2011.

0:48:56 - Leo Laporte
Okay, so it was one year after this.

0:48:58 - Steve Gibson
Oh, that's interesting. It was early, but it was not this early. Okay. So they said: Satoshi-era Bitcoin wallet awakens, moves 50 Bitcoin to Binance. And the CryptoNews piece starts out saying a Satoshi-era Bitcoin wallet address, dormant for 14 years, transferred 50 Bitcoin, approximately 3.05 million US dollars, to the Binance exchange on June 27th, last Thursday. The wallet is believed to belong to a Bitcoin miner who likely earned the 50 Bitcoin as mining rewards in 2010. This must make you cry. I know, believe me.

It's like, oh gosh, it hurts. Yeah. They said: on-chain analytics firm Lookonchain revealed the Bitcoin wallet's origins. It's linked to a miner who received 50 Bitcoin as a mining reward on July 14th, 2010, just months after the Bitcoin network launched. And I'll note that my podcast, which Tom Merritt and I did, which was titled Bitcoin Cryptocurrency, where I explained the operation of the entire Bitcoin cryptocurrency system, how the blockchain works and all that, aired the following February 9th of 2011. Wow, we were really early on that. We were on the ball. So, while it's true that solving the Bitcoin hash problem way back then resulted in an award of 50 Bitcoin, my 50 were different from the 50 that were recently moved. The article continues: back in 2010, one Bitcoin, and this explains why I formatted my hard drive, was valued at a mere $0.003, or 0.3 cents.

0:51:05 - Leo Laporte
So I mean, it was all just, you know, a nickel's worth of Bitcoin. It wasn't worth worrying about. Yeah.

0:51:11 - Steve Gibson
Well, and remember the faucet. The Bitcoin faucet was dripping out Bitcoin that anybody could go get for free.

Yeah, right. So they said this price was not surpassed until February of 2011, reaching $30 by June of that year. Today, Bitcoin trades around $61,000, which they say is a 17% drop from its all-time high in mid-March of this year of $73,750 per coin. Wow. Satoshi Bitcoin wallets, they write, which were created during Bitcoin's infancy, from 2009 to 2011, hold historical significance. This period marked the time when Bitcoin's enigmatic creator, Satoshi Nakamoto, was still an active presence in the cryptocurrency community. The wallets' historical value, coupled with the limited transactions during that era, makes any movement of funds from them a notable event. In 2010, Bitcoin mining was accessible to anyone with a personal computer, yielding a reward of 50 Bitcoin. This accessibility stands in stark contrast to the current Bitcoin mining environment. Four halving events, you know, as in cut in half, have since reduced the block reward to a mere 3.125 Bitcoin. On the other hand, those Bitcoins are worth 60 grand each, so not so mere.

0:53:03 - Leo Laporte
It gets harder to mine a block, though, too. The math is much harder.

0:53:08 - Steve Gibson
It's virtually impossible. These halvings, occurring roughly every four years, are integral to Bitcoin's deflationary model. This recent transfer from a Satoshi Bitcoin wallet is not an isolated incident. It joins a growing list of dormant wallets springing back to life. And, of course, we know why they're springing: it's because Bitcoin is jumping. So, you know, multiplying the number of Bitcoins, which were easily earned back then, by 60,000 will definitely put a spring in one's step. So they wrote: in March, a similar event occurred. A miner transferred 50 Bitcoin, earned from mining on April 25th, 2010, to Coinbase after 14 years of wallet inactivity.

The reactivation of these wallets often stirs interest and speculation within the cryptocurrency community. Many are curious about the intentions behind these moves, whether they signal a change in market dynamics or simply represent a longtime holder finally deciding to liquidate their assets. Bitcoin whales, individuals or entities holding vast quantities of Bitcoin, possess the capacity to influence the cryptocurrency market through their sheer trading volume and holdings. Two such whale wallets, dormant for a decade, sprang to life on May 12th, 2024, transferring a combined 1,000 Bitcoin. On September 12th and 13th of 2013, when Bitcoin was trading at $124, each of these two wallets received 500 Bitcoin, which was valued at $62,000 back then. In another noteworthy event, on May 6th, a Bitcoin whale moved $43.893 million worth of Bitcoin to two wallet addresses. This whale had remained inactive for over 10 years, having initially received the Bitcoin on January 12th, 2014, when it traded at $917.

0:55:41 - Leo Laporte
This is why it's hard, though, because had you had those 50 Bitcoin, when it got to be worth, say, $100,000, you would have, for sure, sold it.

0:55:51 - Steve Gibson
You would have said, I'll take the money, that's great, I'm happy, right. And that's why last week's podcast, about when is a bad pseudo-random number generator a good thing, applies: it kept that guy from decrypting his wallet and selling his Bitcoin until, you know, far later, when it became worth paying some hackers. We don't know what percentage they took, but it would have been nothing if this guy hadn't had them crack his password.

0:56:20 - Leo Laporte
Many have offered to crack my password. All have failed, because it's probably a good password. If it's a random password and it's done well, which it was, you know, using the Bitcoin wallet, it's virtually impossible to brute-force.

0:56:43 - Steve Gibson
So, salted and memory-hard and slow to do, and so forth.

0:56:47 - Leo Laporte
But someday, you know, I figure this is just a forced savings account. Someday those eight Bitcoin will be mine. I don't know, I'll guess the password, because I must have come up with something I know. Why wouldn't I have recorded it, right? I'm sure.

0:57:02 - Steve Gibson
I know mine.

0:57:04 - Leo Laporte
Yeah, that's how it was back then.

0:57:05 - Steve Gibson
Yeah, yes, back then we were not fully up to speed on generating passwords at random and having password managers hold on to them, right, so I could guess my own password.

0:57:18 - Leo Laporte
I've tried all of the dopey passwords I used by rote back in the day and none of those worked. So maybe I was smart.

0:57:25 - Steve Gibson
Did you rule out monkey123? I did immediately Okay. It's the first one I tried.

0:57:33 - Leo Laporte
Oh well, those eight Bitcoin are just going to sit there for a while. It's interesting that 50 is enough to make a news story. That's really amazing.

0:57:40 - Steve Gibson
Yes, and Leo, there is so much Bitcoin that has been lost. I mean, so many people did this, right? I'm not a unique hard-luck case at all, and besides, I'm doing fine. But a lot of people... and remember when we had some of our listeners come up to us in Boston, when we were there for the Boston event? There was one guy in particular who said, thank you for that podcast. I retired a long time ago.

Thanks to listening to the Security Now podcast. What you said made a lot of sense. I got going, I mined a bunch of Bitcoin and I don't have to work anymore for the rest of my life.

0:58:26 - Leo Laporte
That's just good luck, good fortune.

0:58:28 - Steve Gibson
Well, yeah, nice. Okay. So, on the topic of astonishing achievements by mankind, and not cracking your password, Leo, I wanted to share a brief update on the status of what has now become the Voyager 1 interstellar probe. NASA's JPL wrote: NASA's Voyager 1 spacecraft is conducting normal science operations for the first time following a technical issue that arose back in November of 2023. The team partially resolved the issue in April, when they prompted the spacecraft to begin returning engineering data, which includes information about the health and status of the spacecraft. On May 19th, the mission team executed the second step of that repair process and beamed a command to the spacecraft to begin returning science data. Two of the four science instruments returned to their normal operating modes immediately. The two other instruments required some additional work, but now all four are returning usable science data.

The four instruments study plasma waves, magnetic fields and particles. Voyager 1 and Voyager 2 are the only spacecraft to directly sample interstellar space, which is the region outside the heliosphere, the protective bubble of magnetic fields and solar wind created by the sun. While Voyager 1 is back to conducting science, additional minor work is needed to clean up the effects of the issue. Among other tasks, engineers will resynchronize timekeeping software in the spacecraft's three onboard computers so they can execute commands at the right time. The team will also perform maintenance on the digital tape recorder, which records some data for the plasma wave instrument that is sent to Earth twice per year. Most of the Voyagers' science data is beamed directly to Earth, not recorded on board. Voyager 1 is now more than 15 billion miles, 24 billion kilometers, from Earth, and Voyager 2 is more than 12 billion miles, 20 billion kilometers, from us. The probes will mark 47 years of operations later this year. They're NASA's longest-running and most distant spacecraft.

1:01:10 - Leo Laporte
We were young men at the time. Yes, Leo. Just children.

1:01:17 - Steve Gibson
We thought we were going to be able to understand all this one day, and you know, there's more to understand now than there was then.

1:01:27 - Leo Laporte
Well, that's fun, you know. It's not like everything's a solved problem anymore.

1:01:32 - Steve Gibson
Nope, we don't have to worry about that. Speaking of solved problems, everything is going well with GRC's email system, and I'm nearly finished with my work on it. The work I'm finishing up is automation for sending the weekly Security Now email, so I'm able to do it before the podcast while being prevented from making any dumb errors, like forgetting to update the names of links and so forth. I'm about a day or two away from being able to declare that that work is finished. And I should mention, just shy of 5,000 listeners already have the email describing today's podcast, with a thumbnail of the show notes that they can click on to get the full-size show notes, a link to the entire show notes text that you and I have, Leo, and then also a bullet-pointed summary of the things we're talking about. So that's all working.

Last week's announcement that I had started sending out weekly podcast summaries generated renewed interest and questions from listeners, both via Twitter and forwarded to me through Sue and Greg, and these were listeners who had apparently been waiting for the news that something was actually being sent before deciding to subscribe to these weekly summary mailings; so now they wanted to know how to do that. All anyone needs to know is that at the top of every page at GRC is a shiny new white envelope labeled Email Subscriptions. Just click that to begin the process. If you follow the instructions presented at each step, a minute or two later you'll be subscribed. And remember that if your desire is not to subscribe to any of the lists, but to be able to bypass social media and send email directly to me, you're welcome to leave all of the subscription checkboxes unchecked when you press the Update Subscriptions button. That will serve to confirm your email address, which then allows you to send feedback email, pictures of the week, suggestions, whatever else you like, directly to me by just writing to securitynow@grc.com. Finally, I wanted to note that the email today's subscribers have already received from me was 100% unmonitored, as I expect all future email will be, so I won't know whether those emails are opened or not.

I've also removed all of the link redirections from GRC's email, so that clicks are also no longer being counted. This makes the mailings completely blind, but it also makes for cleaner and clearer email. Some of our listeners, as I mentioned last week, were objecting to their clients warning them about being tracked, even though I still don't think that's a fair use of a loaded term when the email has been solicited by the user and the notification only comes back to me. I would never have bothered, frankly, to put any of that in if I'd written the system myself from scratch, but it was all built into the bulk mailing system I purchased, and it is so slick, and it has such lovely graphical displays with pie charts and bar charts and flow charts, and it was so much fun to look at, you know, while it was new. Frankly, I didn't anticipate the level of backlash that doing this would produce. But then this is not your average crowd, is it? You know, we're all Security Now listeners.

1:05:24 - Leo Laporte
And, by the way, the average crowd probably knows this, but I will reiterate it: you could go get this PHP program yourself, but the chances are your Internet service provider would immediately block it. You have some sort of special relationship with Level 3 or somebody that allows you to send 5,000 emails out at once. No other Internet service provider would allow that. Well, no consumer ISP, right. Right, right.

1:05:51 - Steve Gibson
So anybody who has, you know, any of our people in corporations who have a regular connection to the Internet, you know, not through Cox or through any of the consumer ISPs... But anyway, the first two mailings I've done so far, which did contain link monitoring, produced interesting feedback. For example, three times more people clicked to view the full-size picture of the week than clicked to view the show notes. Now, in retrospect, that makes sense, right, because most people will be listening to the podcast audio, but they're still curious to see the picture of the week, which, you know, we have fun describing each week. In any event, I'm over it now. No more single-pixel fetches with their attendant email client freak-out, or anything else that might be controversial. What you do with any email you receive from me is entirely up to you.

1:06:57 - Leo Laporte
I'm just grateful for everyone's interest. There's also an issue with those invisible pixels. Most good email clients, certainly all the ones I use, don't ever load them. They know they're there. They don't warn me, I don't get a warning, they just go, yeah.

1:07:14 - Steve Gibson
Oh well, a lot of our listeners do. Apparently they do. That's probably Outlook.

Yeah, but you know, most email clients just go, yeah, sure. Anyway, that's all gone now. One thing I've been wanting to do, and I've been waiting until I knew I could, was to give a shout-out to the emailing system I chose to use. I've been utterly and totally impressed by its design, its complete feature set, its maturity and the author's support of his system, and I have to say I feel somewhat embarrassed over what I've received in return for a one-time purchase payment of $169. This thing is worth far more than that.

Now, because I'm me, I insisted upon writing my own subscription management front end, although, I have to say, this package's author, a Greek guy whose first name is Panos, and I can't even begin to pronounce his last name because it's about 12 inches long, has no idea why I've done my own subscription management front end. He thinks I'm totally nuts, because his system, as delivered, does all of that too. But as Frank Sinatra famously said, I did it my way. I wanted to have it look like the GRC pages that our users interact with. So nuevoMailer, which is spelled N-U-E-V-O-M-A-I-L-E-R, is an open-source, PHP-based email marketing, management and mailing solution. It runs beautifully under Windows, Unix or Linux. To help anyone who might have any need to create an email facility for their organization or their company or whatever, from scratch, or to replace one that you're not happy with, I made it this episode's GRC shortcut of the week, so grc.sc/981 will bounce you over to www.nuevomailer.com.

I've had numerous back-and-forth dialogues with Panos, because I've been needing to customize some of the RESTful APIs which his package publishes. I've actually extended his API for my own needs. For example, a new feature that's present in the email everyone received from me today for the first time provides a direct link back to everyone's own email subscription management page, so you can click it and immediately be looking at all of the lists and add or remove yourself. To do that, I needed to modify some of his code, so I can vouch for the support he provides, and, as I've said, I felt somewhat guilty about paying so little when I've received so much. I mean, this is GRC's email system moving forward, forever. So, you know, I'm aware that telling this podcast's listeners about his work will, I hope, help him. All I can say is that he deserves every penny he makes. There are thousands, literally thousands, of bulk mailing solutions out in the world. This one allows you essentially to roll your own, and I'm very glad I chose it.

1:10:52 - Leo Laporte
You know, what is it, Mailchimp? Or Constant Contact? Mailchimp, yes. Mailchimp or Constant Contact, because they do the mailing, and they've, you know, arranged with whoever's doing their mailing to send out tens of thousands of emails at once. But yeah, most consumer ISPs won't let you mail anything like that at all.

1:11:14 - Steve Gibson
No, no, in fact, they block port 25. Right, which is SMTP.

1:11:18 - Leo Laporte
Basically, he has a very limited set of possible customers, so you should use it if you can. Yeah, absolutely.

1:11:29 - Steve Gibson
Okay, a bit of errata, and then we're going to take our next break. Last week's podcast drew heavily on two articles written by Kim Zetter. It's embarrassing that I've been reading, appreciating and sharing Kim's writing for years but never stopped to wonder whether Kim would associate with the pronoun he or she. Her quite attractive Wikipedia photo strongly suggests that she would opt for she, as will I from now on.

Didn't you call her him last time? I think I must have, because somebody, you know, she said, hey, Gibson, she's a she. Yeah, what are you talking about?

1:12:10 - Leo Laporte
I get the pronouns right these days. I want to hear what you have to say about Syncthing, because I still use it like crazy, and I'm worried now that there's something I should be worried about. But that's, after all, why we listen to the show, isn't it? So let's take a break, and then we'll come back and you can explain what I need to do to keep my Syncthing in sync, or something, with your thing, and do my thing with the sync thing.

This episode of Security Now is brought to you by Panoptica. Panoptica is Cisco's cloud application security solution, and it provides end-to-end lifecycle protection for cloud-native application environments. More and more, we're moving to the cloud these days, and Cisco Panoptica is ready and willing to protect you. It empowers organizations to safeguard everything about their cloud implementations: their APIs, their serverless functions, containers and their Kubernetes environments. Panoptica ensures comprehensive cloud security compliance and monitoring at scale, offering deep visibility, contextual risk assessments and actionable remediation insights for all your cloud assets.

Powered by graph-based technology, Panoptica's Attack Path Engine prioritizes and offers dynamic remediation for vulnerable attack vectors, helping security teams quickly identify and remediate potential risks across cloud infrastructures. A unified cloud-native security platform has a lot of benefits. It minimizes gaps that might arise with multiple solutions, you know, where this one does that much and that one does that much, but then there's a big hole right in the middle, or where you're handed a complex variety of management consoles, none of which really looks like the others. With Cisco's Panoptica, you get centralized management and you can see it all, and, of course, you don't have that problem of fragmented systems causing real issues with your network. Panoptica utilizes advanced attack path analysis, root cause analysis and dynamic remediation techniques to reveal potential risks from an attacker's point of view. This approach identifies new and known risks, emphasizing critical attack paths and their potential impact.

Panoptica provides several key benefits for businesses at any stage of cloud maturity, including advanced CNAPP, multi-cloud compliance, end-to-end visualization, the ability to prioritize with precision and context, dynamic remediation and increased efficiency with reduced overheads. It's everything you need. Visit panoptica.app to learn more. P-A-N-O-P-T-I-C-A, panoptica.app. We thank Cisco and Panoptica for their support of Security Now. On we go with the show, Mr G.

1:15:07 - Steve Gibson
So I wanted to note that while I am still a big fan of SyncThing, lately I have been noticing a great deal of slowdown in its synchronization relay servers. I don't think they used to be so slow. I'm unable to get more than 1.5 to 1.8 megabits of traffic through them. When it's not possible to obtain a direct end-to-end connection between SyncThing endpoints, an external third-party relay server is required to handle their transit traffic. You know, everything is super well encrypted, so that's not the issue. The issue is the performance of this solution. Since this problem had been persisting for me for several weeks, my assumption is that SyncThing's popularity has been growing, and actually we know it has, and is loading down their relay server infrastructure, which after all they just provide for free. No one's paying anything for this.

At one point in the past I had arranged for point-to-point connections between my two locations, but some network reconfiguration had broken that. My daytime work location has a machine that runs 24/7, but I shut down my evening location machine at the end of every evening's work. The trouble was that synchronization to that always-on machine had become so slow that I was needing to leave my evening machine running unattended for several hours after I stopped working on it, waiting for my evening's work to trickle out and be synchronized with the machine I'd be using the next morning. This problem finally became so intolerable that I sat down and punched remote-IP-filtered holes through my firewalls at each endpoint.

Even if pfSense's firewall rules were not able to track public domain names, as they in fact are, the public IPs of our cable modems, for example, change so rarely that even statically opening an incoming port to a specific remote public IP is practical. Once I punched those holes, SyncThing was able to make a direct point-to-point connection once again, and my synchronization is virtually instantaneous. So I just wanted to give a heads-up to anyone who may be seeing the same dramatic slowdown that I was seeing with the use of their relay server infrastructure. You know, it is an amazingly useful free service and, frankly, helping it to establish direct connections between endpoints also helps to keep the relay servers freed up for those who really need them. So that was the issue, Leo. It was just that the use of a third-party relay server had recently ground to a near halt.

1:18:30 - Leo Laporte
Yeah, I haven't noticed it, but you have a much more complicated setup than I do.

1:18:34 - Steve Gibson
Yeah, I've got like double NAT and all kinds of other crazy stuff going on that really makes it a little extra difficult. But for what it's worth, I guess my point is it's worth taking the time. If you are not seeing a direct WAN connection, as they call it in the UI, with the IP of your remote node, and instead you see some mention of relay servers, that's not good, and you probably already know how slow things are going. The point is, it's worth taking the time to resolve that, and then syncing is just instantaneous.
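For listeners who would rather script that check than eyeball the UI, here is a minimal sketch, assuming SyncThing's default local REST API at 127.0.0.1:8384 and an API key copied from the GUI's settings; the endpoint and field names reflect recent SyncThing releases but should be treated as assumptions and verified against your own install.

```python
# Minimal sketch: ask a local SyncThing instance whether each connected peer
# is using a direct connection or a relay. Assumes the default GUI/REST
# address of http://127.0.0.1:8384 and an API key copied from the SyncThing
# GUI (Actions > Settings). Endpoint and field names match recent releases,
# but verify them against your own install.
import json
import urllib.request

API_KEY = "your-api-key-here"   # placeholder: paste your own key
BASE_URL = "http://127.0.0.1:8384"

request = urllib.request.Request(
    f"{BASE_URL}/rest/system/connections",
    headers={"X-API-Key": API_KEY},
)
with urllib.request.urlopen(request) as response:
    data = json.load(response)

for device_id, info in data.get("connections", {}).items():
    if not info.get("connected"):
        continue
    conn_type = info.get("type", "")          # e.g. "tcp-client" vs "relay-client"
    flavor = "RELAY (slow path)" if "relay" in conn_type else "direct"
    print(f"{device_id[:7]}  {flavor:17}  {info.get('address', '?')}")
```

If every connected peer reports a relay type rather than a direct TCP or QUIC connection, that is the same condition the UI would show, and the fix is the firewall work described above.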

Matt St Clair Bishop wrote, saying: Hello, Steve, I've been a listener of Security Now for some years now. However, as I've edged closer to making my own utilities publicly available, my mind has turned to my method of updating them. I remember you saying that you used a simple DNS record to hold the latest edition of each of your public releases, and that the client software inspects that record, it being a very simple and efficient mechanism to flag available updates. Could you elaborate at all if you have a spare section in your podcast? I'm personally using C# and the .NET Framework, as I'm a Windows-only guy, so if you could paint the broad strokes I should be able to Google the C# detail. SpinRite user, loving all your efforts in this field, Matt St Clair Bishop. Okay, so Matt's correct about my use of DNS, and I am pleased with the way that capability has turned out.

Anyone who has the ability to look up the IP address for, for example, validriverel.grc.com will find that it returns 239.0.0.1. This is because ValiDrive is still at its first release. When I release an update to ValiDrive, it will be release number two, and I'll change the IP address of validriverel.grc.com, the "rel" as in release, to 239.0.0.2. Whenever ValiDrive runs, it performs a quick DNS lookup of its own product name with "rel" appended, under grc.com, and verifies that the release number returned in the lower byte of the IP address is not higher than its own current release number. If it is, it will notify its user that a newer release exists.

What's convenient about this? I mean, there are many things about it. You know, there's no massive flood of queries coming in from all over the internet. It also provides all of its users the anonymity of making a DNS query, as opposed to coming back to GRC. So there's that too. But this version checking is also performed by a simple DNS query packet, and DNS is distributed and caching, so it's possible to set a very long cache expiration to allow the cached knowledge of the most recent version of ValiDrive to be spread out across the internet, varying widely with when each cache expires. This means that when the release number is incremented, the notifications of this event will also be widely distributed in time as those local caches expire. This prevents everyone on the Internet from coming back all at once to get the latest version. And, you know, typically it's not a matter of any urgency.

And to Matt's question and point, I've never encountered a language that did not provide some relatively simple means for making a DNS query. I know that C# and .NET make this trivial. So anyway, that's the story on that. Oh, and I should mention that 239 is obviously a huge block of IPs which have been set aside. That's the high end of the multicast address space, but the 239 block specifically is non-routable, so those IPs will never and can never go anywhere. So that's why I chose 239 as the first byte of the IP in the DNS for my release management.
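For anyone who wants to see those broad strokes in code, here is a minimal sketch of the scheme just described, written in Python for illustration (the .NET equivalent is a simple Dns.GetHostAddresses call). The hostname is a best-effort reconstruction from the audio and the release constant is a placeholder, so treat both as assumptions.

```python
# Minimal sketch of the DNS-based release check described above. The hostname
# is a best-effort reconstruction from the audio and CURRENT_RELEASE is a
# placeholder; a real product would bake in its own values.
import socket

CURRENT_RELEASE = 1                       # release number compiled into this build
RELEASE_RECORD = "validriverel.grc.com"   # assumption: "<product>rel" under grc.com

def latest_release(hostname: str):
    try:
        ip = socket.gethostbyname(hostname)   # e.g. "239.0.0.1"
    except OSError:
        return None                           # offline or record missing: stay quiet
    return int(ip.split(".")[-1])             # release number lives in the low byte

latest = latest_release(RELEASE_RECORD)
if latest is not None and latest > CURRENT_RELEASE:
    print(f"A newer release ({latest}) is available; this is release {CURRENT_RELEASE}.")
```

Because the answer rides on ordinary DNS caching with a long TTL, the lookups are cheap, anonymous, and naturally spread out in time, exactly as described.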

A listener of ours named Carl sent email to me at securitynow@grc.com. He said: Hi, Steve. Much has been discussed over the recent weeks on your podcast about the upcoming Windows Recall feature and its value proposition versus security and privacy concerns. It has been suggested that the concept started as a productivity assistant that uses AI to index and catalog everything on your screen and may be more applicable in an office environment than at home. However, I think it was just as likely that this concept first started as a productivity monitoring tool where corporate management can leverage Recall to ensure employees are using their screen time doing real, actual work. Of course Microsoft realizes they can't possibly market Recall this way. So here we are. He said, I dread the day Recall is installed on my work computer. Signed, Carl. Bad news, Carl.

1:24:57 - Leo Laporte
Microsoft already has a product that does that, called Viva. They don't need another one. They monitor you all the time.

1:25:06 - Steve Gibson
Anyway, Carl's take on this aligned with the evil empire theory which, as we know, I don't subscribe to. I would say that Recall itself is ethically neutral. It's like the discovery of the chain reaction in the fission of atomic nuclei. That discovery can be used to generate needed power or to make a really big bomb, but the chain reaction itself is just the physics of our universe.

Similarly, Recall is just a new capability which could be used either to help or to hurt people. Could employers use it to scroll back through their employees' timeline to see what they've been doing on enterprise-owned machines? That's not yet clear. There are indications that Microsoft is working to make that impossible, but we know that as it was first delivered it would have been entirely possible. It appears that Microsoft desperately wants to bring Recall to their Windows desktops. It would be incredibly valuable as training material for a local AI assistant and to deeply profile the desktop user as a means for driving advertising selection in a future ad-supported Windows platform. So I suspect they will be doing anything and everything required to make it palatable and, as I said, they already have an enterprise product that does that.

Right, and that is deployed, you know, in business via group policy.

Yeah, right. Okay. So this week I want to share the story and the back story of the web browser community again bidding a less-than-fond farewell to yet another significant certificate authority, as a result of what appears to be a demonstration of executive arrogance. Entrust is one of the oldest original certificate authorities. After six years of being pushed, prodded and encouraged to live up to the responsibilities that accompany the right to essentially print money by charging to encrypt the hash of a blob of bits, the rest of the industry that proactively monitors and manages the behavior of those who have been, dare I say, entrusted to do this responsibly finally reached its limit, and Google announced last Thursday that Chrome would be curtailing its trust of Entrust in its browser's root store. Okay, so signing and managing certificates is by no means rocket science. There's nothing mysterious or particularly challenging about doing it. It's mostly a clerical activity which must follow a bunch of very clearly spelled out rules about how certificates are formatted and formulated and what information they must contain. These rules govern how the certificates must be managed and what actions those who sign them on behalf of their customers must take when problems arise. And, just as significantly, the rules are arrived at and agreed upon collectively. The entire process is a somewhat amazing model of self-governance: everyone gets a vote, the rules are adjusted in response to the changing conditions in our changing world, and everyone moves forward under the updated guidance. This means that when someone in this collective misbehaves, they're not pushing back against something that was imposed upon them. They are ignoring rules that they themselves voted on and agreed to follow.

Certificates have been an early and enduring focus and topic on this podcast because so much of today's security is dependent upon knowing that the entity one is connecting to and negotiating with over the Internet really is who we think they are, and not any form of spoofed forgery. The idea behind a certificate authority is that, while we may have no way of directly confirming the identity of an entity we don't know across the Internet, if that entity can provide proof that they have previously and somewhat recently proven their identity to a third party, a certificate authority, whose identity assertions we do trust, then, by extension, we can trust that the unknown party is who they say they are when they present a certificate to that effect, signed by an authority whom we trust. That's all this whole certificate thing is about. It's beautiful and elegant in its simplicity but, as the saying goes, the devil is in the details. And as we're going to see today, those who understand the importance of those details can be pretty humorless when they are not only ignored but flouted.

The critical key here is that we are completely and solely relying upon a certificate authority's identity assertions, where any failure in such an authority's rigorous verification of the identity of their client customers could have truly widespread and devastating consequences. This is one of the reasons I've always been so impressed with the extreme patience shown by the governing parties of this industry in the face of certificate authority misbehavior. Through the years, we've seen many examples where a trusted certificate authority really needs to screw up over a period of years and actively resist improving their game in order to finally have the industry lower the boom on them. No one wants to do this indiscriminately or casually, because it unilaterally puts the wayward CA, the certificate authority, out of the very profitable browser certificate business overnight. Okay, so what happened?

In a remarkable show of prescience, when things were only just heating up, Feisty Duck's Cryptography and Security newsletter posted the following, only a few hours before Google finally lowered the boom on Entrust: Entrust, one of the oldest certification authorities, is in trouble with Mozilla and other root stores. In the last several years, going back to 2020, there have been multiple persistent technical problems with Entrust certificates. That's not a big deal when it happens once or even a couple of times, and when it's handled well, but according to Mozilla and others, it has not been. Over time, frustration grew. Entrust made promises which it then broke. Finally, in May, Mozilla compiled a list of recent issues and asked Entrust to please formally respond.

Entrust's first response did not go down well, being non-responsive and lacking sufficient detail. Sensing trouble, it later provided another response with more information. We haven't seen a response back from Mozilla, just ones from various other unhappy members of the community. It's clear that Entrust's case has reached a critical mass of unhappiness. And that's really interesting, because this is really the point: all it takes is a critical mass of unhappiness, because, as I said, four hours after this was posted, Entrust lost Google, and that's losing the game, essentially, if you're selling certificates for browsers. So, they said, we haven't heard from other root stores yet. However, at the recent CA/Browser Forum meeting, also in May, Google used the opportunity to discuss standards for CA incident response. It's not clear if it's just a coincidence, but Google's presentation uses pretty strong words that sound like a serious warning to Entrust and all other CAs to improve, or else. Looking at the incidents themselves, they're mostly small technical problems of the kind that could have been avoided with standardized validation of certificates just prior to issuance.

And I'll note that later I'll use the term "lint." Lint is well understood in the developer community. It means just running a certificate through a lint filter to make sure that there isn't any lint, any debris: anything obviously wrong, like a date set to an impossible number, or, you know, something obviously missing that the standard says should be there. You know, just do it. But that doesn't always happen.
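To make that concrete, here is a toy sketch of what a couple of such lint checks might look like, using Python's third-party cryptography package; it is purely illustrative, covers only two rules, and is nowhere near a real linter like the ZLint project mentioned later in the show.

```python
# Toy illustration of "linting" a certificate before use: a couple of sanity
# checks of the kind a real linter automates across hundreds of rules.
# Requires the third-party "cryptography" package (version 42+ for the
# *_utc date properties); this is nowhere near a complete linter.
import datetime
import sys

from cryptography import x509
from cryptography.x509.oid import ExtensionOID

def lint(pem_bytes: bytes) -> list:
    cert = x509.load_pem_x509_certificate(pem_bytes)
    problems = []

    # A date set to an impossible or nonsensical value.
    if cert.not_valid_after_utc <= cert.not_valid_before_utc:
        problems.append("notAfter is not later than notBefore")
    lifetime = cert.not_valid_after_utc - cert.not_valid_before_utc
    if lifetime > datetime.timedelta(days=398):
        problems.append(f"{lifetime.days}-day validity exceeds the 398-day maximum")

    # Something the standard says should be there but is missing.
    try:
        cert.extensions.get_extension_for_oid(ExtensionOID.SUBJECT_ALTERNATIVE_NAME)
    except x509.ExtensionNotFound:
        problems.append("missing subjectAltName extension")

    return problems

if __name__ == "__main__":
    findings = lint(open(sys.argv[1], "rb").read())
    print("\n".join(findings) if findings else "no lint findings")
```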

They said: as it happens, ballot SC-95 focuses on pre-issuance certificate linting. If this ballot passes, linting will become mandatory as of March 2025, meaning it's not there yet. But boy, after March we're going to see some more booms lowered if people don't lint by default, and that means people are going to have to spend some time and spend some money upgrading their certificate-issuing infrastructures. They have not been bothering. Anyway, they said, it's a good first step. Perhaps the CA/B Forum, you know, the CA/Browser Forum, will in the future consider encoding the Baseline Requirements into a series of linting rules that can be applied programmatically to always ensure future compliance.

A few hours after Feisty Duck posted this, Google made their announcement. Last Thursday, June 27th, the Chrome Root Program and Chrome Security Team posted the following on Google's security blog under the title "Sustaining Digital Certificate Security: Entrust Certificate Distrust." And Leo, after taking our final break, I will share what Google wrote and the logic and, basically, the preamble that led up to this.

1:37:34 - Leo Laporte
Yeah, as you say, you lose Google, you pretty much lost the game. Game over. Game over, man. Our show today is brought to you by Lookout. Why have a game over when you're just getting started?

Every company today is a data company. That means every company's at risk. Cyber threats, breaches, leaks: just listen to the show, these are the new norm. And, of course, cyber criminals are becoming more sophisticated by the minute. At a time when boundaries for your data no longer exist, what it means for you to be secure, for your data to be secure, has just fundamentally changed. But that's why you need Lookout.

From the first phishing text to the final data grab, Lookout stops modern breaches as swiftly as they unfold, whether it's on a device, in the cloud, across networks, or working remotely at the local coffee shop. Lookout gives you clear visibility into all your data, at rest and in motion. You'll monitor, assess and protect without sacrificing productivity for security. With a single, unified cloud platform, Lookout simplifies and strengthens security, reimagining it for the world of today. Visit lookout.com today to learn how to safeguard data, secure hybrid work and reduce IT complexity. That's lookout.com. Data protection from endpoint to cloud, to your happy place. Oh, thank you, Lookout, for supporting Security Now. Now to Steve Gibson. He's our happy place. Back to the saga of Entrust.

1:39:14 - Steve Gibson
So Google wrote: The Chrome Security Team prioritizes the security and privacy of Chrome's users, and we are unwilling to compromise on these values. The Chrome Root Program states that CA certificates included in the Chrome root store must provide value to Chrome end users that exceeds the risk of their continued inclusion. You should hear a drumbeat in the background here. It also describes many of the factors we consider significant when CA owners disclose and respond to incidents. When things don't go right, we expect CA owners to commit to meaningful and demonstrable change resulting in evidenced continuous improvement.

Over the past few years, publicly disclosed incident reports highlighted a pattern of concerning behavior by Entrust that falls short of the above expectations and has eroded confidence in their competence, reliability and integrity as a publicly trusted CA owner. In response to the above concerns, and to preserve the integrity of the web PKI ecosystem, Chrome will take the following actions: In Chrome 127 and higher, TLS server authentication certificates validating to the following Entrust roots, whose earliest signed certificate timestamp is dated after October 31st, 2024, will no longer be trusted by default. Okay, then in Chrome's posting they enumerate the exact nine root certificates that Chrome has until now trusted to be valid signers of the TLS certificates that remote web servers present to their Chrome browser. They continue writing: TLS server authentication certificates validating to the above set of roots, whose earliest signed certificate timestamp is on or before October 31st, 2024, will not be affected by this change. This approach attempts to minimize disruption to existing subscribers, using a recently announced Chrome feature to remove default trust based on the SCTs, that is, the signed certificate timestamps, the signing dates, in certificates. Additionally, should a Chrome user or enterprise explicitly trust any of the above certificates on a platform and version of Chrome relying on the Chrome root store, the SCT-based constraints described above will be overridden and certificates will function as they do today. To further minimize risk of disruption, website owners are encouraged to review the frequently asked questions listed below.

Okay. So if Chrome were to just summarily yank all nine of those Entrust certs from their root store, at that instant any web servers that were using Entrust TLS certificates would generate those very scary untrusted-certificate warnings that we sometimes see when someone allows their certificate to expire by mistake, and that makes it quite difficult to use your browser, and most users just say, whoa, I don't know what this red flashing neon thing is, but it's very scary. And if you want to see that, you can go right now to untrusted-root.badssl.com, and there what you will get is a deliberately untrusted certificate, so you can see what your browser does: untrusted-root.badssl.com. Okay, instead of doing that, Chrome is now able to keep those, I guess I would call them semi-trusted or time-based trusted, root certificates in their root store in order to continue trusting any certificates Entrust previously signed and will sign during the next four months, July, August, September and October, Halloween being the end of that. No Entrust certificate signed from November on will be accepted by Chrome. So that's good. That allows Entrust four months to wind down their services, to decide, maybe, to make a deal with some other CA to, you know, purchase their existing customers and transfer them over.
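Conceptually, the policy Chrome describes reduces to a simple date comparison. The sketch below models only that published rule (does the chain end at an affected Entrust root, and is the earliest SCT on or before October 31st, 2024?); it is not Chrome's actual implementation, and the function and variable names are invented for illustration.

```python
# Conceptual model of the announced policy, not Chrome's real code: a
# certificate chaining to one of the affected Entrust roots keeps default
# trust only if its earliest Signed Certificate Timestamp is on or before
# the October 31st, 2024 cutoff. Names here are invented for illustration.
from datetime import datetime, timezone

CUTOFF = datetime(2024, 10, 31, 23, 59, 59, tzinfo=timezone.utc)

def keeps_default_trust(chains_to_affected_entrust_root: bool,
                        sct_timestamps: list) -> bool:
    if not chains_to_affected_entrust_root:
        return True                       # the policy only targets the listed roots
    if not sct_timestamps:
        return False                      # nothing shows it was logged before the cutoff
    return min(sct_timestamps) <= CUTOFF  # the earliest SCT decides

# A cert logged in September 2024 keeps working; one logged in November does not.
sept = datetime(2024, 9, 15, tzinfo=timezone.utc)
nov = datetime(2024, 11, 2, tzinfo=timezone.utc)
print(keeps_default_trust(True, [sept]))   # True
print(keeps_default_trust(True, [nov]))    # False
```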

I would imagine that's what they'll do, but there can be no question that this will be a devastating blow for Entrust. Not only will this shut down their TLS certificate business completely, but CAs obtain a great deal of additional revenue by providing their customers with many related services. Entrust will lose all of that too, and of course there's the significant reputational damage that accompanies this, which, you know, makes a bit of a mockery of their own name, and there's really nothing they can say or do at this point. The system of revoking CA trust operates with such care to give misbehaving CAs every opportunity to fix their troubles that any CA must be flagrant in their misbehavior for this to occur.

As long-time listeners of this podcast know, I'm not of the belief that firing someone who missteps always makes sense. Mistakes happen and valuable lessons can be learned. But from what I've seen and what I'm going to share, I'll be surprised if this is a survivable event for Entrust's Director of Certificate Services, a guy named Bruce Morton. Way back in 1994, Entrust built and sold the first commercially available public key infrastructure. They started all this. Five years later, in 1999, they entered the public SSL market by chaining to the Thawte root, and created entrust.net. And, as I said, their name has been around forever. You know, I've seen it whenever I've looked at lists of certificates: there's Entrust.

1:46:49 - Leo Laporte
Ten years later, Entrust was acquired for $124 million by Thoma Bravo, a US-based private equity firm. There you have it, in a nutshell. Man, add this one to the list. Wow.

1:47:01 - Steve Gibson
Now I don't know, and I'm not saying whether being owned by private equity may have contributed to their behavior and their downfall, but if so, they would have that in common with LastPass.

1:47:17 - Leo Laporte
Yeah, and Red Lobster and about a million other companies in the United States in the last 10 years that have been bought by private equity and then drained of their resources for money.

1:47:30 - Steve Gibson
It's sad. Yep. To the question "Why is Chrome taking action?" Google replied: Certificate authorities serve a privileged and trusted role on the internet that underpins encrypted connections between browsers and websites. With this tremendous responsibility comes an expectation of adhering to reasonable and consensus-driven security and compliance expectations, including those defined by the CA/Browser Forum TLS Baseline Requirements. Over the past six years, we have observed a pattern of compliance failures, unmet improvement commitments and the absence of tangible, measurable progress in response to publicly disclosed incident reports. When these factors are considered in aggregate and considered against the inherent risk each publicly trusted CA poses to the internet ecosystem, it is our opinion that Chrome's continued trust in Entrust is no longer justified. And okay, this makes a key point. It's not any one thing that Entrust did, taken in isolation, that resulted in this loss of trust. The loss of trust resulted from multiple years of demonstrated uncaring about following the rules that they had voted upon and agreed to as a member of this group. No one wants to make an example of Entrust. Too many lives will be negatively impacted by this decision, but the entire system only functions when everyone follows the rules they've agreed to. Entrust refused to do that, so they had to go.

Let's take a look at some specifics. For example, a few months ago, following an alert from Google's Ryan Dickson, Entrust discovered that all of its EV certificates issued since the implementation of a recent set of changes, amounting to approximately 26,668 certificates, were missing their CPS URIs, in violation of the EV guidelines. Entrust said this was due to discrepancies and misinterpretations between the CA/Browser Forum's TLS Baseline Requirements and the Extended Validation guidelines. Entrust chose not to stop issuing the EV certificates, which is a violation of the rules, and did not begin the process of revoking the misissued certificates, which is another violation. Instead, they argued that the absence of the CPS URI in their EV certificates was due to ambiguities in the CA/Browser Forum requirements, which was not the case. They said that the absence of the CPS URI had no security impact, which is arguably true, and that halting and revoking the certificates would negatively impact customers and the broader WebPKI ecosystem. In other words, they thought they were bigger than the rules, that the rules were dumb, or that the rules didn't apply to them. Everyone else has to follow them, but not them. Entrust then also proposed a ballot to adjust the EV guidelines, so that they would not be out of compliance, to not require the CPS URI. They also argued that their efforts were better spent focusing on improving automation and handling of certificates rather than on revocation and reissuance.

Wow, okay. Now, the CPS URI is truly incidental. CPS stands for Certification Practice Statement, and EV certs are now supposed to contain a CPS URI link pointing to the CA's issuing document. So is leaving that out a big deal? Probably not from a security standpoint, but it's worrisome.
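For the curious, this is roughly what checking for that field looks like in code: a small sketch using Python's third-party cryptography package that reports whether a certificate's Certificate Policies extension carries any CPS URI qualifier. It illustrates the specific omission at issue and nothing more.

```python
# Sketch: does this certificate's Certificate Policies extension carry a CPS
# URI qualifier? That is the field at issue in the Entrust EV incident.
# Requires the third-party "cryptography" package.
import sys

from cryptography import x509
from cryptography.x509.oid import ExtensionOID

def cps_uris(pem_bytes: bytes) -> list:
    cert = x509.load_pem_x509_certificate(pem_bytes)
    try:
        policies = cert.extensions.get_extension_for_oid(
            ExtensionOID.CERTIFICATE_POLICIES
        ).value
    except x509.ExtensionNotFound:
        return []
    uris = []
    for policy in policies:
        for qualifier in policy.policy_qualifiers or []:
            if isinstance(qualifier, str):    # plain strings are CPS URIs
                uris.append(qualifier)
    return uris

if __name__ == "__main__":
    found = cps_uris(open(sys.argv[1], "rb").read())
    print("CPS URIs:", ", ".join(found) if found else "none found")
```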

But when a CA intentionally defies the standards that everyone has agreed to follow, then argues about them, and is deliberately, knowingly in misissuance, that's another matter entirely. A security and software engineer by the name of Amir Omidi has worked on maintaining certificate issuance systems at Let's Encrypt. It appears that he's currently working on a project named Boulder, which is an ACME-based certificate authority written in Go, and before that it was ZLint, an X.509 certificate linter focused on WebPKI standards and requirements. Yesterday, just Monday, he posted a terrific summary of the way the public key infrastructure industry thinks about these things. He wrote: Entrust did not have one big, explosive incident. The current focus on Entrust started with this incident. On its surface, this incident was a simple misunderstanding. This incident happened because, up until the SC-62 Version 2 ballot, the CPS URI field in the Certificate Policies extension was allowed to appear on certificates. This ballot changed the rules and made this field "not recommended." However, this ballot only changed the Baseline Requirements and did not make any stipulation on how extended validation certificates must be formed. The EV guidelines still contained rules requiring the CPS URI extension.

Amir writes: When a CA has an incident like this, the response is simple. Stop misissuance immediately. Fix the certificate profile so you can resume issuance. In parallel, figure out how you ended up missing this rule and what the root cause of missing this rule was. Revoke the misissued certificates within 120 hours of learning about the incident. Provide action items that a reasonable person would read and agree would prevent an incident like this from happening again. In other words, this is all understood. Entrust ignored it. He writes that when he asked Entrust if they had stopped issuance, their answer turned this from an accidental incident into willful misissuance. This distinction is an important one. He says Entrust had started knowingly misissuing certificates.

Entrust received a lot of pushback from the community over this. This is a line that a CA shouldn't, under any circumstances, cross. Entrust continued to simply not give a crap (and I changed that word to be a little more politically correct) even after Ben Wilson of the Mozilla Root Program chimed in and said that what Entrust is doing is not acceptable. He writes: Entrust only started taking action after Ryan Dickson of the Google Chrome Root Program also chimed in to say that this is unacceptable. I'll interrupt to mention that this is an important distinction. The executives at Entrust appeared not to care about any of this until Google weighed in with the power of their Chrome browser. That was a monumental mistake, and it demonstrated a fundamental misunderstanding of the way the CA/Browser Forum members operate. None of this operates on the basis of market power. It's only about agreeing to and then following the rules. It's not about, oh yeah, make me. We're not in the schoolyard anymore. Amir continues.

Entrust's delayed response to the initial incident, spanning over a week, compounded the problem by creating a secondary failure-to-revoke-on-time incident. As these issues unfolded, a flurry of questions arose from the community. Entrust's responses were often evasive or minimal, further exacerbating the situation. This pattern of behavior proved increasingly frustrating, prompting me to delve deeper into Entrust's past performance and prior commitments. In one of my earlier posts, I found that Entrust had made these promises: one, we will not make the decision not to revoke (which they just had); two, we will plan to revoke within 24 hours or five days, as applicable for the incident (which they've said they won't); and three, we will provide notice to our customers of our obligations to revoke and recommend action within 24 hours or five days based on the Baseline Requirements (which they won't do, because they're not going to revoke in the first place).

He says this pattern of behavior led to a troubling cycle: Entrust making promises, breaking them, and then making new promises, only to break those as well. As this unfolded, Entrust and the community uncovered an alarming number of operational mistakes, culminating in a record 18 incidents within just four months. Notably, about half of these incidents involved Entrust offering various excuses for failing to meet the 120-hour certificate revocation deadline, ironically a requirement they had voted to implement themselves. He said, I do want to highlight that the number of incidents is not necessarily an indication of CA quality. The worst CA is the CA that has no incidents, as that's generally indicative that they're either not self-reporting or not even aware that they're misissuing. Okay. So, in other words, mistakes happen. Everyone understands that. No one needs to be perfect here. But it's how the mistakes that are discovered are then handled that demonstrates the trustworthiness of the CA.

Amir said, due to the sheer number of incidents and Entrust's poor responses up until this point, Mozilla then asked Entrust to provide a detailed report of these recent incidents. Mozilla specifically asked Entrust to provide information regarding, and we have some bullet points, the factors and root causes that led to the initial incidents, including commonalities among the incidents and any systemic failures. Okay, now listen to this, because this is really Mozilla getting up in Entrust's business, and Entrust apparently doesn't take kindly to that. We want to know Entrust's initial incident handling and decision-making in response to these incidents, including any internal policies or protocols used by Entrust to guide their response, and an evaluation of whether their decisions and overall response complied with Entrust's policies, their practice statement, and the requirements of the Mozilla Root Program. In other words, explain to us, and we're not kidding here, how this happened. Like, are you ignoring your own policies, or are these your policies? In other words, WTF, and we want it in detail, please. We also need, and I'm literally quoting, this is in the letter, a detailed timeline of the remediation process and an apportionment of delays to root causes. So please, you know, elaborate on the delays which were involved in this, because we're out here and we don't understand. Also, an evaluation of how these recent issues compare to the historical issues referenced above, and Entrust's compliance with its previously stated commitments, which everyone already knows is missing. Mozilla also asked, writes Amir, that Entrust's proposals meet the following requirements, literally, here is what we need to know and here are the requirements you must meet in your reply: clear and concrete steps that Entrust proposes to take to address the root causes of these incidents and delayed remediation, and measurable and objective criteria by which Mozilla and the community can judge whether Entrust is meeting these commitments. And, Amir said, Mozilla gave Entrust a one-month deadline to complete this report.

Mozilla's email served a dual purpose. It was both a warning to Entrust and an olive branch offering a path back to proper compliance. This presented Entrust with a significant opportunity. They could have used this moment to demonstrate to the world their understanding that CA rules are crucial for maintaining Internet security and safety and that adhering to these rules is a fundamental responsibility. Moreover, Entrust could have seized this chance to address the community, explaining any misunderstandings in the initial assessment of these incidents and outlining a concrete plan to avoid future revocation delays.

Unfortunately, Entrust totally dropped the ball on this. Their first report was a rehash of what was already on Bugzilla, offering nothing new. Unsurprisingly, this prompted a flood of questions from the community. Entrust's response? They decided to take another crack at it with a second report. They submitted this new report a full two weeks after the initial deadline. In their second report, Entrust significantly changed their tone, adopting a more apologetic stance regarding the incidents. However, this shift in rhetoric was not matched by their actions. While expressing regret, Entrust was still overlooking certain incidents, delaying the revocations of existing misissuances, and failing to provide concrete plans to prevent future delayed revocations. The whole chain of incidents and Entrust's responses serves as a prime example of mishandled public communications during a crisis. Okay, now stepping back from this for a moment: the executives at Entrust, and yes, Leo, a public equity-owned firm...

2:06:28 - Leo Laporte
Private equity. Sorry, a private equity-owned firm.

2:06:31 - Steve Gibson
Yeah. The executives at Entrust didn't really take any of this seriously. They acted as though they were annoyed by the gnats buzzing around them who were telling them how they should act and what they should do. Consensus among many community members is that Entrust will always prioritize their certificate subscribers over their obligations as a certificate authority. And there it is, in a single sentence: the consensus among many community members is that Entrust will always prioritize their certificate subscribers over their obligations as a certificate authority. He said, this practice fundamentally undermines Internet security for everyone. Left unchecked, it creates a dangerous financial incentive for other CAs to ignore rules when convenient, simply to avoid the uncomfortable task of explaining to subscribers why their certificates need replacement. Naturally, customers prefer CAs that won't disrupt their operations during a certificate's lifetime. However, for CAs that properly adhere to the rules, this is an impossible guarantee to make. In other words, no one should expect CAs to be perfect. The community here doesn't. They understand mistakes will happen, but maintaining the integrity of the system is more important than anything else. He says, furthermore, these incidents were not new to Entrust. As I've covered in earlier posts, Entrust has continuously demonstrated that they're unable to complete a mass revocation event in the 120 hours defined and required by the Baseline Requirements. This pattern of behavior suggests a systemic issue rather than isolated incidents. Despite there being over a dozen root programs, there are only four that are existentially important for a certificate authority: the Mozilla root program, used by Firefox and practically all Linux distributions and FOSS software; the Chrome root program, used by Chrome, the browser and the OS, and some Androids; the Apple root program, used by everything Apple; and the Microsoft root program, used by everything Microsoft. He finishes: enforcement of the operational rules of a CA has been a struggle. In the past, a root program has had only a binary choice, to either trust or distrust a certificate authority. Now, there is one last, much shorter piece of interaction that I want to share. It was written by Watson Ladd, L-A-D-D, who studied math at Berkeley and is presently a principal software engineer at Akamai. Among his other accomplishments, he's the author of RFC 9382, which specifies SPAKE2, a password-authenticated key exchange system, and his name has been on about six other RFCs. So you know, he's a techie and he's in the game. In the public discussion thread about Entrust's repeated and continuing failings to correct their mistakes and live up to the commitments they had made to the CA/Browser community,

Watson publicly addressed a note to Bruce Morton, Entrust's Director of Certificate Services, who has been the face of Entrust's repeated failures, excuses and defiance. Watson Ladd wrote: Dear Bruce, this report is completely unsatisfactory. It starts by presuming that the problem is the incidents themselves. Entrust is always under an obligation to explain the root causes of incidents and what it is doing to avoid them, as per the CCADB incident report guidelines. That's not the reason Ben and the community need this report. And here he's referring to Mozilla's Ben Wilson, who initially asked Entrust to explain how they would deal with those ongoing problems and demonstrate how they would be prevented in the future. And, as we know, Entrust's Bruce Morton basically blew him off, apparently because he wasn't from Google. Anyway, Watson says, that's not the reason Ben and the community need this report. Rather, it's to go beyond the incident report, to draw broader lessons, and to say more to help us judge Entrust's continued ability to stay in the root store.

The report falls short of what was asked for in a way that makes me suspect that Entrust is organizationally incapable of reading a document, understanding it, and ensuring each of the clearly worded requests is followed. The implications for being a CA are obvious. Wow, yeah. He's mad. Holy cow, they all are. He said, to start, Ben specifically asked for an analysis involving the historical run of issues and a comparison. I don't see that in this report at all. The list of incidents only has ones from 2024 listed. There's no discussion of the two issues specifically listed by Ben in his message.

Secondly, the remedial actions seem to be largely copied and pasted from incident to incident without a lot of explanation. Saying the organizational structure will be changed to enhance support, governance and resourcing really doesn't leave us with a lot of ability to judge success, or explain how the changes, sparse on details as they are, will lead to improvements. Similarly, process weaknesses are not really discussed in ways that make clear what happened. How could I, if I were a different CA, use this report to examine my organization and see if I could do better? How can we as a community judge the adequacy of the remedial actions in this report?

Section 2.4 I find mystifying. To my mind, there's no inherent connection between a failure to update public information in a place where it appears, a delay in reconfiguring a responder, and a bug in the CRL generation process. Beyond the organizational, these are three separate functions of rather different complexity. If there's a similarity, it's between the latter two issues, where there was a failure to notice a change in requirements that required action. But that's not what the report says. Why were these three grouped together and not others? What's the common failure here that doesn't exist with the other incidents? If this is the best Entrust can do, why should we expect Entrust to be worthy of inclusion in the future? To be clear, there are CAs that have come back from profound failures of governance and judgment, but the first step in that process has been a full and honest accounting of what their failures have been, in a way that has helped others understand where the risks are and helps the community understand why they are trustworthy. Sincerely, Watson Ladd. Watson was hopped up.

2:15:13 - Leo Laporte
That doesn't sound like it, but that's what an engineer sounds like when they get really mad.

2:15:17 - Steve Gibson
That's what an engineer sounds like when they get really mad. Well, now, Leo, I don't know these Entrust guys at all, but given the condescension they've exhibited, it's not difficult to picture them as some C-suite stuffed shirts who have no intention of being judged by and pushed around by a bunch of pencil-necked geeks. But boy, did they read this one wrong. Those pencil-necked geeks with their pocket protectors reached consensus and pulled their plug, ejecting them from the web certificate CA business they had a hand in pioneering.

2:16:08 - Leo Laporte
This is what happens when people who only run businesses don't understand the difference between a business, a profit-seeking enterprise, and a public trust, right? And they don't understand that in order to run your business, you have to satisfy the public trust part.

2:16:27 - Steve Gibson
You can't just say yeah, yeah, whatever. You've got to respond. And notice that Entrust was taken private. Yeah, so they no longer had, literally, a public trust.

2:16:37 - Leo Laporte
Well, they're a business, except that a certificate authority has a public trust. Sorry, that's the job.

2:16:42 - Steve Gibson
Yes, it is a public trust. It's a public trust.

2:16:45 - Leo Laporte
Yes, wow, clearly they were in over their heads or something.

2:16:52 - Steve Gibson
Well, but they started this business. I mean... Is it the same people?

2:16:56 - Leo Laporte
Though, really? Well, that's exactly it, that's a great question. It's like Joe Siegrist wasn't at LastPass at the end.

2:17:03 - Steve Gibson
Yes, exactly. Exactly that. They didn't realize that the aggravation that was simmering away in this CA/Browser Forum was actually capable of boiling over and ending their business.

2:17:30 - Leo Laporte
They didn't get it. They really didn't get it. And you know what? Well, they know now. Whoops. Yeah, it's over.

2:17:40 - Steve Gibson
They appeared to believe that the rules they agreed to did not apply to them. Or, you know, maybe it was the extreme leniency that the industry had been showing them that led them to believe that their failures would never catch up with them. But boy, the worst thing that they did was just to basically blow off, you know, the heads of these root programs when they said, hey, look, we see some problems here; we need you to convince us that you're worthy of our trust. Yeah, and the Entrust people probably just said F-U.

Well, now they're going to be out of business in four months. Wow.

2:18:25 - Leo Laporte
So without the trust of Chrome and, I presume, other browsers.

2:18:30 - Steve Gibson
Mozilla is clearly going to follow. Oh, Mozilla will be right on their heels.

2:18:32 - Leo Laporte
And then Edge and everybody else who does certificates will follow, right? But it doesn't matter. If you're not in Chrome, that's right, people using Chrome won't be able to go to sites that have Entrust certificates. Game over, right?

2:18:47 - Steve Gibson
Yes, yeah, yes. Entrust is out of business. The only thing they could do would be to sell their business. Remember when Symantec screwed up royally? It happens. They ended up having to sell their business to DigiCert, right? Because, you know... so.

2:19:06 - Leo Laporte
So they also might face the wrath of people who use Chrome, or of websites that use their certificates, when their customers can't get to them.

2:19:18 - Steve Gibson
I mean, it might be some big companies. But all the certs that have already been issued stay valid. Okay, that's the good thing. So any site is fine, even through Halloween, even through the end of October, because what Chrome is doing is looking at the signing date, and they're saying anything Entrust signs after October 31st, 2024, we're not going to trust; we're going to take the user to the invalid certificate page.

2:19:51 - Leo Laporte
So there is a little bit of a warning going out to people who use Entrust certificates: you're going to need a new certificate from a different company now.

2:19:59 - Steve Gibson
Yes, you know, by October. Yes, you might as well switch. You could buy an Entrust certificate up until Halloween and it would stay good for the life of the certificate, which has now been reduced to about a year, what is it, 398 days or something. That's right. Yeah, so it's time to switch providers. So Entrust loses all of that ongoing revenue from certificate renewals. They also lose all of the second-order business that their customers got, you know. They're like, oh well, we'll also sell you some of this and some of that, and you probably need some of this over here. I mean, they're in a world of hurt. Unfortunately, it couldn't happen to a nicer group of guys, because they entirely brought this on themselves. It was literally their arrogance: we are not going to do what you are asking us to do. And so the consensus was, okay.

2:21:06 - Leo Laporte
You don't have to but we don't have to trust you.

2:21:09 - Steve Gibson
We're not making you, we're just going to stop believing you, right.

2:21:13 - Leo Laporte
That's really interesting. Usually what happens with private equity is they buy a company and sell off assets or somehow make money. The example of Red Lobster comes to mind. The private equity company that bought the Red Lobster restaurant chain took the real estate that was owned by all the restaurants and sold it to a separate division, making almost enough money to compensate for the debt they'd incurred to buy it. Because that's what happens: they borrow money, then they squeeze the company, get all the debt paid off. But the problem was, now the restaurants had to pay rent, and they went out of business. And that's what happens: you squeeze, get your money back, get your money out, and then you don't care what happens. So I don't know what they were able to squeeze out of Entrust, but it may be they got what they wanted out of it and they don't care. At this point, that's what it feels like. They just don't care. They could come back from this, right?

2:22:24 - Steve Gibson
They could say, oh, yeah, wait a minute, oh, sorry, we were wrong. Could they? Or is it...? I don't think so. I mean, maybe they change their name to Retrust. Wow. No, I mean, it's over. At this point they can't just say, oh, we're really sorry, we didn't understand. I've never seen any of this ever reversed. These are, you know, slow-moving icebergs, and when the iceberg hits your ship, you know, it doesn't turn around. It rends you from stem to stern.

2:22:52 - Leo Laporte
It does they? Uh, uh, out of sync in our discord, says that and trust is about 0.1 percent of all certs, compared to let's encrypt, which is now 53 percent of all certs. Uh, why not? It's free, right, uh?

2:23:08 - Steve Gibson
And if you've got to renew it every 398 days, you might as well just go with Let's Encrypt. Yeah, last year we talked about the big seven, where if you only trusted seven CAs, you got 99.95 percent or something, I don't remember.

It wasn't on that list. Okay, so maybe this is just incompetence or something. Well, yeah, I mean, you're incompetent if you're a manager who doesn't know how to read email and understand the importance of it to your career. I mean, I doubt this Bruce guy is going to have a job in four months. It won't be... let's put it this way.

2:23:47 - Leo Laporte
It will not be in the certificate business; it cannot be in the certificate business. Steve Gibson, I love it. This was a fun one. It's a little scary early on with OpenSSH, but you kind of redeemed it with some humor towards the end. What a story that is. Amazing. Steve's at GRC.com. Now, if you go to GRC.com/email, you can sign up for his mailing list. You don't have to, though. All it will do is validate your email so that you can then email him. So it's the best way to be in touch with Steve. If you're there, you should check out SpinRite. Version 6.1 is out. It is now easily the premier mass storage performance enhancer. It's like Viagra for your hard drive. No, not performance enhancer: maintenance and recovery utility.

2:24:35 - Steve Gibson
Well, you want your hard drive to be hard.

2:24:37 - Leo Laporte
You do. You don't want a floppy drive. We're not talking floppies here. No, not a floppy; this is a hard drive or an SSD. Let's be fair, it works on SSDs as well. We're in trouble now. We made it so far, so close to the end.

While you're picking up GRC's SpinRite, you might want to get a copy of this podcast. Steve hosts two different versions: the 64-kilobit audio version, which we have as well, that's kind of the standard, but he also has a 16-kilobit version for the bandwidth-impaired. That's the version that goes to Elaine Farris, who transcribes all this and delivers really excellent human-written transcriptions of every episode. Those are all there as well, and the show notes, so that you can follow along as you listen to the show. We have 16-kilobit audio at our website. We also have, I'm sorry, 64-kilobit audio; we don't have 16. I wouldn't put that on my site, are you crazy? We have 64-kilobit audio and video, which Steve would never put on his site. So, you know, what comes around goes around.

If you want to watch, you can do that as well at twit.tv/sn. You can also get it on YouTube as a video, and you can also subscribe in your favorite podcast player and get it automagically every week, right after our Tuesday record. We do the show right after MacBreak Weekly, around 2 p.m. Pacific, 5 p.m. Eastern, 2100 UTC. The stream is on YouTube Live; that's youtube.com/twit/live. And, you know, if you subscribe and you smash the bell, you'll get an automatic notification when we go live. So that might be worth doing.

What else do I need to tell you? Oh, what I need to do, really, is thank our club members who make this show and all the shows we do possible. Without your support, there would be no TWiT. If you'd like to join the club, it's a great group of people. Show your support for the programming we do here and meet some really smart, interesting people in our Discord. You get ad-free versions of all the shows, plus special shows we don't do anywhere else, and it's just $7 a month. It's nothing, although you can pay more; you absolutely can, and we won't discourage that. But it starts at $7 a month at twit.tv/clubtwit.

2:26:52 - Steve Gibson
Thank you, Steve. Have a wonderful week, and I'll see you next time. Will do. And at long last I will be able to show the picture of the week to end all pictures of the week, for Patch Tuesday. He's already got it. OMG. It will be arriving in people's mail before the 9th, and this picture was made for Windows. Wow.

2:27:22 - Leo Laporte
Well, here's your chance. All you've got to do is go to GRC.com/email and sign up, and you'll get it before anybody else does.

2:27:35 - Steve Gibson
Thank you, Steve. Have a good one. Bye. Security Now.

 
