Transcripts

Security Now 985 transcript

Please be advised this transcript is AI-generated and may not be word for word. Time codes refer to the approximate times in the ad-supported version of the show
 

0:00:00 - Leo Laporte
It's time for Security Now. Steve Gibson is here. The post-mortem on the CrowdStrike flaw. Actually, CrowdStrike explained how it happened. I think you'll enjoy Steve's reaction to that. Firefox is apparently not doing what it says it's doing when it comes to tracking cookies. We'll talk about that. And then a flaw that makes nearly 850 different PC makes and models insecure. It's all coming up next on Security Now. Podcasts you love, from people you trust. This is TWiT. This is Security Now with Steve Gibson, episode 985, recorded Tuesday, July 30th, 2024: Platform Key Disclosure. It's time for Security Now, the show where we cover the latest news on the security front. And boy, there's always a lot of news on the security front. With this guy right here. He is our security weatherman, Mr. Steve Gibson. Hi, Steve.

0:01:11 - Steve Gibson
And the outlook is cloudy.

0:01:14 - Leo Laporte
Chance of disaster.

0:01:16 - Steve Gibson
Yes, yes. Remember duck and cover? Well, anyway, we did not have a new disaster since 10 days ago, when we had one that pretty much made the record books. But we do have a really interesting discovery, and what's even more worrisome is that it's a rediscovery. Today's podcast is titled Platform Key Disclosure, for Security Now number 985, this glorious last podcast of July, and the second to the last podcast, the penultimate podcast, where you are in the studio, Leo, rather than in your new attic bunker.

0:02:08 - Leo Laporte
Can an attic be a bunker, though, really? Oh, that's a good point. I think I'm actually more exposed.

0:02:16 - Steve Gibson
You'll be the first to go. But really sometimes you think maybe that's the best right.

0:02:22 - Leo Laporte
Oh, I always think that. Yeah. I know the worst thing is a prolonged, slow, agonizing, suffering death, what we know as life, or life as it's known. Yes, that's the worst thing.

0:02:37 - Steve Gibson
Especially when you're a CIO and you're, well, anyway. So we've got a bunch of stuff to talk about. Yes, we've got, of course, the obligatory follow-up on the massive CrowdStrike event. How do CrowdStrike users feel? Are they switching or staying? How does CrowdStrike now explain what happened, and does that make any sense? How much blame should they receive? We've also got an update on how Entrust will be attempting to retain its customers and keep them from wandering off to other certificate authorities. With Firefox, no one understands exactly what's going on, but it appears not to be blocking third-party tracking cookies when it claims to be. Also, we're going to look at how hiring remote workers can come back to bite you in the you know what.

0:03:29 - Leo Laporte
Oh, don't tell me that. That's all remote workers now. Yeah, yeah, yeah, yeah.

0:03:34 - Steve Gibson
A security company got a rude awakening, and we learned something about just how determined North Korea is to get into our britches. Also, did Google really want to kill off third-party cookies? Or are they maybe actually happy about what happened? And is there any hope of ending abusive tracking? Auto-updating anything is obviously fraught with danger. We just survived some. Why do we do it, and is there no better solution? And what serious mistake did a security firm discover that compromises the security of nearly 850 PC makes and models? And I'll be further driving home the point of why, despite Microsoft's best intentions, assuming that we agree they have only the best of intentions, and I know that opinions differ widely there, they can't keep Recall data safe. Because it's not necessarily their fault. It's just not a safe ecosystem that they're trying to do this in. So anyway, we have a fun picture of the week and a great podcast ahead.

0:04:51 - Leo Laporte
Actually, that's a good topic we could talk about. Could any computing system make Recall a safe thing? Probably not, right? It's just the nature of computing systems.

0:05:05 - Steve Gibson
Yep, we have not come up with it yet.

0:05:07 - Leo Laporte
Yeah, there's no such thing as a perfectly secure operating system. You could try. That's where our sponsors come in. They are there to help you.

Like Lookout. Today, every company is a data company, right? I mean, it's like you're all running Recall all the time. Every company is at risk as a result. Cyber threats, breaches, leaks, this is the new norm. If you listen to the show, I don't have to tell you that. Cybercriminals are becoming more sophisticated every minute now, using AI and all sorts of things. I mean, it's terrifying. And it's worse because we live in a time when boundaries for your data no longer exist. What it means for your data to be secure has fundamentally changed, because it's everywhere.

Enter Lookout. There is a solution. From the first phishing text to the final data grab, Lookout stops modern breaches as swiftly as they unfold, whether on a device, in the cloud, across networks, or hanging out working remotely at a local coffee shop. Lookout gives you clear visibility into all your data, in motion and at rest, wherever it is, wherever it's going. You'll monitor, assess and protect without sacrificing productivity and employee happiness for security. With a single, unified cloud platform, Lookout simplifies and strengthens, reimagining security for the world we'll be in today. Visit Lookout.com today to learn how to safeguard data, secure hybrid work and reduce IT complexity. That's Lookout.com. We thank them so much for supporting the good work Steve's doing here at Security Now. Lookout.com. Do we have a? I didn't even look. Do we have a picture of the week this week? We do indeed.

0:07:00 - Steve Gibson
This one was just sort of irresistible. I know what they meant, but it's just sort of fun what actually transpired.

0:07:11 - Leo Laporte
I just saw it.

0:07:12 - Steve Gibson
Yeah. Now, I think, and you'll probably recognize this too, the signage. I think it's a Barnes & Noble.

0:07:19 - Leo Laporte
Yeah.

0:07:32 - Steve Gibson
It's sadly been quite a while since I've walked into an actual bookstore and filled my lungs with that beautiful scent of paper pulp. Used to love it.

0:07:37 - Leo Laporte
I grew up in the San Mateo Public Library. They just would sort of nod to me as I would walk in. Hello Steve.

0:07:40 - Steve Gibson
Enjoy your time in the stacks. Anyway, so what we have here is a sign over a rack of books. The sign reads "Large Print Audio Books," and of course I gave that the caption, "That's right, because large print is what one looks for in an audio book." Now, many of our clever listeners, who received this already via email a couple hours ago, actually responded with variations of my first caption, which was, "Do you think they actually meant loud audio books?"

0:08:23 - Leo Laporte
Okay, that could be, wow, okay, that could be.

0:08:26 - Steve Gibson
And somebody took it more seriously and said well, you know, steve, somebody who's visually impaired might need large print on the instructions for how to play the audio book.

0:08:38 - Leo Laporte
That makes sense.

0:08:40 - Steve Gibson
That's a point.

Actually, what we know is the case is that large print books are on the upper few shelves and the audio books are down below, and they put them both on the same sign, creating this little bit of fun for us and our audience. So thank you to whomever sent that to me. It's much appreciated, and I do have many more goodies to come. Okay, so I want to share the note today that I mentioned last week. This was somebody who was forced to DM me, I don't remember why, but he wrote the whole thing. Then, I think, he created a Twitter account in order to send it to me.

Now I'll take this moment to mention I was just forced to turn off incoming DMs from unknown senders, Twitter people who I don't follow, and of course I famously follow no one. And the reason is, when I went there today, I had a hard time finding any actual new direct messages to me from among the crap. It was so many people wanting to date me, and I don't think any of them actually do, you know, and I am wearing a ring.

I'm proud of that. That, and foreign language DMs that I can't read. And I finally thought, what am I, you know, why? What? No. So I just turned it off. So I'm sorry, but I will still post the show notes from @SGgrc on Twitter. I've had a lot of people who thanked me for that, even though I now have email up and running and this system is working beautifully. Seventy-five hundred of our listeners received the show notes, a summary of the podcast, a thumbnail of the picture of the week and the ability to get the full sizes and everything, several hours ago. But it just no longer works as an open channel. I can't do open DMs, and I don't know why, because it's been great for so long, Leo. I mean, I haven't had any problem, but the only thing I can figure is that the spam level on Twitter is just going way up.

0:11:15 - Leo Laporte
You've just been lucky. Really, I think I've just been lucky.

0:11:18 - Steve Gibson
Maybe I've just been sort of off the radar because I don't do much on Twitter except send out the little weekly tweet. So anyway, this is what someone wrote, a really good piece that I wanted to share. He said: Hi Steve, I'm writing to you from New South Wales, Australia. I don't really use Twitter, but here I am sending you a DM. Without a doubt you'll be mentioning the CrowdStrike outage in your Security Now this week. I thought to give you an industry perspective here. And just to explain, I didn't get this from him until, as I do every Tuesday, I went over to Twitter to collect DMs and found this. So it didn't make it into last week's podcast, though I wish it had, so I'm sharing it today.

He said: I work in cybersecurity at a large enterprise organization with the CrowdStrike Falcon agent deployed across the environment, think approximately 20,000 endpoints. He said: Around 2 to 3 pm Sydney time, the BSOD wave hit Australia. The impact cannot be overstated. All Windows systems in our network started BSODing at once. This is beyond a horrible outage. CrowdStrike will need to be held accountable and show they can improve their software stability, rather than spruiking AI mumbo-jumbo, if they want to remain a preferred provider. Okay. He says: In defense of CrowdStrike, their Falcon EDR tool is nothing short of amazing. The monitoring features of Falcon are at the top. It monitors file creation, network processes, registry edits, DNS requests, process execution scripts, HTTPS requests, logons, logoffs, failed logons and so much more. The agent also allows for containing infected hosts, blocking indicators of compromise, opening a command shell on the host, collecting forensic data, submitting files to a malware sandbox, automation of workflows, an API, PowerShell, Python and Go libraries. It integrates with threat intel feeds, and more and more. Most importantly, he says, CrowdStrike has excellent customer support. Their team is helpful, knowledgeable and responsive to questions or feature requests. Despite this disastrous outage, we are a more secure company for using CrowdStrike. They have saved our butts numerous times. I'm sure other enterprises feel the same.

Full disclosure: I do not work for CrowdStrike, but I do have one of their T-shirts, he says. Why am I telling you this? Because the second in-line competitor to CrowdStrike Falcon is Microsoft Defender for Endpoint, MDE. He says: MDE is not even close to offering CrowdStrike Falcon's level of protection. Even worse, Microsoft's customer support is atrocious to the point of being absurd. I've interacted with Microsoft security support numerous times. They were unable to answer even the most basic questions about how their own product operated, and often placed me in an endless loop of escalating my problem to another support staff, forcing me to re-explain my problem to the point where I gave up asking for their help. I even caught one of their support users using ChatGPT to respond to my emails, and this is with an enterprise-level support plan, he says. As the dust starts to settle after this event, I imagine Microsoft will begin a campaign of aggressively selling Defender for Endpoint. Falcon is often more expensive than MDE, since Microsoft provides significant discounts depending on the other Microsoft services a customer consumes. Sadly, I imagine many executive leadership teams will be dumping CrowdStrike after this outage and signing on with MDE. Relying on Microsoft for endpoint security will further inflate the single point of failure balloon that is Microsoft, leaving us all less secure in the long run. And then he signs off.

Finally, I'm a big fan of the show. I started listening around 2015. As a result of listening to your show, I switched to a career in cybersecurity. Thank you, Leo and Steve. So I wanted to share that because that's some of the best feedback I've had from somebody who really has some perspective here, and it all rings 100% true to me. This is the correct way for a company like CrowdStrike to survive and to thrive, that is, by really offering value. They're offering dramatically more value and functionality than Microsoft, so the income they're earning is actually deserved.

One key bit of information we're missing is whether all of these Windows systems, servers and networks that are constantly being overrun with viruses and ransomware, and costing their enterprises tens of millions of dollars to restore and recover, which we're normally talking about here every week, are protected with CrowdStrike. Or is CrowdStrike saving those systems that would otherwise fall? You know, if CrowdStrike is actually successfully blocking enterprise-wide catastrophe, hour by hour and day by day, then that significantly factors into the risk-reward calculation. Our CrowdStrike user wrote, they have saved our butts numerous times, and he said, I'm sure other enterprises feel the same. Well, that is a crucially important fact that is easily missed. It may well be that right now, corporate CIOs are meeting with their other C-suite executives and boards and reminding them that while, yes, what happened 10 days ago was bad, even so it's worth it, because the same system had previously prevented, say, I don't know, for example, 14 separate known instances of internal and external network breach, any of which may have resulted in all servers and workstations being irreversibly encrypted, public humiliation for the company and demands for illegal ransom payments to Russia. So if that hasn't happened because, as our listener wrote, CrowdStrike saved our butts numerous times, then the very rational decision might well be to stick with this proven solution, in the knowledge that CrowdStrike will have definitely learned a very important and valuable lesson and will be arranging to never allow anything like this to happen again. This superior solution is the obvious win. But if it ever should happen again, then in retrospect, remaining with them will have been difficult to justify, and you could imagine not being surprised if people were fired over their decision not to leave CrowdStrike after this first major incident.
But even so, if CrowdStrike's customers are able to point to the very real benefits they have been receiving on an ongoing basis for years from their use of this, okay, sometimes dangerous system, then it might be worth remaining with it.

Before we talk about CrowdStrike's response, I want to share another interesting and important piece of feedback from a listener, and also something that I found on Reddit. So our listener, Dan Mutal, wrote: I work at a company that uses CrowdStrike and I thought you would appreciate some insight. Thankfully, we were only minimally affected, as many of our Windows users were at a team-building event with their laptops powered down and not in use, and our servers primarily run Linux, so only a handful of workstations were affected.

However, the recovery was hampered by the common knowledge, which turned out to be false, that the BitLocker recovery key would be needed to boot into safe mode. When you try to boot into safe mode in Windows, you are asked for the BitLocker recovery key. Most knowledge base articles, even from Microsoft, state that you need to enter the BitLocker key at this point, he says, but this is not required. It's just not obvious and not well known how to bypass this. He says: Here's what we discovered over the weekend. Cycle through the blue screen error, as the host continues to crash, until you get to the recovery screen.

Perform the following steps: First, navigate to Troubleshoot, Advanced Options, Startup Settings, and press Restart. Skip the first BitLocker recovery key prompt by pressing Escape. Skip the second BitLocker recovery key prompt by selecting Skip This Drive at the bottom right. Navigate to Troubleshoot, Advanced Options, Command Prompt. Then enter "bcdedit /set {default} safeboot minimal" and press Enter. Close the command prompt window by clicking the X in the top right. This will return you back to the blue screen, which is the Windows RE main menu. Select Continue. Your PC will now reboot. It may cycle two to three times, but then your PC should boot into safe mode.
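Putting the dictated command into written form, this is the standard Windows BCD syntax, as the listener described it; verify against your own recovery documentation before relying on it:

```shell
REM From the Windows RE command prompt, as described in the steps above,
REM force the next boot of the default OS entry into minimal safe mode:
bcdedit /set {default} safeboot minimal

REM Not part of the listener's steps, but worth knowing: once safeboot is
REM set, the machine keeps booting into safe mode until the value is
REM deleted, so after remediating, undo it with:
bcdedit /deletevalue {default} safeboot
```

The second command is an addition of mine, not something from the transcript, but without it the machine would remain pinned to safe mode on every boot.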

He said: I confirmed this worked and allowed us to recover a few systems where we did not have the BitLocker keys available. He says: I think Microsoft deserves a lot of blame for the poor recovery process. When safe mode is needed, they should not be asking for BitLocker keys if they're not needed. At the bare minimum, they need to make this knowledge much more well known, so system admins who don't have BitLocker keys handy can still boot into safe mode when disaster strikes. I also want to send a shout-out to CrowdStrike's technical support team. I'm sure they were swamped by support requests on Friday, but despite that, we were talking to them after waiting on hold for only 10 minutes, and they were very helpful. Most vendors aren't quick or useful on a good day, let alone on a day when they are the cause of a massive outage. CrowdStrike is an expensive product, but it is clear that a large chunk of that expense pays for a large and well-trained support staff. So that was from a listener of ours.

0:23:40 - Leo Laporte
Those are really excellent points, I mean, aren't they? Yes? Yeah, I mean, so much worse to get bit by ransomware than a temporary boot, you know, flaw.

0:23:51 - Steve Gibson
Yes, exactly. So for our listeners to say, yes, this was not good, you know, that first listener had 20,000 endpoints, but he said, still, it has saved our butts. Well, "saved our butts" must mean that he has evidence that something bad would have happened to them had CrowdStrike not blocked it. So what's that worth? You know, it's worth a lot.

Okay, so on Reddit I found a posting. Oh, and by the way, Leo, I just should mention, I don't normally have our monitor screen up, so I don't know if you're there or not. So just a note to your control room.

0:24:37 - Leo Laporte
I'm here, by the way. Nice to hear your voice.

0:24:41 - Steve Gibson
So this post on Reddit said: Just exited a meeting with CrowdStrike. You can remediate all of your endpoints from the cloud. That's news. He said: If you're thinking, that's impossible, how? He says: This was also the first question I asked, and they gave a reasonable answer. To be effective, CrowdStrike services are loaded very early in the boot process, which of course is what we talked about last week, and they communicate directly with CrowdStrike. This communication is used to tell CrowdStrike to quarantine, in Windows\System32\drivers\CrowdStrike, the infamous, you know, 291*.sys file. So, he said, to do this you must first opt in, that is, for this cloud-based recovery, by submitting a request via the support portal providing your customer IDs and requesting to be included in cloud remediation. At the time of the meeting, he says, when he was in this meeting, the average wait time for inclusion was less than one hour. Once you receive email indicating that you have been included, you simply have your users reboot their computers. In other words, it's a self-repair of this problem. Yeah. He said: CrowdStrike noted that sometimes the boot process completes too quickly for the client to get the update, and a second or third try is needed, but it is working for nearly all of their affected users. At the time of the meeting, they had remediated more than 500,000 endpoints this way. He says it was suggested to use a wired connection when possible, since Wi-Fi-connected users have the most frequent trouble with this approach, probably because Wi-Fi connectivity becomes available later in the boot process, after the crash will have occurred. He says this also works with home and remote users, since all they need is an internet connection, any internet connection.
The point is, they do not need to be, and should not be, VPNed into the corporate network. So anyway, I thought that was really interesting, since essentially we have another one of those race conditions. We've talked about those recently, right? In this case it's one where we're hoping that the CrowdStrike network-based update succeeds before the crash can occur. Okay, so with all of that, what more do we know today than we did a week ago?

At the time of last week's podcast, the questions that were on everyone's mind were variations of: How could this possibly have been allowed to happen in the first place? How could CrowdStrike not have had measures in place to prevent this? And even, you know, staggering the release of the buggy file would have limited the scale of the damage. Why wasn't that at least part of their standard operating procedure? For last week's podcast, we had no answers to any of those questions. Among the several possibilities I suggested were that they did have some pre-release testing system in place, and if so, then it must have somehow failed. As for the explanation the industry has received from them since that time, I have no doubt about its truth, since CrowdStrike executives will be repeating that under oath shortly. Last week we shared what little was known, which CrowdStrike had published by that point under the title "What Happened," but they weren't yet saying how, and we also had that statement that I shared from George Kurtz, CrowdStrike's founder and CEO. This week we do have their "What Happened," which is followed by what went wrong and why.

And okay, despite the fact that it contains a bunch of eye-crossing jargon which sounds like gobbledygook, I think it's important for everyone to hear what CrowdStrike said. So here's what they have said to explain this, and in order to create the context that's necessary, we learn a lot more about the innards of what's going on. They said: CrowdStrike delivers security content configuration updates to our sensors. And when they say sensors, they're talking about basically a kernel driver. So some of it is built into the kernel driver or delivered with driver updates, and the other is real time. So whenever you hear me say sensors, you know it's not anything physical, although it sounds like it is. It is software running in the kernel. They said: In two ways, sensor content that is shipped with our sensor directly, and rapid response content that is designed to respond to the changing threat landscape at operational speed. They wrote: The issue on Friday involved a rapid response content update with an undetected error.

Sensor content provides a wide range of capabilities to assess adversary response. It is always part of a sensor release and not dynamically updated from the cloud. Sensor content includes on-sensor AI and machine learning models, and comprises code written expressly to deliver long-term reusable capabilities for CrowdStrike's threat detection engineers. These capabilities include template types, and these figure strongly in this story, which have predefined fields for threat detection engineers to leverage in rapid response content. Template types are expressed in code. All sensor content, including template types, goes through an extensive QA process which includes automated testing, manual testing, validation and rollout.

The sensor release process begins with automated testing, both prior to and after merging into our code base. This includes unit testing, integration testing, performance testing and stress testing. This culminates in a staged sensor rollout process that starts with dogfooding internally at CrowdStrike, followed by early adopters. It's then made generally available to customers. Customers then have the option of selecting which parts of their fleet should install the latest sensor release (N), or one version older (N-1), or two versions older (N-2), through sensor update policies. Now, okay, to be clear, all of that refers to, essentially, the driver, the so-called sensor. So for that stage they are doing, you know, incremental rollout, early adopter testing and so forth. Unfortunately, not for the rapid response stuff. So they said: The event of Friday, July 19th was not triggered by sensor content, which is only delivered with the release of an updated Falcon sensor, meaning, you know, updated static drivers. They said customers have complete control over the deployment of the sensor, which includes sensor content and template types.
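The N / N-1 / N-2 policy described there is just version pinning. A minimal sketch of the idea, where the function name, policy labels and version strings are all mine for illustration and nothing here is CrowdStrike's actual API:

```python
def pick_sensor_version(available: list[str], policy: str) -> str:
    """Choose which sensor release a host installs under an
    N / N-1 / N-2 update policy. `available` is sorted
    oldest-to-newest; policy "N" means the latest release."""
    offset = {"N": 0, "N-1": 1, "N-2": 2}[policy]
    return available[-1 - offset]

# Hypothetical release list, newest last:
releases = ["7.09", "7.10", "7.11"]
print(pick_sensor_version(releases, "N"))    # 7.11
print(pick_sensor_version(releases, "N-2"))  # 7.09
```

The point Steve draws out is that this pinning applied only to sensor releases; rapid response content bypassed it and went to every endpoint at once.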

Rapid response content is used to perform a variety of behavioral pattern matching operations on the sensor using a highly optimized engine. Right, because you don't want to slow the whole Windows operating system down as it takes time out to analyze everything that's going on. So they said: Rapid response content is a representation of fields and values, with associated filtering. This rapid response content is stored in a proprietary binary format that contains configuration data. It is not code or a kernel driver. But I'll just note, unfortunately, it is interpreted, and how much time have we spent on this podcast talking about interpreters going wrong?

They said rapid response content is delivered as template instances, which are instantiations of a given template type. Each template instance maps to specific behaviors for the sensor to observe, detect or prevent. Template instances have a set of fields that can be configured to match the desired behavior. In other words, template types represent a sensor capability that enables new telemetry and detection, and their runtime behavior is configured dynamically by the template instance. In other words, specific rapid response content. Rapid response content provides visibility and detections on the sensor without requiring sensor code changes. This capability is used by threat detection engineers to gather telemetry, identify indicators of adversarial behavior and perform detections and preventions.

Rapid response content is behavioral heuristics, separate and distinct from CrowdStrike's on-sensor AI prevention and detection capabilities. Rapid response content is delivered as content configuration updates to the Falcon sensor. There are three primary systems: the content configuration system, the content interpreter and the sensor detection engine. The content configuration system is part of the Falcon platform in the cloud, while the content interpreter and sensor detection engine are components of the Falcon sensor, in other words, running in the kernel. So we've got a content interpreter running in the kernel. What could possibly go wrong? Well, we found out. They said: The content configuration system is used to create template instances, which are validated and deployed to the sensor through a mechanism called channel files. The sensor stores and updates its content configuration data through channel files, which are written to disk on the host.

The content interpreter on the sensor reads the channel files and interprets the rapid response content, enabling the sensor detection engine to observe, detect or prevent malicious activity, depending on the customer's policy configuration. The content interpreter is designed to gracefully handle exceptions from potentially problematic content. I'm going to reread that sentence, because that's what failed: The content interpreter is designed to gracefully handle exceptions from potentially problematic content. Except in this instance, as we know, it did not. And they finish with: Newly released template types are stress tested across many aspects, such as resource utilization, system performance impact and event volume. For each template type, a specific template instance is used to stress test the template type by matching against any possible value of the associated data fields to identify adverse system interactions. In other words, that's the nice way of saying crashes. And template instances are created and configured through the use of the content configuration system, which includes the content validator that performs validation checks on the content before it is published. Okay, now, I've read this a total of, I think, maybe five times, and I now finally feel like I understand it. So, you know, don't be put off if that all just sounded like, what did he just say? I get it. They then lay out a timeline of events, which I'll make a bit easier to understand by interpreting what they wrote. So recall that there was mention of named pipes being involved with this trouble, and I explained that named pipes are a very common means for different independent processes to communicate with each other within Windows. So way back on February 28th of this year, CrowdStrike sensor version 7.11 was made generally available to customers. It introduced a new inter-process communication (IPC) template type which was designed to detect novel attack techniques that abused named pipes.
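For anyone who hasn't bumped into named pipes: they're a rendezvous point with a name that two otherwise unrelated parties can both open to talk. Windows named pipes live under \\.\pipe\ and use a different API, but a POSIX FIFO shows the same idea. This is a toy sketch for Linux or macOS; to keep it simple, two threads stand in for the two independent processes that would normally be involved:

```python
import os
import tempfile
import threading

def writer(path: str) -> None:
    # Opening a FIFO for writing blocks until a reader opens the other end,
    # which is the rendezvous behavior that makes named pipes useful for IPC.
    with open(path, "w") as fifo:
        fifo.write("hello through the named pipe\n")

# Create the named pipe: a filesystem entry, not a regular file.
path = os.path.join(tempfile.mkdtemp(), "demo_fifo")
os.mkfifo(path)

t = threading.Thread(target=writer, args=(path,))
t.start()
with open(path) as fifo:          # rendezvous with the writer
    print(fifo.read().strip())    # hello through the named pipe
t.join()
```

The "abuse" CrowdStrike was trying to detect involves malware using exactly this kind of channel for command-and-control or lateral movement, which is why watching named pipe activity is worthwhile telemetry.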
This release followed all of the sensor content testing procedures outlined above in that sensor content section. So again, this was an update essentially to the driver and all of its AI and heuristic stuff, and that happened on February 28th. A week later, on March 5th, a stress test of this newly created IPC template type was executed, they said, in our staging environment, which consists of a variety of operating system workloads. The IPC template type passed the stress test and was thereby validated for use.

Later that same day, following the successful stress testing, the first of the IPC template instances was released to production as part of a content configuration update. So, like what happened on this fateful Friday, that happened back on March 5th for the first time, for this new IPC template type. They said: After that, three additional IPC template instances were deployed between April 8th and April 24th. These template instances performed as expected in production.

Then, on that fateful day of Friday, July 19th, two additional IPC template instances were deployed, much as multiples of those had been in March and April. One of the two deployed on July 19th was malformed and should never have been released, due to a bug in the Content Validator (and, as we'll see, also in the interpreter). That malformed IPC template instance erroneously passed validation despite containing problematic content data. They said: based on the testing performed before the initial deployment of the new IPC template type back on March 5th, and trusting the checks performed by the Content Validator and the several previous successful IPC template instance deployments, on Friday the 19th these two instances, one of them malformed, were released and deployed into production.

When received by the sensor and loaded into the content interpreter, which is in the kernel, problematic content in Channel File 291 resulted in an out-of-bounds memory read, triggering an exception. This unexpected exception could not be gracefully handled, resulting in a Windows operating system crash, the infamous Blue Screen of Death, followed by the endless attempts to recover. And Leo, we're going to talk about how they'll prevent it from happening again.
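To make that failure class concrete: what CrowdStrike describes is an interpreter indexing past the end of the data a content file actually supplies. Here is a minimal Python sketch, purely illustrative (the field names and counts are invented, and this is obviously not CrowdStrike's kernel C code), of the difference between an interpreter that trusts its input and one that validates before reading:

```python
# Hypothetical sketch of the out-of-bounds failure class described above.
# Field names and counts are invented for illustration.

def read_field_unsafe(fields, index):
    # Trusts the template: if the content file supplies fewer fields than
    # the template references, this raises IndexError. In kernel C code,
    # the analogous bug is an out-of-bounds memory read that crashes the OS.
    return fields[index]

def read_field_safe(fields, index, default=None):
    # Validates before reading: a malformed content file degrades
    # gracefully instead of faulting.
    if 0 <= index < len(fields):
        return fields[index]
    return default

template_references = 10          # fields a (hypothetical) template expects
content_supplies = ["x"] * 9      # malformed file: one field short

# The safe reader returns a sentinel instead of faulting.
assert read_field_safe(content_supplies, template_references - 1) is None
```

In user space an uncaught IndexError just kills one process; in the kernel the equivalent fault takes down the whole machine, which is why "gracefully handle exceptions from potentially problematic content" was the load-bearing sentence.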

0:41:44 - Leo Laporte
But let's take a break so I can, uh, catch my breath and sip a little caffeine. Bessie in our YouTube chat says: how can all those PCs fail, but no PCs at CrowdStrike itself fail? Don't they use CrowdStrike at CrowdStrike? Uh, I'm curious, and I'm sure you'll cover this, how quickly they figured out that this update was causing a problem. Um, surely... maybe CrowdStrike was closed for the day, I don't know. It happened in Australia.

0:42:17 - Steve Gibson
It did happen in the wee hours of the morning in the United States. It's wild, it's just wild.

0:42:23 - Leo Laporte
Anyway, we'll get to the post-mortem in just a moment. But first, a word from our sponsor, ACI Learning. You know their name as IT Pro. We've talked about IT Pro for years. IT Pro is now provided by our friends at ACI Learning, but it's the same binge-worthy video-on-demand IT and cybersecurity training we've talked about all this time.

With IT Pro you get cert-ready, certification-ready, with access to a library of more than 7,250 hours of up-to-date, complete, informative, engaging training. There's nothing old in this training; they're updating it all the time. That's why they have so many studios. What is it, seven or eight studios running all day, Monday through Friday? They're always updating it, because the tests change, the certs change, the programs you're using change, new versions come out. It's nonstop. It's like painting the Golden Gate Bridge: they never finish. They never get all the videos done before they have to start over. But that's good for you, because it means those 7,250 hours of training are as up to date as possible.

Premium training plans also include, besides the videos, practice tests, which, I will tell you from my own experience, are invaluable. The best way to take an exam is to take practice tests ahead of time. It prepares you for the format, gives you the confidence that you know the material, and, if you don't know it, lets you know pretty quickly: back to the books. They also offer virtual labs, which have so many useful features. First of all, you can set up a complete enterprise environment, with Windows servers and everything, without having any of it. All you need is an HTML5-compatible browser. You could do it on a Chromebook, and if you break something, it's okay, because you just, you know, close that tab and move on with your life. A lot of MSPs also use this to test and trial new software, new setups, new configurations. It's a really nice tool. IT Pro from ACI Learning makes training fun. All the videos are produced in an engaging talk-show format by experts in the field whose passion for what they're teaching comes through; you're engaged, and you start to get passionate about it as well. Super valuable.

You could take your IT or cyber career to the next level by being bold and training smart with ACI Learning. Now we've got a great deal for you: visit go.acilearning.com/twit and use the offer code SN100. Okay, SN for Security Now, 100. Enter that at checkout for 30% off your first month, or your first year if you sign up for a year of IT Pro training. Can I make a recommendation? That's the most savings, and it's worth it; you will not regret it. So many of our listeners got their certs, got their first jobs in IT, got better jobs in IT from IT Pro. We know it works. go.acilearning.com/twit, offer code SN100. Thank you, ACI Learning, for supporting Steve and the work he's doing here. Steve, I'm trying to decide. I'm packing up stuff; as you probably noticed, stuff's starting to disappear from the studio. I'm trying to decide: should I take this needlepoint that says "Look Humble"? Can't decide.

0:45:56 - Steve Gibson
Where did it come from? Does it have a special meaning for you?

0:46:01 - Leo Laporte
No, it wasn't my work or my grandma's. Probably a listener sent it to me. I just always thought it was very funny. I've always had it. I am definitely taking the Nixie clock.

0:46:10 - Steve Gibson
You cannot have a studio without a Nixie clock.

0:46:12 - Leo Laporte
No, I'm keeping the Nixie clock, definitely. Everything that blinks I'm taking. But I am not taking this darn clock, which has been the bane of my existence since the Brick House. People get mad when that clock, that digital clock, is not visible on the show. But I have other clocks. I'm not going to bring that one. We'll see.

0:46:31 - Steve Gibson
Well, I guess the question is also, how much room do you have?

0:46:35 - Leo Laporte
That's the point. You know, pretty much everything you see behind me I'm going to leave here. There's a company, a liquidator; they come in and they sell what they can, donate what they can, recycle what they can, and toss the rest. And I guess that's all we can do. This is a crazy amount of stuff we've accumulated over the years, as one does.

We are opening the studio on the 8th to any Club TWiT members who want to come and get something. Come on down; we're blowing it out to the bare walls. You know what I'm giving away, and I hate to do it, but I have that giant demonstration slide rule. I don't think you can see it; you just see the bottom of it. It's one of those yellow...

0:47:20 - Steve Gibson
I do see the bottom of it with the plastic slider, but where am I going to put that?

0:47:26 - Leo Laporte
Yeah, hanging off the roof, I don't know. Yeah, I only have, you know, a tiny little attic studio. So a lot of stuff getting left behind.

0:47:35 - Steve Gibson
I'm sad to say, some of my high school buddies are actively lightening their load. Like, you know, one guy deliberately converted his huge CD collection over to audio files and threw away all the CDs. I mean, it hurt to do it, right? Because those are like...

0:47:54 - Leo Laporte
Lifetime. It's a lifetime of collecting.

0:47:56 - Steve Gibson
Yeah, but it's like eh, you know, I want to travel more. I don't want to like.

0:48:00 - Leo Laporte
Am I going to dump this on my kids? I don't want to do that. So yeah, there's a book about it, called Swedish Death Cleaning, where you prepare for your death and do your heirs a favor: you get rid of the stuff that you don't think they'll be interested in. It's hard, though. Yeah, I don't know what those PDP-8s that I've got are gonna... I'm taking my two Raspberry Pi devices and the PiDP-8. I'm absolutely taking those. They're too cool.

0:48:30 - Steve Gibson
Blinking lights have got to stay. And I have been remiss in not telling our listeners, just because so much has been happening on the podcast, that the guy who did the fantastic PiDP-8 has done a PDP-10, and it is astonishing. I'll have to make time to cover it; I don't know when, the podcast has just been so crazy lately. But the emulated PDP-10: he gathered all the software from MIT.

0:49:05 - Leo Laporte
It was Oscar Vermeulen, right? Oscar, yes.

0:49:09 - Steve Gibson
Oscar has done a PDP-10 also with a little Raspberry Pi running behind it, a gorgeous injection molded PDP-10 console recreation. So there's the 8.

0:49:22 - Leo Laporte
Yeah, this is the one we have.

0:49:25 - Steve Gibson
And the 10.

0:49:27 - Leo Laporte
Is there a link to it there? His vintage computer collection. I think this is.

0:49:33 - Steve Gibson
Yeah, he gave it his own website.

0:49:36 - Leo Laporte
Oh, okay, I'll find that.

0:49:38 - Steve Gibson
I'm surprised he's not linked to it, but you might put in PDP-10 recreation or something.

0:49:44 - Leo Laporte
Yeah, let's see what I can find... because, oh my goodness: Obsolescence Guaranteed.

0:49:50 - Steve Gibson
Look at that.

0:49:52 - Leo Laporte
Oh, okay, Oscar.

0:49:55 - Steve Gibson
I want it. It is astonishing. Oh, it's beautiful. It's very Star Trek.

0:50:01 - Leo Laporte
That is Star Trek.

0:50:03 - Steve Gibson
It's an injection molded, you know, full working system. All of the software is there. You're able to connect a normal PC to it so you can work with a console and keyboard.

0:50:16 - Leo Laporte
Oh, it's got Adventure. Uh-oh, I might have to buy this.

0:50:21 - Steve Gibson
So it's running a.

0:50:23 - Leo Laporte
Raspberry Pi, but that's the same performance as a PDP-10?

0:50:27 - Steve Gibson
Oh, it blows the PDP-10 away. He had to slow it down in order to make it match.

0:50:35 - Leo Laporte
Oh, this is so cool. I mean the original software running.

0:50:41 - Steve Gibson
And so there they were, comparing the operation of their console and emulator to a real one. In fact, I was thinking about this because Paul Allen was selling some of the original machines, right?

0:50:57 - Leo Laporte
Oh wow. Oscar Vermeulen: obsolescence.wixsite.com is Obsolescence Guaranteed, and you can build the kit.

0:51:09 - Steve Gibson
You can buy it or build the kit. Yeah, it's embarrassingly inexpensive, again. And oh boy, he and his buddy were out demonstrating it to the Boston Computer Museum. Oh wow. And he said, Steve, could you make time for us to show you? So he came and set this up, plugged its HDMI output into our screen in our family room, and gave Lori and me a full demonstration of the operation of this.

0:51:46 - Leo Laporte
Did Lori know what she was getting into when she married you?

0:51:50 - Steve Gibson
And later we're going to have somebody demonstrate a replica PDP-10.

0:51:54 - Leo Laporte
Won't that be fun. I love it. Well Oscar, well done. I'm glad he did it again.

0:52:03 - Steve Gibson
Yeah, I got the email too.

0:52:05 - Leo Laporte
And I guess I'll have to build one, because the PiDP is already on the set in the attic, and I think I have to upgrade.

0:52:13 - Steve Gibson
Oh, just look at that console.

0:52:15 - Leo Laporte
It is just gorgeous.

0:52:16 - Steve Gibson
It was funny too, because her 28-year-old son, Robert, happened to also be there. Oh, and he was watching this, and he'd never seen a console before, right, with like lights and switches, and he said: are those bits? Good question.

0:52:37 - Leo Laporte
it was.

0:52:38 - Steve Gibson
It would never have occurred to me to ask that question, but it's like, yes, those are bits. Those are what bits are, you know? Those individual lights turning on and off, and the switches, are bits.

Wow. And what's cool is that, whereas the PDP-8 is a pain to program, because you've only got a three-bit opcode, so you get just eight instructions, the PDP-10, oh, it's a 36-bit system, and it's got a gorgeous instruction set. I mean, really, just a joy. So this is a complete recreation.
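For anyone curious, that "three-bit opcode" is literal: a PDP-8 instruction is a 12-bit word whose top three bits select one of the machine's eight basic operations. A quick Python sketch of the decode:

```python
# The PDP-8's 12-bit instruction word: the top three bits (bits 11..9)
# are the opcode, giving eight basic instructions, opcodes 0 through 7.
PDP8_OPCODES = ["AND", "TAD", "ISZ", "DCA", "JMS", "JMP", "IOT", "OPR"]

def opcode(word):
    """Return the mnemonic for a 12-bit PDP-8 instruction word."""
    return PDP8_OPCODES[(word >> 9) & 0o7]

assert opcode(0o1234) == "TAD"   # octal 1xxx: two's-complement add
assert opcode(0o5000) == "JMP"   # octal 5xxx: jump
```

Everything else, addressing mode, page bit, device and operate fields, has to be squeezed into the remaining nine bits, which is exactly why Steve calls it a pain to program compared to the PDP-10's roomy 36-bit words.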

0:53:19 - Leo Laporte
You can be using it with its editors and its compilers, and that's what you want, right, Steve? There, by the way, are Oscar and his colleague, uh, showing off the entire line; Oscar's on the left, at the Vintage Computer Festival. Very cool, very cool. So they've got a 1, an 8, an 11, and a 10.

0:53:48 - Steve Gibson
Yes, and he says the 1 was at that time in prototype; the 10 is finished, and they're now working on a PDP-1. And I have to tell you, he credits this podcast with changing this from a hobby to a business, because there was so much interest shown in the PiDP-8, and then in the 11, that they turned it into a business. And so, anyway, I'm glad that this came up, because I felt badly that I have not found time, because, I mean, our podcasts have been running more than two hours.

0:54:29 - Leo Laporte
I know, I know, recently. But I'm so glad, because I wanted to get this in. So I'm glad we could mention it, as I pack up my PiDP and bring it home.

0:54:39 - Steve Gibson
And gentlemen listening, if you saw the actual size, it's not a huge thing, so it does store in the closet... if you're talking about how patient Lori is with me.

0:54:53 - Leo Laporte
Don't put it on the dining room table, whatever you do. I know you're tempted, gentlemen, but don't. Yeah, it's the white one right here. Yep, exactly.

0:55:05 - Steve Gibson
So it's what.

0:55:06 - Leo Laporte
It's about a couple of feet wide, maybe. It is a scale...

0:55:09 - Steve Gibson
Yes, it's a scale-size replica of the console of the original PDP-10. Yeah, and I have that in back, loaded with all of the original software. They even have one guy who specializes in recovering data from unreadable nine-track magnetic tapes, and so they were getting mag tapes. You can see one right there behind my head; that is a nine-track magnetic tape that actually came from SAIL, Stanford's Artificial Intelligence Lab, and that's got my code on it.

0:55:48 - Leo Laporte
Oh, that's really cool.

0:55:52 - Steve Gibson
Wow. So they went back and recreated the original files, in some cases hand-editing typos out in order to get everything to recompile again, because there was no preservation project until now. They really did a beautiful job. Congratulations. Okay, so, following on what happened: now, back to the bad news. Naturally, CrowdStrike wants to explain how this will never happen again.

0:56:33 - Leo Laporte
Yeah.

0:56:41 - Steve Gibson
So, under a subhead of Software Resiliency and Testing, they've got bullet points, and I have to say that this first batch sounds like the result of a brainstorming session rather than an action plan. They have: improve Rapid Response Content testing (that was the problematic download, right?) by using testing types such as local developer testing, content update and rollback testing, stress testing, fuzzing and fault injection, stability testing, and content interface testing. And of course many people would respond to that: why weren't you doing all that before? Unfortunately, that's the generic response to anything they say they're now going to do: well, why weren't you doing that before? Anyway, they also have: add additional validation checks to the Content Validator for Rapid Response Content; a new check is in progress to guard against this type of problematic content being deployed in the future. Good. Enhance existing error handling in the content interpreter. Right, because it was ultimately the interpreter that crashed the entire system when it was interpreting some bad content, so that should not have been able to happen. And then, under the subhead of Rapid Response Content Deployment, they've got four items: implement a staggered deployment strategy for Rapid Response Content, in which updates are gradually deployed to larger portions of the sensor base, starting with a canary deployment. Improve monitoring for both sensor and system performance, collecting feedback during Rapid Response Content deployment to guide a phased rollout. Provide customers with greater control over the delivery of Rapid Response Content updates by allowing granular selection of when and where these updates are deployed. And provide content update details via release notes, which customers can subscribe to.
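That staggered, canary-first rollout is easy to picture in code. Here's a minimal Python sketch (the wave fractions and the health check are invented for illustration; this is not CrowdStrike's actual mechanism): deploy to progressively larger slices of the fleet, and halt the moment a wave looks unhealthy.

```python
# Hypothetical sketch of a staggered (canary) rollout. Wave sizes and
# the health check are invented, purely to illustrate the idea.

def staggered_rollout(fleet, waves, healthy):
    """Deploy to progressively larger slices of `fleet`; stop at the
    first wave whose deployment fails the health check."""
    deployed = []
    start = 0
    for fraction in waves:                  # e.g. 1%, 10%, 50%, 100%
        end = min(len(fleet), start + max(1, int(len(fleet) * fraction)))
        wave = fleet[start:end]
        deployed.extend(wave)
        if not healthy(wave):               # crashes observed: halt rollout
            return deployed, False
        start = end
    return deployed, True

hosts = [f"host{i}" for i in range(100)]
# A health check that fails as soon as 'host0' (the canary) is hit:
deployed, ok = staggered_rollout(hosts, [0.01, 0.10, 0.50, 1.0],
                                 healthy=lambda wave: "host0" not in wave)
assert not ok and deployed == ["host0"]    # only 1% of the fleet affected
```

Had the July 19th push worked this way, a bad channel file would have blue-screened a canary slice and stopped, instead of reaching the whole fleet at once.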

And I have to say, kind of reading between the lines: again, you know, programmers have egos, right? We write code and we think it's right, and then it's tested and the testing agrees that it's right. And it's difficult, without evidence of it being wrong, to go overboard. They did have systems in place to catch this stuff; it turns out, in retrospect, something got past that system, or those systems. Now they know that what they had was not good enough, and they're making it better. More? Yes. But can't you always do more? And then one could argue, in that case, if it's in any way possible for something to go wrong, then shouldn't you prevent that? Well, they thought they had. They thought that the content interpreter was bulletproof; it was an interpreter, it would find any problems and refuse to interpret them if they were going to cause a problem. But there was a bug in that, so this happened. Okay.

So the bottom line to all of this, I think, is that CrowdStrike now promises to do what it should have been doing all along, like the staggered deployment. Again, I mean, that's indefensible, right? Why were you guys not incrementally releasing this? Well, it's because they really and truly believed that nothing could cause this to happen. They really thought that; they were wrong. But it wasn't negligence, in the same sense that most programmers don't release buggy code, right? They fix the bugs. Microsoft is an exception; they've got a list of 10,000 known bugs when they release Windows, but they're small, and they figure they won't actually hurt anybody. They're not showstoppers; they actually use that term. So it's like, okay, fine, it works. So is this another of those small earthquake tremors I've been recently talking about? I guess it would depend upon whom you ask.

The source of the problem was centralized, but the remediation of the problem was widely distributed. Eight and a half million machines; several hundred thousand individual technicians got to work figuring out what had happened to their own networks and workstations, and each was repairing the machines over which they had responsibility. Because initially, you know, CrowdStrike, as we saw, ended up coming up with a cool cloud-based solution, but initially, no, that didn't exist. Presumably it will now be deployed in some sort of permanent fashion. And, as we know, in the aftermath some users of Windows appear to have been more seriously damaged than others. In some cases, machines that were rebooted repaired themselves by retrieving the updated and repaired template file, whereas in other situations, such as Delta Airlines, the effects from having Windows systems crashing lasted days.

I have no direct experience with CrowdStrike, but not a single one of our listeners from whom we have heard, even after enduring this pain, sounded like they would prefer to operate without CrowdStrike-level protection and monitoring in the future, and I think that's a rational position. No one was happy that this occurred, but it really is foreseeable that CrowdStrike has learned a valuable lesson about using belts, suspenders, Velcro, and probably some epoxy. They may have grown a bit too comfortable over time, but I'll bet that's been shaken out of them today, and I have no doubt that they've really raised the bar on preventing this from ever happening to them again. Another little bit of feedback, since it's relevant to the CrowdStrike discussion: I wanted to share what another listener of ours, Vernon Young, wrote. "I am the IT director for a high school and manage 700 computers and 50 virtual servers."

"A few weeks before," and get this, Leo, "a few weeks before the Kaspersky ban was announced, I placed a $12,000 renewal order with Kaspersky." $12,000, which will now be lost, since the software won't work after September, he wrote. He said: "After the ban was announced, I started looking for alternatives. I decided on Thursday, July 18th, to go with CrowdStrike." Oh my God.

1:04:34 - Leo Laporte
The day before.

1:04:40 - Steve Gibson
Oh my God. He says: "The day before, I didn't feel like walking to the copier to scan the purchase order to send to the sales rep before I left for the day. Needless to say, I changed my mind Friday morning."

I cannot imagine being Vernon, our listener, needing to corral a high school campus full of mischievous and precocious high schoolers (people like you, Leo), who believe, as high schoolers will, that they're more clever than the rest of the world, and whose juvenile brains' sense of right and wrong hasn't yet had the chance to fully develop. But think about the world Vernon is facing. He invests $12,000 to renew the school's Kaspersky AV system license, only to have that lost. Then he decides that CrowdStrike looks like the best alternative, only to have it collapse the world. For what it's worth, I stand by the way I ended that CrowdStrike discussion. I would go with them today.

Yes. All of the feedback we've received suggests that they are top-notch, and that they're very clearly raising the bar to prevent another mistake like this from ever slipping past. There's just no way that they haven't really learned a lesson from this debacle, and I think that's all anyone can ask at this point. And the fact is, our listeners are telling us they are the best there is.

1:06:22 - Leo Laporte
There's no other choice, no choice.

1:06:26 - Steve Gibson
Microsoft is number two, and they don't hold a candle, right? I mean, certainly all of the systems that we talk about succumbing to ransomware are at least running Microsoft Defender, and that's not helping them. Okay. So who is to blame? Our wonderful hacker friend, Marcus Hutchins, posted a wonderfully comprehensive 18-minute YouTube video which thoroughly examines, explores and explains the history of the still-raging three-way battle among Microsoft, third-party AV vendors and malware creators. That video is on YouTube; it's this week's GRC shortcut of the week, so your browser can be redirected to it if you go to grc.sc/985, today's episode number. GRC.sc, as in shortcut: grc.sc/985. When you go there, be prepared for nearly 18 minutes of high-speed, non-stop, perfectly articulated techie detail, because that's what you're going to get. I'll summarize what Marcus said.

Since the beginning of Windows, the appearance of viruses and other malware, and the emergence of a market for third-party antivirus vendors, there's been an uncomfortable relationship between Microsoft and third-party AV. To truly get the job done correctly, third-party antivirus has always needed deeper access into Windows than Microsoft has been willing or comfortable to give, and the CrowdStrike incident shows what can happen when a third party makes a mistake in the Windows kernel. But it is not true, Marcus says, that third-party antivirus vendors can do the same thing as Microsoft can without being in the kernel. This is why Microsoft themselves do not use the APIs they have made available to other antivirus vendors: those APIs do not get the job done. And the EU did not say that Microsoft had to make the kernel available to third parties; the EU merely said that Microsoft needed to create a level playing field, where the same features would be available to third parties as Microsoft was using themselves.

Since Microsoft was unwilling to use only their own watered-down antivirus APIs and needed access to their own OS kernel, third parties got that same kernel access. Marcus does recognize that this area of Windows has been constantly evolving since then, and that Windows 10 1703 made some changes that might offer some hope of a more stable world in the future. The problem, of course, is that third parties still need to be offering their solutions on older Windows platforms, which are still running just fine, refuse to die, and may not be upgradable to later versions. So Marcus holds Microsoft responsible; that's his position, and I commend our listeners to go to grc.sc/985 to get all the details. Anyone who has a techie bent will certainly enjoy him explaining pretty much what I just have, in his own words. And Leo, we're at an hour.

Let's take another break, and then we're going to look at what happened during Entrust's recent webinar, where they explain how they're going to hold on to their customers.

1:11:11 - Leo Laporte
Some of these never-ending stories are really quite amusing, I must say. Our show today is brought to you by Panoptica. Panoptica is Cisco's cloud application security solution. It provides end-to-end lifecycle protection for cloud-native application environments. It empowers organizations to safeguard their APIs, their serverless functions, their containers, their Kubernetes environments, and on and on. Panoptica ensures comprehensive cloud security, not to mention compliance and monitoring at scale, offering deep visibility, contextual risk assessments and actionable remediation insights for all your cloud assets. Powered by graph-based technology, Panoptica's attack path engine prioritizes and offers dynamic remediation for vulnerable attack vectors, helping security teams quickly identify and remediate potential risks across cloud infrastructures. A unified cloud-native security platform minimizes gaps from multiple solutions. It provides centralized management and reduces non-critical vulnerabilities from fragmented systems. You don't want that. You want Panoptica. Panoptica utilizes advanced attack path analysis, root cause analysis and dynamic remediation techniques to reveal potential risks from an attacker's viewpoint. This approach identifies new and known risks, emphasizing critical attack paths and their potential impact. Panoptica has several key benefits for businesses at any stage of cloud maturity, including advanced CNAPP, multi-cloud compliance, end-to-end visualization, the ability to prioritize with precision and context, dynamic remediation, and increased efficiency with reduced overheads.

It sounds like a lot. It is a lot. That's why you, right now, should go to panoptica.app and learn more. P-A-N-O-P-T-I-C-A, panoptica.app. Secure your cloud with confidence with Panoptica. We thank Panoptica so much for supporting Steve's hard work at Security Now. See, you know, you talk about, well, are there options to Bitwarden? There are, and a lot of them are seeing this as an opportunity, right? This is the time. By the way, did my clock disappear? I think stuff's leaving the building.

1:13:46 - Steve Gibson
Yeah, the Nixie clock is gone.

1:13:47 - Leo Laporte
I had an interesting giant sword that seems to have disappeared as well. So if you see somebody going down the street with a sword about yea long, you might call the authorities.

1:13:59 - Steve Gibson
Looks like the ukulele is still there, though.

1:14:01 - Leo Laporte
The uke. No one's taking the uke. I don't know why.

1:14:06 - Steve Gibson
All right, continue on, my friend. Entrust held a 10 a.m. webinar last week, which included the description of their solution with the partnership we mentioned last week with SSL.com. It was largely what I presumed from what they had said earlier, which was that, behind the scenes, the certificate authority SSL.com would be creating and signing the certificates that Entrust would be purchasing from SSL.com and effectively reselling. There were, however, two additional details that were interesting. Before any certificate authority can issue domain validation certificates, the applicant's control over the domain name in question must be demonstrated. So, for example, if I want to get a certificate for GRC.com, I need to somehow prove to the certificate authority that GRC.com is under my control. I can do that by putting a file that they give me on the root of the server, the web server that answers at GRC.com, showing that, yeah, that's my server, because they asked me to put this file there and I did. I can put a text record into the DNS for GRC.com, again proving that I'm the guy who's in charge of GRC.com's DNS. And there's a weak method, the weakest, which is to have email sent from the GRC.com domain; but when all else fails, you can do that. So, anyway, you need to somehow prove you own the domain, that you have control over it. It turns out SSL.com is unwilling to take Entrust's word for that. So the first wrinkle, for Entrust customers who wish to purchase web server certificates from Entrust after this coming October 31st, is that they will need to prove their domain ownership not to Entrust, as they have in the past, but to SSL.com. Not surprising, but still not quite what Entrust was hoping for.
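The file-based domain-control method Steve describes can be sketched simply: the CA hands the applicant a random token, the applicant publishes it at an agreed URL on the domain, and the CA fetches and compares. The path and token format in this Python sketch are illustrative assumptions, not any particular CA's actual scheme:

```python
# Sketch of the file-based domain-control check described above.
# The URL path and token format here are assumptions for illustration.
import secrets

def issue_challenge():
    """CA side: generate the random token the applicant must publish."""
    return secrets.token_urlsafe(32)

def validation_url(domain, token_filename="ca-challenge.txt"):
    # e.g. the CA would fetch something like
    # http://grc.com/.well-known/pki-validation/ca-challenge.txt
    return f"http://{domain}/.well-known/pki-validation/{token_filename}"

def check_control(expected_token, fetched_body):
    """CA side: does the file the server returned contain our token?"""
    return fetched_body.strip() == expected_token

token = issue_challenge()
assert check_control(token, token + "\n")       # applicant published it
assert not check_control(token, "some-other-token")
```

Only someone who can place content on the server answering for the domain (or, in the DNS variant, someone who controls the domain's zone) can pass the check, which is the whole point of the exercise.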

The second wrinkle is that Entrust does not want SSL.com's name to appear in the web browser when a user inspects the security of their connection to see who issued a site's certificate. No, it's got to be Entrust. So, although SSL.com will be creating each entire certificate on behalf of Entrust, they've agreed to embed an Entrust intermediate certificate into the certificate chain. Since web browsers only show the signer of the web server's final certificate in the chain, by placing Entrust in the middle, SSL.com will be signing the Entrust intermediary, and Entrust's intermediary will be signing the server's domain certificate. In this way, it will be Entrust's name that will be seen in the web browser by anyone who is checking.
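That chain arrangement can be modeled in a few lines. The point is that a browser's "issued by" display reflects the issuer of the leaf certificate, so placing an Entrust-branded intermediate between SSL.com's root and the server certificate puts Entrust's name in front of the user. This toy Python model (invented names, no real cryptography) just illustrates the ordering:

```python
# Toy model of the chain arrangement described above: each certificate
# records its subject and its issuer, and a browser's "issued by" field
# shows the issuer of the *leaf* (server) certificate only.
from collections import namedtuple

Cert = namedtuple("Cert", ["subject", "issuer"])

def visible_issuer(chain):
    """chain[0] is the server (leaf) certificate."""
    return chain[0].issuer

# SSL.com's root signs an Entrust-branded intermediate,
# which in turn signs the site's leaf certificate.
chain = [
    Cert(subject="grc.com",              issuer="Entrust intermediate"),
    Cert(subject="Entrust intermediate", issuer="SSL.com root"),
    Cert(subject="SSL.com root",         issuer="SSL.com root"),
]
assert visible_issuer(chain) == "Entrust intermediate"
```

The trust still ultimately anchors at SSL.com's root, but the casual certificate-viewer dialog shows the Entrust name, which is exactly the cosmetic outcome Entrust negotiated.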

So, you know, the webinar was full of all of this: how they're going to get back into the good graces of the CA/Browser Forum, and all the steps they're taking, and blah, blah, blah. We'll see how that goes with the passage of time. For now, that's what they're doing in order to, in every way they can, hold on to the customers that they've got who've been purchasing certificates. I should mention that the webinar also explained that all of the existing mechanism of using Entrust is in place. Everything is Entrust-centric with, whoops, the exception of needing to prove domain ownership to somebody else. No way around that one, Leo. This next bit actually came as a result of our talking about the GRC cookie forensics stuff last week.

Something is going on with Firefox that is not clear and is not good. After last week's discussion of third-party cookies, and you playing with GRC's cookie forensics pages, several people commented that Firefox did not appear to be doing the right thing when it came to blocking third-party cookies in what it calls strict mode. Strict mode is what I want but, sure enough, strict mode behavior does not appear to be what I'm getting. Under Firefox's Enhanced Tracking Protection, we have three overall settings: Standard, Strict and Custom. Standard is described as "balanced for protection and performance; pages will load normally." In other words: third-party cookies, we love you; anybody who wants one can munch on one. Strict is described as "stronger protection, but may cause some sites or content to break." Firefox details this further by claiming, it says, Firefox blocks the following: social media trackers, cross-site cookies in all windows, tracking content in all windows, cryptominers and fingerprinters. Well, that all sounds great. The problem is, it does not appear to be working at all under Firefox. This issue initially came to my attention in our old-school newsgroups, where I hang out with a great group of people, so it grabbed a lot of attention, and many others have confirmed it, as have I. Firefox's strict mode is apparently not doing what it says, and what we want and expect. It says "cross-site cookies in all windows." That's not working. Chrome and Edge work perfectly.

In order to get Firefox to actually block third-party cookies, it's necessary to switch to Custom mode and tell it that you want to block cookies. Then, under which types of cookies to block, you cannot choose "Cross-site tracking cookies." I mean, you can, but it doesn't work. You need to turn the strength up higher. So, "Cross-site tracking cookies, and isolate other cross-site cookies"? Nope, that doesn't work either. Neither does "Cookies from unvisited websites." Nope, still doesn't work. It's necessary to choose Custom mode and then the cookie blocking selection of "All cross-site cookies," which, it then says in parens, may cause websites to break. Once that's done, GRC's cookie forensics page shows that no third-party session or persistent cookies are being returned from Firefox, just as happens with Chrome and Edge. When you tell them to block third-party cookies, they actually do. Firefox actually does not.

When I first wrote this code, when third-party cookies were disabled, some of the broken browsers had really weird behavior. It's why I'm testing eight different ways of setting cookies in a browser, because they used to all be jumbled up, and some worked and some didn't; some were broken, some weren't. So in some cases, when you told a browser to disable third-party cookies, it would stop accepting settings for new cookies, but if you still had any old cookies, you'd call them stale cookies, those would still be getting sent back. All of that behavior has been fixed, but it's broken under Firefox.

Looking at the wording, which specifically refers to cross-site tracking cookies, it appears that Firefox may be making some sort of value judgment about which third-party cross-site cookies are being used for tracking and which are not. That seems like a bad idea. I don't want any third-party cookies. Safari has had that all shut down for years, and Chrome and Edge will now do it if you tell them to. So what do they imagine? That they can, and have, somehow maintained a comprehensive list of tracking domains, and won't allow third-party cookies to be set by any of those? The only thing that comes to mind is some sort of heuristic, and all of that seems dumb. Just turn them off, like everybody else does.
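As a toy illustration of the distinction being discussed, here is a sketch of how any "block cross-site cookies" setting first has to decide whether a cookie is cross-site at all. The function names and the two-label shortcut are simplifications of mine, not Firefox's actual logic; real browsers compute the registrable domain (eTLD+1) using the Public Suffix List.

```python
# Toy model: a cookie is "third-party" (cross-site) when its domain's
# registrable domain differs from that of the page the user is visiting.
# The two-label heuristic below is a deliberate simplification; it is
# wrong for suffixes like .co.uk, where the Public Suffix List is needed.

def registrable_domain(host: str) -> str:
    """Naive eTLD+1: keep only the last two labels of the hostname."""
    return ".".join(host.lower().rstrip(".").split(".")[-2:])

def is_third_party(page_host: str, cookie_host: str) -> bool:
    """True when the cookie comes from a different site than the page."""
    return registrable_domain(page_host) != registrable_domain(cookie_host)

print(is_third_party("www.grc.com", "forensics.grc.com"))      # False: same site
print(is_third_party("www.example.com", "tracker.adnet.net"))  # True: cross-site
```

Note that this classification says nothing about whether a cross-site cookie is used for tracking; any further "tracking versus non-tracking" judgment, which is what Firefox's wording implies, would require a curated domain list or heuristics on top of this.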

You know, there may be more to what's going on here, though. One person in GRC's newsgroup said that they set up a new virtual machine, installed Firefox, and it is working correctly. If that's true, and it's not been confirmed, that would suggest that what we may have is another of those situations we have encountered in the past, where less secure behavior is allowed to endure in the interest of not breaking anything in an existing installation, whereas anything new is run under the new and improved security settings. But if so, that's intolerable too, because it appears to be happening completely silently.

That is, no sign of it is shown in the user interface, and if that's really what's going on, then Firefox's UI is not telling us the truth, which, anyway, is a problem. I wanted to bring all this up because it should be on everyone's radar, in case others, like me, were trusting and believing Firefox's meaning of the term "strict." I would imagine that some of our listeners will be interested enough to dig into this and see whether they can determine what's going on. As everyone knows, I now have an effective incoming channel for effortless sending and receiving of email with our listening community, so I'm really glad for that, and I'm getting lots of good feedback about it from our listeners.

1:25:09 - Leo Laporte
Okay.

1:25:10 - Steve Gibson
Leo, get a load of this one. PC Magazine brings us the story of a security training firm who inadvertently hired a remote software engineer, only to later discover that he was an imposter based in North Korea. They wrote: a US security training company discovered it mistakenly hired a North Korean hacker to be a software engineer, after the employee's newly issued computer became infected with malware. The incident occurred at KnowBe4, that's K-n-o-w-b-e and then the numeral 4, and apparently they didn't know before. KnowBe4 develops security awareness programs to teach employees about phishing attacks and cyber threats. So yeah, they're certainly a security-forward, security-aware company. The company recently hired a remote software engineer who cleared the interview and background check process, but last week KnowBe4 uncovered something odd after sending the employee a company-issued Mac. KnowBe4 wrote in a post last Tuesday, quote: the moment it was received, it immediately started to load malware. The company detected the malware thanks to the Mac's onboard security software. An investigation with the help of the FBI and Google's security arm Mandiant then concluded that the hired software engineer was actually a North Korean posing as a domestic IT worker. Fortunately, the company remotely contained the Mac before the hacker could use the computer to compromise KnowBe4's internal systems. Right, so he was going to VPN into their network and get up to some serious mischief.

When the malware was first detected, the company's IT team initially reached out to the employee, who claimed, quote, that he was following steps on his router guide to troubleshoot a speed issue, unquote. But in reality, KnowBe4 caught the hired worker manipulating session files and executing unauthorized software, including using a Raspberry Pi to load the malware. In response, KnowBe4's security team tried to call the hired software engineer, but he, quote, stated he was unavailable for a call and later became unresponsive. Yeah, I'll bet. Oh, I should also say: a stock photo of a Caucasian male was modified by AI to appear to be of Asian descent, and that's the photo this employee submitted as part of his hiring process. KnowBe4 says it shipped the work computer, and get this, to an address that is basically an "IT mule laptop farm," which the North Korean then accessed via VPN.

Although KnowBe4 managed to thwart the breach, the incident underscores how North Korean hackers are exploiting remote IT jobs to infiltrate US companies. In May, the US warned that one group of North Koreans had been using identities from over 60 real US citizens to help them snag remote jobs. The remote jobs can help North Korea generate revenue for their illegal programs and provide a way for the country's hackers to steal confidential information and pave the way for other attacks. In the case of KnowBe4, the fake software engineer resorted to using an AI-edited photo of a stock image to help them clear the company's interview process. So this should bring a chill to anyone who might ever hire someone sight unseen, based upon information that's available online, as opposed to the old-fashioned way of actually taking a face-to-face meeting to interview the person and discuss how and whether there might be a good fit. One of the things we know is going on more and more, we've talked about it recently, is domestic firms dropping their in-house teams of talent in favor of offshoring their needs as a means of increasing their so-called agility and reducing their fixed costs. This is one of those things that accountants think is a great idea, and I suppose there may be some places where this could work, but remote software development? I'd sure be wary about that one. The new tidbit that really caught my attention, though, was the idea of something described as an "IT mule laptop farm." Whoa. This is a benign location where the fake worker says they're located. Being able to receive a physical company laptop at that location further solidifies the online legend that this phony worker has erected for themselves. So this laptop is received by confederates who set it up in the IT mule farm and install VPN software, or perhaps attach the laptop to a remote KVM-over-IP system to keep the laptop completely clean.
Either way, this allows the fake worker to appear to be using the laptop from the expected location, when instead they're half a world away in a room filled with North Korean hackers, all working diligently to attack the West. I wish this was, you know, just a B-grade sci-fi movie, but it's all for real and it's happening as we speak. Wow, the world we're in today. Okay, some feedback from our listeners.

Robert said: Hello Steve, I'll try not to take too much of your time, but I'd like to mention one thing that irked me about the entire "Google trying to eradicate third-party cookies is a good thing" business. He said: TL;DR, Google tries to get rid of third-party cookies to gain a monopoly in the ad market, not to protect users. He says that; I don't see it that way, but okay. He said:

First and foremost, eradicating third-party cookies is a good thing as a vehicle to stop tracking of website visitors. The only reason why Google would be able to actually force website owners to move from their cookie-based ad strategies to something else (FLoC, Topics, labels, whatever they call it) is that they have a near monopoly in the browser market. Of course, I 100% agree with that: the only way the world would ever be able to drop third-party cookies would be if it was forced to do so, and at this time in history only Google is in the position, with the market power, to force such a change. Anyway, he goes on: it's important to keep in mind that Google is still a company that makes most of their money selling ads. Right, okay, agreed. Every move they have made so far smelled like they wanted to upgrade their browser monopoly into an ad tech monopoly. Okay, I would argue they already have that. He said: my suspicion is that it wasn't necessarily the ad companies directly that threatened Google about its plan to eradicate third-party cookies, but rather some pending monopoly concern about the ad market. But maybe I'm just too optimistic about that. So, well, just a thought I felt was a bit underrepresented. Thanks again for the work, Robert.

Okay, so the problem I have with Robert's analysis is that I cannot see how Google was giving itself any special privileges through the adoption of their privacy sandbox. While it is absolutely true that they were dramatically changing the rules, everything they were doing was a 100% open process, with open design, open discussion and open source, and they were also forcing themselves to play by those same rules that they were asking everyone else to play by. Contrast that with Microsoft, whom we were just talking about versus AV: Microsoft is unwilling to accept the same limitations they're asking the AV vendors to accept by using the API that Microsoft provides. Google is not doing that; they were going to use the same privacy sandbox that they're saying everyone else should. So there was no advantage that they had over any other advertiser just because it was their Chrome web browser, and had they been successful in bringing about this change, the other browsers would have eventually adopted the same open privacy sandbox technologies. But, as we know, that hasn't happened. I did not have time to address this fully last week due to the CrowdStrike event, so anyway, I'm glad for Robert's note. The EU bureaucrats' apparent capitulation to the slimy tracking and secretive user profiling underworld, which in turn forced Google's browser to retain its historically abused third-party cookie support, represents a massive privacy loss to the world. This was the wrong outcome, and I sincerely hope that it's only a setback that will not stand for long.

Lisa in Worcester, Massachusetts wrote: Steve, another intriguing podcast, many thanks. I find it interesting the influence Google has and doesn't have. It seems more powerful over one company, like Entrust, than over a whole market, like third-party cookies. Is it influence, or is it calculated cost-benefit analysis that helps Google, slash Alphabet, decide where to flex its muscles? Thoughts? From Worcester, Massachusetts, Lisa.

Okay, as an observer of human politics, I often observe the simple exercise of power. In US politics we see this all the time. Both major political parties scream at each other, crying foul and unfair, but they each do what they do simply because they want to, and they can, when they have the power to do so. And I suspect the same is true with Google. Google has more power than Entrust, but less power than the European Union, so Google was able to do what it wished with Entrust, whereas the EU had the power to force Google to do what it wished. And who knows what's really going on inside Google? I very much wanted to see, as we know, the end of third-party cookie abuse, but we don't really know that Google, or at least that all of Google, did.

Others are suspicious of Google's motives, and maybe they're right to be. The EU's pushback against Google's privacy sandbox might not be such a bad thing for Google. I would imagine that an entity the size of Google has plenty of its own internal politics and that not everyone may have identical motivations, so I'm sure that some of Google was pleased and relieved by this turn of events. But I mostly wanted to share Robert's and Lisa's notes as a segue to observing that this entire issue is more of a symptom than a cause, and that there's an underlying problem. The actual problem is that, unfortunately, the online collection and sale of personal information has become a large, thriving, highly profitable and powerful industry all unto itself, and it may now be too big to stop. This is what keeps the EFF awake at night. We know enough from our previous examination of the EU's extended decision process here to have seen that their decision was directly influenced by commercial interests that wanted the status quo to remain unchanged, and those interests were powerful enough to have their way.

The question then becomes: how will this ever change? The only thing more powerful than a minority of strong commercial interests is a majority of even stronger voting citizens, but people cannot dislike what they're unaware of, and the personal data collection industry has always been careful to remain in the shadows, since they know how vulnerable they would be if we, the larger public, were ever to learn a lot more about what was really going on. We've often observed on this podcast that conduct that goes unseen is allowed to occur in darkness, and we know that users sitting in front of browsers have nearly zero visibility into what's taking place right before their eyes on the other side of the screen. Those who perform this spying and data collection claim that no one really cares, but the only reason people don't appear to care is that they don't really know what's going on. When iOS began requiring apps to ask for explicit permission to track people outside of their own apps, users' response was overwhelmingly negative: people who were asked said no. Similarly, it likely never occurs to the typical consumer that their own ISP, who provides them with Internet bandwidth and who knows their real-world identity because they're being paid every month, is able to capture and aggregate our unencrypted DNS queries and the IP connections our account makes to remote sites. This data represents a profit center, so it is routinely collected and sold. It's allowed to happen only because it goes undetected and unseen.
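To make concrete why an ISP can so easily harvest this, here is a sketch of what a classic, unencrypted DNS query actually looks like on the wire: the site name travels as plain ASCII over UDP port 53, readable by any on-path observer. This hand-built packet follows the standard RFC 1035 wire format; it's an illustration, not a full resolver.

```python
# Build a minimal DNS "A record" query for example.com by hand, to show
# that the hostname being looked up is carried in cleartext (RFC 1035).
import struct

def build_dns_query(hostname: str) -> bytes:
    # Header: ID, flags (recursion desired), 1 question, 0 answer/auth/extra
    header = struct.pack(">HHHHHH", 0x1234, 0x0100, 1, 0, 0, 0)
    # QNAME: each label is length-prefixed, terminated by a zero byte
    qname = b"".join(bytes([len(label)]) + label.encode()
                     for label in hostname.split(".")) + b"\x00"
    question = qname + struct.pack(">HH", 1, 1)  # QTYPE=A, QCLASS=IN
    return header + question

packet = build_dns_query("example.com")
# The site name is right there for anyone between you and the resolver:
print(b"example" in packet)  # True
```

This is exactly the visibility that encrypted alternatives like DNS over HTTPS and DNS over TLS were designed to take away from intermediaries.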

Nearly three years ago, in October of 2021, the US Federal Trade Commission, the FTC, published a news release with the headline "FTC Staff Report Finds Many Internet Service Providers Collect Troves of Personal Data; Users Have Few Options to Restrict Use." The subhead reads: report finds many ISPs use web browsing data and group consumers using sensitive characteristics such as race and sexual orientation. Why would they be doing this if it wasn't of some commercial use to them? It seems obvious that if any consumer ever gave their permission, it was not truthfully made clear to them what was going to transpire. I certainly never gave my cable provider permission to do that, but I have no doubt that the consumer agreement I originally signed but, you know, never read, or any of the updated amendments which may have been sent to me, contained the sort of language we've talked about before, where information about our online use may be shared with "business partners" and so forth.

Anyway, sometimes free enterprise can be a little too free. This is why the protection of consumers from this sort of pervasive privacy violation for profit is the role of government. Unfortunately, government is just people too, and people can be purchased. In the US at least, lobbyists for commercial interests hold a great deal of sway over how the government spends its time and our tax dollars. The US has a couple of senators who see and understand the problem, and they're investing a great deal of their time in doing what they can, but most legislators appear to feel they have bigger fish to fry. I think what all this means is that it's up to those of us who care to do what we can. I'm disappointed that Google appears to have lost this round, but I understand that it probably had no choice. I'm sure we'll be talking about this once again once we see what Google says they'll be coming up with, you know, as a compromise of some sort.

Okay, Lee Mosner shared a Mastodon posting by Brian Krebs about, speak of the devil, the collection and reselling of automotive data being done by automakers without their drivers' clear knowledge or permission. And this ties in with the couple of senators I just mentioned who are doing what they can. Brian Krebs posted: Senator Ron Wyden has released details about an investigation into automakers' disclosure of driving data, such as sudden braking and acceleration, to data brokers for subsequent resale to insurance companies. General Motors also confirmed to Wyden's office that it shared consumers' location data with two other companies, which GM refused to identify.

The senator's letter to the FTC included new details about GM, Honda and Hyundai's sharing of drivers' data with data brokers, including details about the payments the data broker Verisk made to automakers. Based on information Wyden obtained from automakers, the senator revealed: Hyundai shared data from 1.7 million cars with Verisk, which paid Hyundai a little over a million dollars, $1.043 million. Honda shared data from 97,000 cars with Verisk. And automakers used deceptive design tactics, known as dark patterns, to manipulate consumers into signing up for programs in which driver data was shared with data brokers for subsequent resale to insurance companies. So yes, technically we're giving permission, but not intending to, or not understanding what it is that will be done as a result. And, of course, I have no answer to this, other than for us to be aware of what's going on and take whatever measures make sense. I presume that it's no longer possible to purchase any modern vehicle that isn't connected to the Internet and dutifully feeding back everything that goes on within its perimeter to some hidden agency in the cloud.

So for the time being, you know, that's part of what it means to be an owner and operator.

And finally, Alex Niehaus, one of the podcast's earliest supporters, or maybe he was the earliest supporter; he was with Astaro, and they were advertising the Astaro Security Gateway in the early days. He sent a note asking an interesting question. He said: Hi Steve, I tried the DNS Benchmark today running on Windows 11 ARM64 in a VM hosted on a MacBook Pro using Apple silicon; see the image below. It appeared to run flawlessly and at full speed. And, in parens (you'll like this, Leo), Windows 11 ARM, this is Alex saying, runs faster, in my humble opinion, on Apple silicon than on any real PC I've tried. It's astonishing. So, he says, I'm wondering if you have an opinion about the accuracy of the app's results in this scenario: emulation of x86 instructions in a VM. I think I remember you saying the DNS Benchmark is highly timing-dependent for accuracy. I wonder if sheer brute computing capability, as provided by Apple's silicon processor, can overcome the costs of double emulation. Really enjoying Security Now these days. Thanks, Alex. And Alex attached a screenshot of GRC's DNS Benchmark running on a Windows 11 desktop hosted on a MacBook Pro.

I included this because, with the rise of ARM and Windows for ARM finally becoming real after so many false starts. I've been thinking about the fact that I write x86 code and that all of my utilities are written in Intel x86 assembly language. I've always felt that this meant that my code had unnecessary performance overkill, since it uses so very little of the processor's available resources to run. But this becomes something of an advantage when an ARM-based machine is used to emulate Intel's x86 instruction set. My utilities are objectively small because there are so many fewer instructions in them, and that means significantly less emulation overhead. So I think that my approach is going to stand the test of time, because there's no way any version of Windows, whether hosted on Intel or ARM, will not be able to run x86 instructions one way or another, and the fewer of those there are, the faster the code will go.

And to answer Alex's question: yes to the DNS Benchmark's proper operation under ARM. The only thing the benchmark requires for accuracy is reliable timing snapshots from the system's processor clock counter, and that would be one of the first things the designers of the x86 emulation and its VM would have provided. An accurate "what is this instant right now?" is all we need, and I'm sure that's available within a VM emulating an Intel x86 environment. And coming up next: something of potentially significant interest to our listeners. There is a way to find out, as we will see, whether your systems are vulnerable, so we've got some takeaway user action, too.
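The timing requirement Steve describes, a reliable "what is this instant right now" snapshot, is exactly what a monotonic high-resolution clock provides, and it's one of the things hypervisors virtualize for guests. A sketch of that measurement pattern, using Python's monotonic nanosecond clock as a stand-in for the processor's counter; the function name and the workload being timed are placeholders of mine, not anything from the DNS Benchmark itself.

```python
# Sketch of benchmark-style round-trip timing: snapshot a monotonic,
# high-resolution counter before and after the operation being measured.
import time

def time_operation_ns(operation) -> int:
    start = time.perf_counter_ns()         # snapshot "now" before
    operation()
    return time.perf_counter_ns() - start  # elapsed nanoseconds

elapsed = time_operation_ns(lambda: sum(range(100_000)))
print(elapsed > 0)  # True: the clock is monotonic, so elapsed is positive
```

Because the counter is monotonic, the measurement stays valid inside a VM so long as the hypervisor presents a consistent virtualized clock, which is what makes Alex's emulated results trustworthy.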

1:49:36 - Leo Laporte
Good news. We will talk about that in just a moment. First, a word from our sponsor, of course. I'm talking about Bitwarden, the password manager offering a cost-effective solution that can dramatically improve your chances of staying safe online. Listen up, business leaders. Bitwarden has just announced its new collections management settings.

I love Bitwarden because it's open source and they're always adding features. This is an interesting one. It lets an owner choose how much or how little access admins have to everything in the vault. So you've got your owner. In the case of Twit, that's me, our admins, russell and Lisa, and I can control what they can see. It can be set so the users can create and delete their own collections where admins can't see inside. So that's great. You can have your personal vault, private to you, on the same place that the admin stuff is. I love that. This is useful for a policy of least privilege, for example. Another possible setup can require full admin control for everything you know. You may say, oh no, no, we don't want any secrets in this company. It's up to you. With these new settings and the new can manage permission you could choose how access control and sharing works in your Bitwarden organization.

This is just one more really nice feature of Bitwarden. It's more than a password manager, you see. That's the point. Passkey support is remarkable. It really works, it's really solid, and it's now on all mobile devices as well. Bitwarden empowers enterprises and developers. They have that Secrets technology that lets you keep your APIs and your secrets, your S3 secrets, safe from prying eyes. Even if you're committing a repo to GitHub, it doesn't get pushed up there; it's still encrypted. It's a great way to safely store and share sensitive data.

With transparent, open-source approach to password management, bitwarden makes it easy for users to extend robust security practices to all their online experiences, and it's just a great piece of software. Get started with Bitwarden's free trial of a Teams or Enterprise plan for your organization. Or and this is really important you can get started for free, forever, across all devices, unlimited passwords, passkeys, yubikeys, whatever as an individual user. That's the beauty part. In fact, you have family and friends who say I don't want to pay for a password manager. Tell them Bitwarden's free, forever. It's open source, free for individuals, unlimited passwords. It's incredible. I actually pay for it because I just want to support them. It's $10 a year big deal and, of course, your company will definitely want an enterprise plan Bitwardencom slash twit. If you've been looking for the right password manager, this is the one Bitwardencom slash twit. We thank them so much for supporting security now, steve.

1:52:39 - Steve Gibson
Okay so, platform key disclosure. Today's topic will reveal the details behind a widespread, and by that I mean industry-wide, shockingly industry-wide, security failure that was discovered in the supply chain of hundreds of today's PCs from some of our largest manufacturers. The upshot of this is that these machines are unable to boot securely, despite the fact that that's what they say they're doing. And while that'll be our primary focus, the larger point I hope to drive home is that this is additional evidence to substantiate my belief that Microsoft's Recall is an inherently doomed proposition, at least if we require absolute privacy for the accumulated device history that Recall proposes to aggregate. The reason for this is that, despite all of Microsoft's assertions about how tightly and deeply they'll be protecting the accumulated history of their users, doing so with sufficient security is simply not possible, due to the lack of security that pervades the entire PC ecosystem. It's like Leo asked earlier in this podcast: there's really no security anywhere, is there? Well, there are barriers, but I've always referred to security as porous, and this is why. Much lip service is given to how securable everything is, yet it keeps getting broken over and over and over. Okay, so get a load of what's happened now. The firmware security firm Binarly has announced their discovery of serious problems with the platform keys, abbreviated PK, being used across our industry to provide the root of trust for our systems' secure boot technology.

Here's what they explained.

They said: today we disclose PKfail, a firmware supply chain issue affecting hundreds (and I should say just shy of 850) of device models in the UEFI ecosystem.

The root cause of this issue lies with the Secure Boot master key, called the Platform Key in UEFI terminology. The Platform Key used in affected devices is completely untrusted because it's generated by independent BIOS vendors and widely shared among different hardware vendors. This key is used to manage the Secure Boot databases that determine what is trusted and what, instead, should not be executed, effectively maintaining the chain of trust from the firmware to the operating system. Given its importance, the creation and management of this master key should be done by the device vendors following best practices for cryptographic key management, for example by using hardware security modules. Specifically, it must never be disclosed or known publicly. Now, the hardware that supports the firmware is a hardware security module. The hardware is designed to keep secrets, but if what you store in there is not secret, then it doesn't matter that you keep it, because it's already known. So, they said:

However, the Binarly research team found that these keys are generated and embedded in a BIOS vendor's reference implementation as sample keys, under the expectation that any upstream entity in the supply chain (I guess I would call that a downstream entity, but anyway, any subsequent entity in the supply chain), such as OEMs or device vendors, would replace them. When this does not happen, devices are shipped with the original sample untrusted keys in place. Binarly researchers identified the private part of one of these Platform Keys in a recent data dump following a leak. This key is currently being used by hundreds of devices in the market, putting every one of them at immediate risk. A peculiarity of PKfail is that it represents yet another example of a cross-silicon issue, as it affects both x86 and ARM devices. They wrote: we've developed proofs of concept to demonstrate that attackers with access to a device vulnerable to PKfail can easily bypass Secure Boot by signing their malicious code, thus enabling them to deliver any sort of UEFI rootkit, like the recently discovered BlackLotus.

Modern computing relies on establishing and maintaining trust, starting with trusted foundations and extending through operating systems and applications in a chain-like manner. This allows end users to confidently rely on the integrity of the underlying hardware, firmware and software. In modern systems, trust is typically rooted in hardware-based implementations such as Intel Boot Guard or AMD's Platform Security Processor. The root of trust is then propagated to the operating system via Secure Boot, which ensures that only digitally signed and verified bootloaders and OS kernels are executed by the boot manager. Secure Boot technology uses public key cryptography and relies on four keys and databases for authentication. The four are: one, the Platform Key, PK, the root-of-trust key embedded in the system firmware, which establishes trust between the platform owner and the platform firmware. Two, the Key Exchange Key, KEK. This key establishes trust between the operating system and the platform firmware.

Third is the Signature Database, abbreviated DB, a database containing trusted signatures and certificates for third-party UEFI components and bootloaders, which are thus granted execution. And fourth, the DBX, the Forbidden Signature Database, a database containing signatures and certificates used to sign known malicious software, which are thus denied execution. In other words, these are specific signatures and certificates that would otherwise be valid, because they somehow got themselves signed by keys that are valid, but they are specifically known to be not valid. And I should mention that it was Security Now podcast 500, exactly half of the 1,000 mark, where we covered all of this. We're at 985 now, approaching the famous 999 boundary and going on to 1,000.

Episode 500 was the one where the entire podcast talked about this trusted platform technology and UEFI with platform keys and all that, if anyone wants to go back and get more information. Anyway, they said: each database is stored in its corresponding non-volatile RAM variable: PK, KEK, DB and DBX. These variables are authenticated, meaning that when Secure Boot is enabled, updates are only allowed if the update data is signed by a higher-level key, the highest level being PK, the Platform Key. The Platform Key can be used to add or remove keys from the Key Exchange Key database, while a Key Exchange Key can be used to update the signature database and the forbidden signature database. Being at the root of the trust hierarchy, that master Platform Key, PK, plays a critical role in the security of Secure Boot. Access to the private part of the Platform Key allows an attacker to easily bypass Secure Boot: the attacker can update the Key Exchange Key database with a malicious KEK, which can subsequently be used to tamper with the signature database and the forbidden signature database. Since these databases are used during the verification process, an attacker exploiting PKfail can cause untrusted code to be run during the boot process, even when Secure Boot is enabled.
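To make the hierarchy just described concrete, here is a toy model of those signed-update rules: a KEK update must be signed by the PK, and a DB entry must be signed by a KEK. HMAC stands in for real RSA signatures purely for illustration, and all the names (sign, add_kek, add_db_entry) are mine, not UEFI APIs; this is a sketch of the trust logic, not real firmware code.

```python
# Toy model of the Secure Boot update hierarchy: PK authorizes KEK
# updates; any KEK authorizes DB (allowed-signature) updates. HMAC is a
# stand-in for real asymmetric signatures, for illustration only.
import hmac, hashlib

def sign(key: bytes, data: bytes) -> bytes:
    return hmac.new(key, data, hashlib.sha256).digest()

class SecureBootState:
    def __init__(self, platform_key: bytes):
        self.pk = platform_key
        self.keks: list[bytes] = []
        self.db: list[bytes] = []   # signatures allowed to boot

    def add_kek(self, new_kek: bytes, signature: bytes) -> bool:
        # A KEK update is accepted only if signed by the Platform Key
        if hmac.compare_digest(signature, sign(self.pk, new_kek)):
            self.keks.append(new_kek)
            return True
        return False

    def add_db_entry(self, entry: bytes, signature: bytes) -> bool:
        # A DB update is accepted if signed by any enrolled KEK
        if any(hmac.compare_digest(signature, sign(k, entry)) for k in self.keks):
            self.db.append(entry)
            return True
        return False

# PKfail in miniature: the "secret" PK is a leaked test key, so an
# attacker can enroll a rogue KEK, then bless their own bootkit.
leaked_pk = b"DO NOT TRUST - leaked test key"
state = SecureBootState(platform_key=leaked_pk)
rogue_kek = b"attacker kek"
state.add_kek(rogue_kek, sign(leaked_pk, rogue_kek))
bootkit = b"malicious bootloader hash"
state.add_db_entry(bootkit, sign(rogue_kek, bootkit))
print(bootkit in state.db)  # True: untrusted code is now "trusted"
```

The point of the toy is that every check passes legitimately; once the PK's private half is public, the whole chain of trust verifies correctly while trusting the wrong party.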

The Binarly research team discovered that hundreds of products use a sample test Platform Key that was generated by American Megatrends International, AMI. This key was likely included in their reference implementation with the expectation that it would be replaced with another, safely generated key. That never happened. Since these keys are shared with commercial partners and vendors, they must be treated as completely untrusted. "Several facts give us confidence in this assessment," they wrote. Okay, so here they are, three of them. One, by scanning an internal dataset of firmware images, we confirmed that devices from unrelated vendors contain the same Platform Key, meaning that these keys must have been generated at the root of the firmware supply chain. Number two, get this: these test keys have strong indications of being untrusted. The certificate issuer contains the clear strings DO NOT TRUST and DO NOT SHIP, in all capital letters. Like, the common name, the CN, in the certificate is DO NOT TRUST, and the issuer is DO NOT SHIP. It couldn't be made any more clear. Yet they are trusted, and they did ship. Number three, they said, more importantly, Binarly Research discovered the private component of one Platform Key in a data leak, where an alleged OEM employee published the source code containing the private Platform Key to a public GitHub repository, "protected", they have in quotes, by a weak four-character password, and thus easily cracked with any password-cracking tool. Thus the untrustworthiness of this key is clear.

Shortly after discovering PKfail, it became apparent that this was not a novel vulnerability. In fact, it's quite the opposite. A quick search on the Internet returned numerous posts from users finding keys marked as DO NOT TRUST in their systems, worried about the security implications of that. But even more concerning, we discovered that the same vulnerability was known as far back as 2016, and it was even assigned CVE-2016-5274. I'm sorry, backwards: 5247. Why are so many devices still vulnerable to this issue almost 10 years after its public disclosure?

The harsh truth is that the complex nature of the firmware supply chain, where multiple companies contribute to the production of a single firmware image and the security of each component relies on others' security measures, demands inspection capabilities that are far from the current industry standards, or simply unavailable from a technological point of view. In 2019, the Linux Vendor Firmware Service project, LVFS, introduced a check based on YARA rules to detect non-production keys. This rule matches on the strings DO NOT TRUST or DO NOT SHIP, with the intent of identifying and reporting firmware vulnerable to PKfail. This rule works well when the Platform Key is stored in an uncompressed form, but fails when the key is compressed and stored in a raw section or within the data section of UEFI modules, as is often the case. To address this and other software supply chain security vulnerabilities, the Binarly Transparency Platform analyzes firmware images and autonomously unpacks all nested components, creating a detailed blueprint of the input, allowing for the detection of PKfail and other known and unknown security vulnerabilities.
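The LVFS check described there is essentially a plain substring match, which is why compression defeats it. Here's a rough Python sketch of both the check and its blind spot. The marker byte strings are the real telltales; the "firmware" blob is made up, and zlib stands in for the LZMA/Tiano compression real UEFI modules actually use, purely to show the effect.

```python
import zlib

# The telltale strings AMI embedded in its test Platform Key certificate.
MARKERS = (b"DO NOT TRUST", b"DO NOT SHIP")

def naive_marker_scan(blob: bytes) -> bool:
    """Plain substring match, in the spirit of the LVFS YARA rule."""
    return any(marker in blob for marker in MARKERS)

# A made-up stand-in for an uncompressed firmware region holding the cert:
firmware = b"\x00" * 64 + b"CN=DO NOT TRUST - AMI Test PK" + b"\x00" * 64

assert naive_marker_scan(firmware)        # uncompressed: detected
# Once the same region is stored compressed inside a module section,
# the literal bytes no longer appear, and the naive scan misses it:
assert not naive_marker_scan(zlib.compress(firmware))
```

That's the gap Binarly's platform closes by recursively unpacking every nested component before scanning.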

To understand the actual scope of PKfail and its historical patterns, we scanned an internal dataset of UEFI images using our Binarly Transparency Platform. This dataset is representative of the UEFI ecosystem, as it contains tens of thousands of firmware images released in the last decade by every major device vendor, including Lenovo, Dell, HPE, HP, Supermicro, Intel, MSI and Gigabyte. The macro results of this scan are quite alarming. More than 10% of firmware images in our dataset use an untrusted Platform Key, meaning a key that was specifically branded DO NOT TRUST, DO NOT SHIP, and has been cracked and broken and is known, and are thus vulnerable to PKfail. More than 1 in 10. When reducing the dataset to only more recent firmware, released in only the past four years, the percentage drops to 8%, though remaining at concerning levels. The first firmware vulnerable to PKfail was released back in May of 2012, while the latest was released last month, in June of 2024. Overall, they write, this makes this supply chain issue one of the longest lasting of its kind, spanning more than 12 years. The list of affected devices, which at this moment contains nearly 850 devices, can be found in our Binarly 2024-005 advisory.

A closer look at the scan results revealed that our platform extracted and identified 22 unique untrusted keys. The table below reports the five most frequently used keys, along with a breakdown of affected products and vendors. Thank you, Leo, for putting that on the screen. So what we see in this table: we have the certificate serial number, five different certificates. The certificate's subject, that is, the CN, says DO NOT TRUST, AMI Test PK, Platform Key. All five of them are clearly labeled DO NOT TRUST, AMI test Platform Key. The issuer is DO NOT TRUST, AMI Test PK.

2:10:38 - Leo Laporte
Who issued these keys? Some guy named do not trust. That's right.

2:10:42 - Steve Gibson
So where are they in use? Acer, Dell, Fujitsu, Gigabyte, Intel, Intel themselves, Lenovo and Supermicro. First seen in April of 2018. Most recently seen last month, in June of 2024, in firmware released on a new machine from one of these guys. And Leo, what this means is Secure Boot is subverted on that platform. Malware can install itself even with Secure Boot enabled.

2:11:20 - Leo Laporte
Do you have to have access to the machine?

2:11:28 - Steve Gibson
Physical access definitely allows it to happen, but we've seen many instances where, once malware got into the system, it was able to modify the UEFI boot just using its software access to the machine.

2:11:40 - Leo Laporte
And then the machine can't detect it because Exactly.

2:11:44 - Steve Gibson
And then you've got a bootkit installed in your firmware permanently, where even reinstalling the OS, reformatting the drive, taking the drive out, blah, blah, blah, nothing gets rid of it. Okay, so they explain and finish. From the certificate subject and issuer strings, which all say DO NOT TRUST, we conclude that these keys were generated by AMI, because it says AMI Test PK. This conclusion is further supported by how these test keys ended up in devices sold by unrelated vendors, as shown in the last column of the table. Okay, so who? Acer, Dell, Fujitsu, Gigabyte, Intel, Lenovo, Supermicro. A repeat on the second certificate; looks like the same people in the third certificate, same people in the fourth certificate, and same people in the fifth. So Acer, Dell, Fujitsu, Gigabyte, HP, Lenovo. Oh, Samsung appeared in the fifth. Most recently, er, earlier: that one first appeared in May of 2012 and was last seen in March of 2021.

So for that fifth one, they said, by looking at the actual product names and models, we found another concerning issue. The very same key is used to protect a highly heterogeneous set of products. For example, the key with serial starting 55fbef was found both in gaming laptops and in server motherboards. So probably an Acer gaming machine and a Supermicro server, since Acer and Supermicro were both found to be using that first key. Moreover, as we can see in the last-seen and first-seen columns, these keys survive in the ecosystem for years, up to almost 10 years in the case of the key with a serial number beginning 1BED93. When looking at the historical trends of PKfail, several worrisome observations can be made.

First, even after CVE-2016-5247 was made public, the release rate of images vulnerable to PKfail maintained its previous increasing trend, suggesting that the firmware industry was not responsive to that 2016 vulnerability disclosure. This behavior is consistent with the findings from our retrospective analysis of LogoFAIL patches, where we found that the industry needed several months after the disclosure before security patches were ready and propagated to end users. The reaction to CVE-2016-5247 can finally be seen in the period from 2017 to 2020, where the number of vulnerable images steadily decreased. This trend, however, changed again after 2020 and has persisted until the current day, with a constant increase of vulnerable devices. Which means people forgot about this problem and then started replicating the bad keys once again across their own devices. Another observation, related to the private Platform Key leaked on GitHub, is that this leak did not result in any visible impact on the number of vulnerable devices. This is, once again, unsurprising when put into the historical context of the firmware industry just not caring. To be clear, this leak went almost unnoticed. In fact, they write, we were the first to report that it contained the private part of a Platform Key. Second, regarding the slow reaction to this security incident: by mining the data provided by the GitHub Archive and available on the Wayback Machine, we confirmed that the repository remained publicly available for at least four months before getting noticed and removed by its original author, while it took five months to delete all the forks of the offending repository. Quite concerningly, the leaked key is still in use today in many devices and has been used for quite some time. The first occurrence of this key in our dataset dates back to April of 2018.
Okay, so this means that while much has been made of Secure Boot, which is utterly reliant upon the secrecy of the private key at its root, around one out of every ten machines in use today, and even shipped as recently as last month, is only pretending to have Secure Boot, because its private key is one of the handful that AMI originally provided and, they believed, clearly enough marked as DO NOT TRUST and DO NOT SHIP. Yet shipped they were, and trusted they are still.

One very cool bit of tech that Binarly shared is the way any Linux or Windows user can check their own PC for the presence of these insecure keys. Vulnerable machines will have the strings DO NOT TRUST or DO NOT SHIP in the subject and issuer fields of the Platform Key certificate. On Linux, PKfail can be easily detected by displaying the content of the PK variable, and they show the command efi-readvar -v PK. It dumps the PK variable (length 862 in their example), and basically you're looking at the PK variable's certificate, where you can clearly see the subject is CN = DO NOT TRUST, AMI Test PK, and the issuer is CN = DO NOT TRUST, AMI Test PK.

Then they write, on Windows, running the following command in a privileged PowerShell console will return True on affected devices. And this command is a little long for me to read out on the podcast; it's at the bottom of the show notes, page 23. Basically, it's the PowerShell command Get-SecureBootUEFI PK, taking its .bytes, which returns a bunch of ASCII that you then run a match on, matching on DO NOT TRUST or DO NOT SHIP.
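For the curious, both checks boil down to searching the PK variable's bytes for those marker strings. Here's a hedged Python equivalent for Linux. The efivarfs path below uses the standard EFI global-variable GUID, but verify the path on your own system before relying on it; the marker-matching helper is kept separate so it can be exercised without real firmware.

```python
import os

# Standard efivarfs location of the Platform Key on Linux; the GUID is the
# EFI global-variable GUID. Check this path exists on your own machine.
PK_EFIVAR = "/sys/firmware/efi/efivars/PK-8be4df61-93ca-11d2-aa0d-00e098032b8c"

def pk_is_untrusted(pk_bytes: bytes) -> bool:
    """True if the PK blob carries AMI's test-key markers (PKfail)."""
    return b"DO NOT TRUST" in pk_bytes or b"DO NOT SHIP" in pk_bytes

def check_this_machine() -> bool:
    with open(PK_EFIVAR, "rb") as f:
        data = f.read()
    # The first 4 bytes of an efivarfs read are the variable's attributes;
    # the certificate (and any markers) live in the remainder.
    return pk_is_untrusted(data[4:])

if os.path.exists(PK_EFIVAR):
    print("PKfail suspected!" if check_this_machine() else "PK looks OK")
```

On Windows the same idea is expressed with Get-SecureBootUEFI PK in an elevated PowerShell, as Steve describes.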

2:19:27 - Leo Laporte
Now, will this work on all machines? Because didn't you say some of them were obfuscated?

2:19:45 - Steve Gibson
Well, it's looking at the certificate of the platform key, so anybody who wants to know whether they're one of the one in ten whose machine has this is able to run it. I'm sure I'd love to hear some anonymous feedback from our listeners who, either under Linux or Windows, run these commands and do or don't find that they've got this insecure, well-known key. The Binarly research team recommends that affected users update their firmware when device vendors release an updated version with fixes for this PKfail event. Expert users, they said, can rekey the Platform Key, PK, with a trusted key. The other Secure Boot databases, the KEK, the DB and the DBX, must also be assumed to be compromised. Thus, expert users should check and replace them with new databases of trusted signatures and certificates, and I'm sure that will be part of the pack that the vendors release. So, just so everyone is clear here, there is no obvious remote vulnerability. Just because you have a bad Secure Boot doesn't in any way make your system actively vulnerable. The primary danger is from local boot tampering with a machine that was believed to be secured against exactly such manipulation. And, as I said when Leo asked, we have seen plenty of instances where remotely injected instances of malware were then able to establish persistent bootkits by messing with the user's motherboard firmware from that machine, and this compromise means that could work again. So this is not the end of the world by any means. And this time, unlike previous times, Binarly's research, which is breathtaking in its scope, has generated far more interest than the issue did back in 2018.
And since there really does appear to be a great deal more interest in security in general today than eight years ago, I would expect that the manufacturers of all still-supported systems will arrange to respond to the egg that they now all have on their faces, because this is all their fault for never generating their own Platform Key and just using the one that came with the firmware, which said DO NOT TRUST and DO NOT SHIP. There is no excuse for being so negligent as to ship systems with those keys in place. So, and again, I'm not going to tell anybody that they should not use Recall once it becomes available. That's not my place.

I understand 100% that the reward from its use may well justify the very slim chance that the data it gathers might be put to some use that its owner would find objectionable. After all, that's certainly happening no matter where we go today on the web, or where, or even how, we drive our cars in the real world. Why should this be any different? But, that said, what does appear to be imperative, Microsoft's repeated and well-meaning assertions about their ability to secure this information notwithstanding, is for every individual to be given the well-informed choice about whether or not they want this Recall storage for themselves, and for that choice to be honored without exception or excuse. Here's another example today: Microsoft says, oh yeah, we've got Secure Boot, turn it on, and Windows Hello logon. Well, that was violated two weeks ago, but I didn't have time to talk about that because of CrowdStrike, which crashed the world.

2:24:04 - Leo Laporte
So yeah, I have to say, you know, in order to install Linux, I'm in the habit of turning off Secure Boot anyway. Nowadays you can keep it on in many Linux distros and so forth, but I mean...

2:24:19 - Steve Gibson
It's not a huge loss, I guess, and as somebody who would love people to be able to boot DOS on their systems, secure boot is just a thorn in the way.

2:24:30 - Leo Laporte
Yep, I understand. It was created at a time when we were really worried about BIOS-resident malware, which you can't ever get rid of. So I understand that, rootkits and that kind of thing. But I don't know.

2:24:45 - Steve Gibson
Yeah, Leo, just use CrowdStrike. What could go wrong?

2:24:51 - Leo Laporte
Chromebooks do the same thing. They have a secure boot. Macs now do that as well; they verify their boot code. It does make sense to do that, I think. It really does. Yeah, and most Linux distros now support UEFI and Secure Boot.

2:25:05 - Steve Gibson
Yeah, I would say that the vulnerability is the targeted, state-level actor, you know, the kind of vulnerability that Stuxnet wanted to take advantage of, where you've got somebody who is able to get brief physical access. You know, the kind of Mission Impossible thing where they switch out someone's laptop, take it in the back room and install the rootkit, and then switch it back before they know.

2:25:33 - Leo Laporte
Now they've got a rootkit that they didn't have before, and they didn't have to take it in the back room anymore. Yeah. So maybe you're not worried about...

2:25:39 - Steve Gibson
You know, Ethan Hawke coming down on a guy wire, hanging from the side of a cliff, making the switch. Then I think you're probably all right.

2:25:53 - Leo Laporte
This has been another Mission Impossible: to figure out what the hell is going on and to keep you safe in the face of extraordinary threats.

2:26:06 - Steve Gibson
That's this guy. An ever-changing world.

2:26:08 - Leo Laporte
An ever-changing world. Mr Steve Gibson. He's at GRC.com. There are many things, many things at GRC.com. I'll mention a few. SpinRite, the world's best mass storage performance, maintenance and recovery utility. Performance is important. We talked on Ask the Tech Guy about a guy who had an SSD and was worried about what he needed to do, and he had this very elaborate plan of copying everything off the SSD, then formatting it and copying everything back on, and I thought, I don't think that's really what you should be doing. But maybe check out SpinRite. It'll help your SSD if you're having performance problems. That's all there. That's Steve's bread and butter. Version 6.1 is the current version and is available now. While you're there, you can get a copy of this show. Steve has two versions. He's got the traditional, canonical, everybody's-got-it 64-kilobit audio version, but he also has the very unusual 16-kilobit audio version for the bandwidth-impaired. And, by the way, we started doing that, what, 15 years ago?

I mean, a long time. Are people still that bandwidth-impaired? Is this still a problem?

2:27:24 - Steve Gibson
It's popular. People download the 16-kilobit version, and you just heard from somebody in a chat last week, while we were talking about this, who said, that's what I download, I like the 16-kilobit ones. Why not?

2:27:37 - Leo Laporte
They're small. He also has the show notes, which are excellent and a must-read, and transcripts, which are written by an actual human, not AI, but real, genuine transcripts of every show. We met her once. Yes, Elaine Ferris, the great transcriber. She's not a North Korean hacker; she's a great transcriber in the sky.

She does a good job, and you can get it free at GRC.com, along with a lot of other freebies that Steve does give away. He's very generous. If you want to email him or send him a note, or just say hi, or subscribe to his newsletters, it's very easy. Just go to GRC.com/email and fill out the form. Anything else to say about that? I think that's just enough reasons, right? We're good. You've got all sorts of great tools there, like ValiDrive, to make sure you've got the right amount of storage, the storage you bought, on your thumb drive; ShieldsUP, to make sure you've got your router configured properly. I can go on and on. We have 64-kilobit audio at our website too, but we also have video, which Steve refuses to carry because he doesn't like it. But we do, and we have it. That's at twit.tv/sn. While you're there, you can also see links to the YouTube channel dedicated to Security Now.

You can also subscribe in your favorite podcast client and make sure you get it every week of a Tuesday afternoon. That's when we do the show: Tuesday, right after MacBreak Weekly, roughly 1:30 pm Pacific, 4:30 Eastern, 20:30 UTC. I mention that because you can watch us live. If you want the very freshest, unedited, unmediated version of this show, you can get it on YouTube, youtube.com/twit/live, twitch.tv/twit, LinkedIn, X.com, Kick, and, of course, club members get their own special channel; they can get it on Discord. In fact, club members have another advantage: they can get ad-free versions of every show we do, for just seven bucks a month. You get access to the Discord and a great community of listeners, and it really helps us. It's really the only thing that's keeping us going at this point. So if you're not yet a member, I invite you to join at twit.tv/clubtwit.

Steve, I will be back here next Tuesday for our last Security Now in the studio.

2:29:58 - Steve Gibson
That's right, this is the penultimate episode.

2:30:02 - Leo Laporte
Steve's favorite word. Now that I know what it means.

2:30:06 - Steve Gibson
We'll see you next week, Steve. Bye.
