Transcripts

Security Now 939, Transcript

Please be advised this transcript is AI-generated and may not be word for word. Time codes refer to the approximate times in the ad-supported version of the show.

Jason Howell (00:00:00):
Coming up next on Security Now it's me, Jason Howell, filling in for Leo Laporte. But of course you check out Security Now because you want to hear Steve Gibson share everything there is to know this week about, well, security news. And actually there's news about the UK's online child safety legislation that might be good news for encryption. Steve tells you all about that. Why Steve's happier than ever before to be driving a 19 year old vehicle makes [00:00:30] me rethink my purchase of a new vehicle just last year. Tons of feedback, some really great feedback from you, the listeners and viewers of the show. And then finally, Steve ends things off by sharing why it's pretty clear that the data stolen during that devastating LastPass hack is actually being decrypted and cashed in on. You won't want to miss it. Security Now is next.

Steve Gibson (00:00:57):
Podcasts you love from people you [00:01:00] trust. This is TWiT.

Jason Howell (00:01:07):
This is Security Now with Steve Gibson, Episode 939, recorded Tuesday, September 12th, 2023: Last Mess.

(00:01:17):
This episode of Security Now is brought to you by Duo. Protect against breaches with a leading access management suite, providing strong multi-layered defenses to only allow legitimate users in. For any organization [00:01:30] concerned about being breached and in need of a solution fast, Duo quickly enables strong security and improves user productivity. Visit cs.co/twit today for a free trial. Security Now is also brought to you by our friends at IT Pro TV, now called ACI Learning. ACI's new Cyber Skills training is for everyone, not just for the pros. Visit go.acilearning.com/twit. TWiT listeners [00:02:00] will receive at least 20% off or as much as 65% off an IT Pro enterprise solution plan. After completing their form, you'll receive a proper quote based on the size of your team. And by Panoptica: reduce the complexities of protecting your workloads and applications in a multi-cloud environment. Panoptica provides comprehensive cloud workload protection integrated with API security to protect the entire application lifecycle.

(00:02:30):
[00:02:30] Learn more about Panoptica at panoptica.app.

(00:02:34):
It's time for Security Now. I am Jason Howell, filling in for Leo Laporte, who is I think on the other side of the country right now, but you know who's going to be here with me? Of course, none other than Steve Gibson. It's good to see you again, Steve.

Steve Gibson (00:02:50):
Jason, it's great to be with you. Leo is with his mom. That's right. Who everyone thinks is the cutest thing they have ever seen. Yeah, she's been on the network a number of times [00:03:00] and it's always an enjoyable experience when she's on. So yeah, he's assisting her, so it's good to see you. So we are here for Security. Security Now. Security, not now Security. I was going to say Security Nine because it's episode 939 on 09-12-23. So yeah, this week we're going to share some exciting and hopeful news about the UK's online [00:03:30] child safety legislation, and we're crossing our fingers.

(00:03:35):
Once again, the tech press kind of launched a little prematurely, but then got corrected, and there's a fun story there, and then we're going to explore what that suggests for the future. We also learned how it was that Microsoft's super secret authentication key escaped into the hands of Chinese attackers, who were then able to use it to breach secure enterprise [00:04:00] email to some dramatic effect. Also, what did Microsoft learn from that, if anything? Also, we're going to look at why I am more glad than ever that the car I'm driving is 19 years old. Still goes great. And this is after the Mozilla Foundation shared what they learned about all of today's automobiles from a privacy standpoint. Well, [00:04:30] there was only one thing they've ever encountered that was worse, and I don't remember what it is, but it's in the show notes. So we'll get there, and then after sharing and exploring some feedback from our listeners, believe it or not, we're going to examine the horrifying evidence that supports the belief that the data stolen from the LastPass breach is being successfully decrypted [00:05:00] and being used against LastPass users.

(00:05:04):
Oh dear. As a consequence, this podcast is titled Last Mess.

Jason Howell (00:05:10):
Probably not the last mess from the LastPass mess, but yeah. Holy moly. Yeah, it just keeps on giving, unfortunately. Oh, all right. Well, we will get there. That's a whole lot of interesting stuff to talk about today on Security Now. But before [00:05:30] we do, we do want to take a moment and thank the sponsor of this episode of Security Now. It's brought to you by Duo. Duo protects against breaches with a leading access management suite. It's what it's all about. It's strong multi-layered defenses and innovative capabilities that only allow legitimate users in and also keep the bad actors out. So for any organization that's concerned about being breached, that needs protection fast, Duo quickly enables strong security [00:06:00] while also improving user productivity. Duo actually prevents unauthorized access with multilayered defenses and modern capabilities that thwart sophisticated malicious access attempts.

(00:06:13):
You can increase authentication requirements in real time, and you actually do that when the risk is rising. Duo enables high productivity by only requiring authentication when needed. That enables swift, easy and secure access. [00:06:30] Duo provides an all-in-one solution for strong MFA, passwordless, single sign-on and trusted endpoint verification, and Duo helps you implement zero trust principles by verifying users and their devices. And you can check it out for yourself. Start your free trial and sign up today at cs.co/twit. That's cs.co/twit. Check it out for yourself. The free [00:07:00] trial is there. You can sign up today and you'll be happy you did it. We thank Cisco and we thank Duo for their support of Security Now. All right, before we get to all the news, we've got a picture that makes a whole lot of sense when you look at it.

(00:07:14):
I mean I guess it made sense to someone to do it, but yeah, this is, yeah,

Steve Gibson (00:07:19):
We may have shown this before, but it came up again and it's just such a perfect picture for the podcast. I gave this picture the title, [00:07:30] "The plumber's contract didn't say anything about moving any rocks." And so we have this, it's not clear which way the water is flowing through this pipe, but the pipe is trying to go. It comes from off screen on the left and it wants to go to a pipe which disappears into the ground over on the right. Unfortunately, there is a big honking rock [00:08:00] blocking the pipe, and the rock looks, it doesn't look like it's glued down or anything. I mean it looks like it's a big rock. You could presumably move the rock. For whatever reason, the plumber did not want to move this rock. Instead he did this.

(00:08:21):
Okay, those of us who are old enough to remember the Three Stooges will remember that famous plumbing episode where, [00:08:30] I don't know, Moe or Curly or somebody was in the bathtub and they were trying to plumb around a leak. And anyway, so this pipe basically circumnavigates the rock with pretty much as few pieces as possible, so the water is flowing and the rock stays where it originally was. So anyway, one of our fun pictures, I think. I mean, it really does look like that rock could be moved, but maybe, doesn't it? It's like, [00:09:00] I mean, maybe it's not movable. Maybe it's firmly embedded into the ground in a way that would require it to be destroyed, and yeah, I'm a plumber, I'm not a rock destroyer. And you can see behind it there's like a rock ledge or like a wall, but this rock does not appear to be part of that.

(00:09:18):
It looks like it's separately sitting on the ground, and I don't know. Apparently there was some reason why that rock wasn't going anywhere. [00:09:30] Maybe it... Exactly, an incident. That's exactly where I was. I was about to say, there's a story here. Yes, and we do not know the answer, but I want to know the answer. We will never know. Now, given the spread and breadth and ingenuity of our listeners, I wouldn't be surprised if one of them finds this at some point and goes and tries to move it. So if any of our listeners encounters this particular rock, we want to know, can it be moved? Or yeah, if you run into this rock, [00:10:00] if you find yourself out in the world and you cross paths with this rock, just do us a favor and see if you can nudge it. Does it even shake in place?

(00:10:08):
And if it shakes in place, then we've got our answer. Give it a kick. Maybe get out a crowbar if you have one handy. We want to know. Sure. So just a little follow up on my current side project. I was getting ready... Remember I talked about ValiDrive last week, which arose from one of the SpinRite 6.1 testers [00:10:30] having SpinRite rejecting one of his drives, one of his many thumb drives. And then we dug into it and we figured out that this was a fake drive. So I decided, and Leo was really moved by this revelation and he said, whoa, we need to know about these. I created this little freeware called ValiDrive, which I had expected I would finish last [00:11:00] week, but just as we were kind of getting ready to get there, one of our listeners showed that a drive which we believe is fake, and since then I've absolutely confirmed it is, was passing ValiDrive's test when it should not, I believe.

(00:11:25):
So what's happening is ValiDrive jumps around a drive [00:11:30] in a random sequence, checking 576 equally spaced out locations to verify that there is actual storage there. I think what's happening is that some caching that exists in the chain between my code and the actual USB stick somewhere is generating false positives. So we are seeing ValiDrive showing green [00:12:00] when it should be showing red. Anyway, as a consequence, I didn't finish, I didn't publish this thing yet. We will have it next week. In fact, I was cheating a little bit, Jason, while you were reading our first advertiser sponsor. I was wondering why an undocumented command I'm using wasn't being... anyway, suffice to say I'm at work on it literally as we speak. You are amazing, Steve. You're podcasting while you're validating. That's amazing. [00:12:30] So it'll be soon and it'll be working correctly.
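To make the technique concrete, here's a minimal sketch, in TypeScript for Node.js, of the general kind of spot-check Steve describes. This is not GRC's actual ValiDrive code; the device path, block size, and restore logic are illustrative assumptions, and the comments note the very caching pitfall he ran into.

```typescript
// Hypothetical sketch of spot-checking a drive for fake capacity.
import { openSync, readSync, writeSync, closeSync } from "fs";
import { randomBytes } from "crypto";

const DEVICE = "/dev/sdX";            // illustrative raw device path
const CLAIMED_BYTES = 256 * 2 ** 30;  // capacity the drive claims (256 GB)
const REGIONS = 576;                  // equally spaced test locations
const BLOCK = 4096;                   // bytes tested at each location

function spotCheck(): boolean {
  const fd = openSync(DEVICE, "r+");
  // Visit the regions in random order so a fake drive can't anticipate
  // which offsets will be verified.
  const order = [...Array(REGIONS).keys()];
  for (let i = order.length - 1; i > 0; i--) {
    const j = Math.floor(Math.random() * (i + 1));
    [order[i], order[j]] = [order[j], order[i]];
  }
  try {
    for (const region of order) {
      const offset = Math.floor((CLAIMED_BYTES / REGIONS) * region);
      const original = Buffer.alloc(BLOCK);
      readSync(fd, original, 0, BLOCK, offset);    // preserve user data
      const marker = randomBytes(BLOCK);           // unpredictable pattern
      writeSync(fd, marker, 0, BLOCK, offset);
      // Caveat from the episode: unless the write truly reaches the media,
      // OS or USB-bridge caching can hand the marker back from RAM and
      // produce exactly these false "green" results.
      const readBack = Buffer.alloc(BLOCK);
      readSync(fd, readBack, 0, BLOCK, offset);
      writeSync(fd, original, 0, BLOCK, offset);   // restore what was there
      if (!readBack.equals(marker)) return false;  // no real storage here
    }
    return true;
  } finally {
    closeSync(fd);
  }
}
```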

(00:12:34):
Okay, so who blinked? What can only be called wonderful and welcome news surfaced in the middle of last week from the UK. Now the short version is the UK appears to have blinked in the face of all secure messaging apps. And as we just recently covered, [00:13:00] Apple taking their stand, which was the topic of last week's podcast, saying firmly, uh-uh, we're not doing this. Everybody said they were going to pull their services from the UK rather than sacrifice the privacy of their users, rather than compromising in any way. The UK apparently said, oh well, we never said that we wanted you to do that. [00:13:30] Right? But of course nothing involving politicians and bureaucracies is ever clean and simple, and the details here are at least somewhat interesting. What first happened was that last Tuesday the Financial Times was the one who broke the story, and then the tech press jumped on it because this was big news. 9to5Mac's headline was "Future of iMessage safe [00:14:00] in the UK as government backs down on encryption." Wired carried the headline "Britain admits defeat in controversial fight to break encryption," and their subhead was "The UK government has admitted that the technology needed to securely scan encrypted messages sent on Signal and WhatsApp doesn't exist, weakening its controversial Online Safety Bill." Computerworld's story began with [00:14:30] "UK rolls back controversial encryption rules of Online Safety Bill." CyberScoop headlined their coverage "UK lawmakers back down on encryption-busting spy clause," and Infosecurity Magazine's

(00:14:44):
headline was "UK government backs down on anti-encryption stance." Alright, okay, so anyway, everybody gets the idea. This is what all the headlines were. Unfortunately, much as those were all attention commanding and welcome [00:15:00] headlines, none of that was true. Well, or at least they were all probably deliberate oversimplification and exaggeration clickbait, which predictably did not sit well with the UK. The politicians there didn't like those headlines. So then the following day, last Thursday, we see follow-up headlines such as "UK tries to claim [00:15:30] it hasn't backed down on encryption at all," and Reuters' headline was "UK has not backed down in tech encryption row, minister says." And so anyway, here's what Reuters explained, because their coverage is short and succinct. London, September 7th, Reuters: Britain will require social media companies [00:16:00] to take action to stop child abuse on their platforms and, if necessary, work to develop technology to scan encrypted messages.

(00:16:11):
As a last resort, technology minister Michelle Donelan said on Thursday. And we've talked about dear Michelle in the past; she's the one who's in charge of this. So Reuters said platforms including Meta's WhatsApp and [00:16:30] Signal have been fighting Britain's Online Safety Bill, which is currently being scrutinized by lawmakers, because they say it could threaten the end-to-end encryption that underpins their messaging services. Junior minister Stephen Parkinson appeared to concede ground to the tech companies' arguments on Wednesday, saying in Parliament's upper chamber that Ofcom, [00:17:00] that's their communications regulator, would only require them to scan content where technically feasible. Okay, now that's the first time anyone had heard that, of course. And then Reuters reminds us that tech companies have said scanning messages and end-to-end encryption are fundamentally incompatible. Okay, so in other words, not technically feasible. So essentially by [00:17:30] admitting and facing reality, this junior minister Stephen Parkinson set off a firestorm.

(00:17:39):
Was the fire set deliberately? Did the senior minister set this up to have the junior drop this and then hide? Maybe I'm being too cynical, I don't know. Reuters continues their coverage saying senior technology minister Michelle Donelan, however, denied the following day, [00:18:00] on Thursday, that the bill had been watered down in the final stages before it becomes law. She told Times Radio, we have not changed the bill at all, which doesn't seem to be true, but that's what she said. If there was a situation where the mitigations that the social media providers are taking are not enough, and we already know they won't be, she said, and if after [00:18:30] further work with the regulator they still cannot demonstrate that they can meet the requirements within the bill, then the conversation about technology around encryption takes place. Okay? Huh?

(00:18:48):
What does that mean, anyway? She said further work to develop the technology was needed, but added that government funded research had shown it was possible. Okay. All [00:19:00] of this is new information, right? Okay, so that's the official CYA story from the UK's senior technology minister, but that's not the whole story, because the Online Safety Bill actually was amended despite the fact that Michelle Donelan just said it had not been changed at all, and she's desperately trying to obfuscate that fact. So here's [00:19:30] the way AppleInsider explained what happened. They said, despite introducing a clause that means its Online Safety Bill is no longer a concern for Apple, WhatsApp or users, the UK government is insisting with a straight face that it's still exactly as tough on big tech as before. On Wednesday the UK Parliament debated an [00:20:00] Online Safety Bill that in its original form would've seen Apple, WhatsApp, Signal and more shutter their messaging and social media services in the country.

(00:20:10):
Bowing to that pressure, wrote AppleInsider, the UK regulator Ofcom introduced a face saving clause that effectively stopped the country's nonsensical demands to break end-to-end encryption. Except the Conservative government, [00:20:30] which was pushing for this against the advice of security experts and even an ex-MI5 head, insists that it has not even blinked. As spotted by Reuters, UK technology minister Michelle Donelan told Times Radio the same thing I just shared: We haven't changed the bill at all. If there was a situation where the mitigations the social media providers are taking were not enough, and if after further work with the regulator they still can't demonstrate [00:21:00] that they can meet the requirements within the bill, then the conversation about technology around encryption takes place. Anyway, I don't think anyone's ever going to listen to her or take her seriously again, but AppleInsider says Ofcom's amendment to the bill said that firms such as Apple would be ordered to open up their encryption only where technically feasible and where technology [00:21:30] has been accredited as meeting minimum standards of accuracy in detecting only child sexual abuse and exploitation content.

(00:21:44):
In other words, not really, but we had to say something, right? So AppleInsider opines, saying there is no technology today that will allow [00:22:00] only the good guys to break end-to-end encryption, and there never will be. Consequently, they write, the Tory government can argue, and is arguing, that no word has been changed in the bill, but words have been added, and they neuter the entire non... Yeah, we didn't change anything, didn't change anything, but we added a few more words, sprinkled a little. Yeah, yeah, exactly. They're just kind of like, oh, [00:22:30] as AppleInsider says, they neutered the entire nonsensical and unenforceable plan. Okay, so that's the story, and it's big news because of the critically important precedent that this sets. For their coverage of this, Wired interviewed Signal's quite outspoken president, Meredith Whittaker, who we've also often quoted here. So here's what Wired wrote of what Meredith had to say. They said Meredith Whittaker, [00:23:00] president of the Signal Foundation, which operates the Signal messaging service, said it's absolutely a victory. It commits to not using broken tech or broken techniques to undermine end-to-end encryption.

(00:23:19):
And then Wired said Whittaker acknowledges that, okay, it's not enough that the law simply won't be aggressively enforced, but it's major. She said, [00:23:30] we can recognize a win without claiming that this is the final victory. Wired continues, saying the implications of the British government backing down, even partially, will reverberate far beyond the UK, Whittaker says. Security services around the world have been pushing for measures to weaken end-to-end encryption, and there is a similar battle going on in Europe over CSAM, where [00:24:00] the European Union Commissioner in charge of home affairs has been pushing similar unproven technologies. Whittaker said it's huge in terms of arresting the type of permissive international precedent that this would set. The UK was the first jurisdiction to be pushing this kind of mass surveillance. It stops that momentum, and that's huge for the world.

(00:24:28):
And yes, [00:24:30] I believe this is authentically a huge deal. No one has any real problem with face saving legislation being created to allow the politicians to tell their CSAM activists that they now have powerful new legislation on the books which will, the moment it can be shown to be technically feasible to do this with the required level of accuracy, compel all encrypted [00:25:00] messaging providers to protect the children, and those politicians can truthfully state that this was the strongest legislation they were able to obtain, because indeed it was. We know that this will in no way pacify Sarah Gardner, whom we talked about last week after she threatened Tim Cook at Apple with her forthcoming pressure campaign, which starts this week, to [00:25:30] compel Apple to perform client-side scanning for known CSAM imagery. But Sarah appears to be a lost cause. She has no problem demanding whatever concessions to everyone else's security and privacy might be needed to even incrementally offer improved protection for children.

(00:25:52):
Everyone is for improving child protection, but there's no way to do that without compromising everyone's security and privacy, [00:26:00] including the children's. So the free world appears to have just taken the first big step toward the resolution of the encryption dilemma. It's going to be interesting now to see what the European Union does. Maybe they'll also just put the same sort of caveat into their legislation and everyone can continue ignoring it, which would be wonderful. And then does that end up trickling down here into the US as the US government [00:26:30] tries to pursue a no-encryption policy? What we see with things like the EU and the GDPR, annoying as that GDPR is, it's helping us, I think, and US politicians to say, oh yeah, I guess privacy really is good. Maybe we should have some of that here too.

(00:26:57):
It's at least forcing more of a conversation and actually [00:27:00] taking a look at these issues, and all of the encryption providers can say, when the US tries to do this, hey, over there. Yeah, exactly. Over there, they worked it out. They're okay with this. So just cool your jets. Yeah, interesting. Okay, so as we know from July, a Chinese-based [00:27:30] attacker known as Storm-0558 somehow managed to acquire one of Microsoft's what was supposed to be very secret keys, which then allowed it to forge login tokens, which they were able to use to access the private email of OWA and Outlook.com users to, like, serious effect. In the wake of these revelations, [00:28:00] the entire security world has been left wondering exactly how Microsoft had managed to fumble the crucial protection of this very important key. So last Wednesday, after a series of preliminary blog postings, Microsoft finally provided what may be the conclusion of their investigation, and it did answer some of these questions.

(00:28:28):
Here's what Microsoft shared. [00:28:30] They wrote, Microsoft maintains a highly isolated and restricted production environment. Controls for Microsoft employee access to production infrastructure include background checks, dedicated accounts, secure access workstations, and multifactor authentication using hardware token devices. Controls in this environment also prevent the use of email, conferencing, web research and other collaboration [00:29:00] tools, which can lead to common account compromise vectors such as malware infections or phishing, as well as restricting access to systems and data using just-in-time and just-enough-access policies. So that paragraph went to explaining all the things they did and designed on purpose to prevent anything like this from ever happening. Then they go on: our corporate environment, which also requires [00:29:30] secure authentication and secure devices, allows for email, conferencing, web research and other collaboration tools. So in other words, there's the production environment and the corporate environment. They said, while these tools are important, they also make users vulnerable to spearphishing, token-stealing malware, and other account compromise vectors.

(00:29:54):
For this reason, by policy and as part of our zero trust and assume-[00:30:00]breach mindset, key material should not leave our production environment. That is the first one, the one that's highly protected. Our investigation found that a consumer signing system, a consumer signing system crash in April of 2021, so two and a half years ago, [00:30:30] a consumer signing system crash in April of 2021, resulted in a snapshot of the crashed process, a crash dump. The crash dumps, which redact sensitive information, should not include the signing key. And I'll just put a pin in this here and add that, okay, [00:31:00] the crash dumps should not need to redact sensitive information, since a signing key should never be in RAM to be dumped after a crash. They should be in a hardware security module. What the heck is a signing key ever doing in RAM? But we'll get back to that.

(00:31:24):
So Microsoft says, in this case, a race condition allowed the key, [00:31:30] which obviously was present in RAM, to be put out in a dump. They said, to be present in the crash dump. Okay, a race condition allowed the key to be present in the crash dump. This issue has been corrected. So if we had a bell, we would ring it. Ding! There's the first bug fixed. The key material's presence in the crash dump was not detected by our systems. [00:32:00] Ding! They said, this issue has been corrected. Bug number two. We found that this crash dump, believed at the time not to contain key material, was subsequently moved from the isolated production network into our debugging environment on the internet-connected corporate network, which we've already said is not as secure as the production environment. They said, this is consistent with our standard [00:32:30] debugging processes.

(00:32:32):
Then, our credential scanning methods did not detect its presence. Ding! Bug number three. This issue has been corrected. Okay, so so far we've got three strikes. They continue: after April 2021, that's when this crash occurred, when the key was leaked to the corporate environment, and remember the key was leaked because it passed through three bugs, none of [00:33:00] which should have existed, and they've been fixed now. Okay, when the key was leaked to the corporate environment in the crash dump, the Storm-0558 actor was able to successfully compromise a Microsoft engineer's corporate account. This account had access to the debugging environment containing the crash dump, which incorrectly contained the key. Due to log retention policies, [00:33:30] we don't have logs with specific evidence of this exfiltration by this actor, but this was the most probable mechanism by which the actor acquired the key. Okay, so why was a consumer key able to access enterprise email?

(00:33:52):
Right? Microsoft explains: to meet growing customer demand to support applications [00:34:00] which work with both consumer and enterprise applications, Microsoft introduced a common key metadata publishing endpoint in September of 2018. As part of this converged offering, Microsoft updated documentation to clarify the requirements for key scope validation, which key to use for enterprise accounts and which to use for consumer accounts. [00:34:30] As part of a preexisting library of documentation and helper APIs, Microsoft provided an API to help validate the signatures cryptographically, but did not update these libraries to perform this scope validation automatically. Ding! Number four; this issue, they say, has been corrected. [00:35:00] The mail systems were updated to use the common metadata endpoint in 2022. Developers in the mail system incorrectly assumed libraries performed complete validation and did not add the required issuer scope validation. Thus the mail system would accept a request for enterprise email using a security token signed with the consumer key. [00:35:30] Ding!

(00:35:31):
Number five, this issue has been corrected using the updated libraries. Finally: Microsoft is continuously hardening systems as part of our defense-in-depth strategy. Investments which have been made related to MSA key management are covered, and they have a blog, the Storm-0558 blog. Items detailed in this blog are a subset of these [00:36:00] overall investments. In other words, as a consequence of this, we're now doing better than we were before. They said, we're summarizing the improvements specific to these findings here. For clarity, we have four bullet points: identified and resolved race condition that allowed the signing key to be present in crash dumps; enhanced prevention, detection and response for key material erroneously included in crash dumps; [00:36:30] enhanced credential scanning to better detect presence of signing key in the debugging environment; and finally, released enhanced libraries to automate key scope validation in authentication libraries, and clarified related documentation.
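To make bug number five concrete, here's a generic sketch, not Microsoft's actual code, of the difference between merely verifying a token's signature and also validating the signing key's scope and the token's issuer. All names and fields below are illustrative assumptions.

```typescript
// Hypothetical model of the scope-validation gap described above.
interface SigningKey { id: string; audience: "consumer" | "enterprise"; }
interface Token {
  keyId: string;
  issuer: string;
  signatureValid(key: SigningKey): boolean;
}

// What the mail system effectively did: any known key with a valid signature
// was accepted, so a token signed with the *consumer* key opened enterprise mail.
function validateNaive(token: Token, knownKeys: SigningKey[]): boolean {
  const key = knownKeys.find(k => k.id === token.keyId);
  return !!key && token.signatureValid(key);
}

// The corrected behavior: also confirm the key's scope matches the resource
// being accessed, and that the issuer matches the expected tenant.
function validateScoped(
  token: Token,
  knownKeys: SigningKey[],
  resource: "consumer" | "enterprise",
  expectedIssuer: string
): boolean {
  const key = knownKeys.find(k => k.id === token.keyId);
  return !!key
    && token.signatureValid(key)
    && key.audience === resource        // a consumer key can't unlock enterprise mail
    && token.issuer === expectedIssuer; // issuer/tenant validation
}
```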

(00:36:48):
And really those last two things, the library kind of stuff, synchronization and so forth, that just kind of feels like huge corporation stuff. Yeah, [00:37:00] it's understandable that something changed over in department A, and department B was using it, but they didn't get theirs refreshed and updated. I mean, yeah. Okay, so as Microsoft has explained this mess-up, a series of five separate and previously undiscovered bugs, all of which they have since found and fixed, were responsible for allowing [00:37:30] a key which should have never left Microsoft to be exfiltrated by Chinese attackers and then used to remotely compromise what should have been high-security enterprise email. It's obvious that the key should have never been allowed to leave Microsoft, but as I said before, what they appear to have conveniently skipped over is why that key ever left the HSM, the hardware [00:38:00] security module, which was the only place it should have ever existed.

(00:38:08):
It's really worth having all of us note that not one of those five flaws would have caused any trouble if that secret key had not been in the system's RAM at the time of that fateful crash. This is precisely why HSMs [00:38:30] exist. It's why, for example, GRC's code signing keys do not ever exist in any RAM. They are sequestered in hardware, completely inaccessible to the outside world once installed there, and only able to be used to sign signature hashes. You cannot query the hardware for the key. It won't give it to you. It will only agree to use it, until it expires. [00:39:00] I've always said that anyone can make a mistake. In this instance Microsoft made five big ones, but policy is a different matter, and Microsoft completely dodged the question of how they could have ever had a policy to allow a crucial signing key to be present in RAM.

(00:39:25):
That's just never okay and it got 'em in trouble here. [00:39:30] So anyway, at least now we understand how it happened. It was a bunch of mistakes. I'm impressed that they have all of those buggy things in the pipeline that were meant to work and meant to catch this problem. I mean there was an intention to prevent this from happening again. They're doing an awful lot of work which could have been resolved by having this [00:40:00] in a hardware security module in the first place. So I don't get that, but the fact that they had those things demonstrated noble intent. Unfortunately they were buggy and so the key slipped right past all three of them in that chain, but kind of cool that it was there. But anyway, at least we know how it happened. It was a mess and they're going to do better. But really the number one takeaway from this entire debacle is [00:40:30] don't have important keys in RAM ever. No matter how tightly you believe you have protected them from ever being divulged. That's why we have HSMs and they're not even expensive. I'm sure Microsoft can afford one. I've got several.
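For anyone who hasn't worked with one, here's a minimal sketch of the contract an HSM enforces. The interface below is hypothetical; real devices are typically driven through standards such as PKCS#11. The point is simply that there is no export operation, so the private key can never appear in application RAM or, therefore, in a crash dump.

```typescript
// Minimal sketch of a non-exportable signing device's contract.
import { createHash } from "crypto";

interface SigningModule {
  sign(digest: Buffer): Buffer; // performed inside the device, using the key it holds
  publicKey(): Buffer;          // the public half may be exported freely
  // Deliberately absent: exportPrivateKey(). The private key never enters
  // the calling process's memory, so a crash dump cannot leak it.
}

// The application only ever hands the device a hash to be signed.
function signToken(hsm: SigningModule, tokenBody: string): Buffer {
  const digest = createHash("sha256").update(tokenBody).digest();
  return hsm.sign(digest); // key material stays inside the module
}
```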

(00:40:54):
Jason, let's tell our listeners why we're here, and then I'm going to explain why I'm glad my car [00:41:00] is as old as it is. I can't wait for you to explain to me why I shouldn't be driving these new cars, or newer cars, at least newer than 19 years old. Sorry to tell you, my friend, that Tesla is the worst. I am not that surprised to hear that. We're going to talk all about that coming up next, but first let's take a break and thank the sponsor of this episode of Security Now. This episode is brought to you by our friends at IT Pro TV, now called ACI Learning. In [00:41:30] today's IT talent shortage, whether you operate as your own department or maybe you're part of a larger team, your skills must be up to date. 94% of CIOs and CISOs agree that attracting and retaining talent is increasingly critical to their roles, and you can access more than 7,200 hours of content available with ACI Learning.

(00:41:56):
They're consistently adding new content to keep you at the top [00:42:00] of your game. So your team is actually going to thank you for entertaining training. ACI Learning's completion rate is actually 50% higher than their competitors'. So your team's going to learn, and they're going to love to learn. ACI Learning is excited to introduce Cyber Skills. This is a solution to future-proof your entire organization, not just the IT department. This new cybersecurity training tool is for all members of your organization. It's cybersecurity awareness training for non-IT [00:42:30] professionals, because we can all use a little bit of extra education in cybersecurity. With Cyber Skills you get flexible on-demand training covering everything from password security and phishing scams to malware prevention and network safety. Your employees will stay motivated, they'll stay engaged throughout their entire learning process with easy to follow material. There's a simple one hour course overview, and in that overview your employees gain attack-specific [00:43:00] training and knowledge check assessments based on common cyber threats that they will encounter on a daily basis, that we all will, right?

(00:43:09):
There are certain security things that happen to everybody whether you recognize them or not. That's what's so important about this. They'll also gain access to bonus courses, documentary style episodes, so your employees can learn about cyber attacks and breaches in their own style. ACI Learning helps you invest in your team and [00:43:30] entrust them to thrive while increasing the entire security of your business. You can boost your enterprise cybersecurity confidence today with ACI Learning. Be bold, train smart. Visit go.acilearning.com/twit, and our listeners, TWiT listeners, will receive at least 20% off or as much as 65% off an IT Pro enterprise solution plan. The discount there is based on the size of your team, and [00:44:00] when you fill out their form, you'll receive a proper quote tailored to your needs. How about that? go.acilearning.com/twit, and we thank them for their support of Security Now.

(00:44:12):
Alright, here's where the world comes crashing down on us if we have a new car, or a newer-than-19-year-old car like Steve has. Apparently we might not be so happy about that after hearing this story. Well, as we often [00:44:30] say, to be forewarned is to be forearmed. Sure. I titled this piece "The car I drive is 19 years old and I'm more glad than ever." The reason is, it's a car. It's not a continuously connected mobile entertainment system on wheels. It's a car. It does what a car is supposed to do. It moves my butt from one place to another. The reason I'm more glad than ever that all my car does [00:45:00] is move me is that I read the research that was just conducted and published last Wednesday by the Mozilla Foundation. They titled this "It's official: cars are the worst product category we have ever reviewed for privacy," and a subhead might be "25 car brands tested and 25 car brands failed."

(00:45:29):
So here's what their research [00:45:30] uncovered. I've edited their posting a little bit for the podcast. They said: Ah, the wind in your hair, the open road ahead, and not a care in the world... except for all the trackers, cameras, microphones and sensors capturing your every move. Ugh. Modern cars are a privacy nightmare. They wrote, car makers have been bragging about their cars being computers on wheels for years to promote their advanced features. However, [00:46:00] the conversation about what driving a computer means for its occupants' privacy hasn't really caught up. And as we'll see, I make this point later, I think that's exactly the case. It's happened quickly and we haven't caught up. They said, while we worried that our doorbells and watches that connect us to the internet might be spying on us, car brands quietly entered the data business by turning their vehicles into powerful data-[00:46:30]gobbling machines.

(00:46:31):
Machines that, because of all those brag-worthy bells and whistles, have an unmatched power to watch, listen and collect information about what you do and where you go in your car. All 25 car brands we researched earned our Privacy Not Included warning label, making cars the official worst category of products for privacy that [00:47:00] we have ever reviewed. The car brands we researched are terrible at privacy and security. For one thing, they collect too much personal data, every single one of them. We reviewed 25 car brands in our research and we handed out 25 dings, as in a dent in your car, 25 dings for how those companies collect and use data and personal information. [00:47:30] That's right, every car brand we looked at collects more personal data than necessary and uses that information for a reason other than to operate your vehicle and manage their relationship with you.

(00:47:46):
For context, only 63%, by comparison, of the mental health apps [00:48:00], and they said another product category that stinks at privacy, we reviewed this year received this ding. But it was 100% for automobiles. So cars are worse than mental health apps at managing privacy is their point. They said, and car companies have so many more data collecting opportunities than other products and apps we use, more than even smart devices in our homes or the cell phones [00:48:30] we take wherever we go. They can collect personal information from how you interact with your car, the connected services you use in your car, the car's app, which provides a gateway to information on your phone, and can gather even more information about you from third party sources like SiriusXM or Google Maps. It's a mess. The ways car companies collect and share your data are [00:49:00] so vast and complicated that we wrote an entire piece on how that works.

(00:49:05):
The gist is, they can collect super intimate information about you, from your medical information, your genetic information, to your sex life (then they put, in parens, "seriously"), to how fast you drive, where you drive and what songs you play in your car, in huge [00:49:30] quantities. They then use it to invent more data about you through inferences about things like your intelligence, your abilities and your interests. And get this, most, 84%, share or sell that data. Okay, and I'll just stop here: that was the surprise to me. It's like, what? They are data [00:50:00] retailers. They are retailing the data that they're collecting about their drivers. Mozilla said, it's bad enough for the behemoth corporations that own the car brands to have all that personal information in their possession, to use for their own research, marketing or the ultra-vague "business purposes." But then most, 84%, of the car brands [00:50:30] we researched say they can share your personal data with service providers, data brokers or other businesses we know little to nothing about, and worse, 19 of the 25, which would be 76%, say they can sell your personal data.

(00:50:52):
A surprising number, 56% of those, of the total 25, also say they can share your [00:51:00] information with the government or law enforcement in response to a request. And, they write, not merely a high-bar court order, but something as easy as an informal request, a very low bar. Car companies' willingness to share your data is beyond creepy, writes Mozilla. It has the potential to cause real [00:51:30] harm, and inspired our worst cars and privacy nightmares. And keep in mind, they say, that we only know what companies do with your personal data because of the privacy laws that make it illegal not to disclose that information, such as California's Consumer Privacy Act. So-called anonymized and aggregated data can be, and probably is, shared too, with [00:52:00] vehicle data hubs, who are the data brokers of the auto industry, and others. So while you are getting from point A to point B, you're also funding your car's thriving side hustle in the data business in more ways than one. Next, most,

(00:52:22):
92% of them, in their study, give drivers little to no control over their personal data. [00:52:30] All but two of the 25 car brands we reviewed earned our ding for data control, meaning only two car brands, Renault and Dacia, both owned by the same parent company, say that all drivers have the right to have their personal data deleted. None of the others do. While we would like to think this deviation from the norm is one car company taking a stand [00:53:00] for drivers' privacy, it's probably no coincidence that these cars are only available in Europe, which is protected by the robust General Data Protection Regulation, the GDPR privacy law. In other words, car brands often do whatever they can legally get away with to your personal data.

(00:53:24):
They wrote, we could not confirm whether any of them meet our minimum security [00:53:30] standards. They said, it's so strange to us that dating apps and sex toys publish more detailed security information than cars. Even though the car brands we researched each had several long-winded privacy policies, Toyota wins with 12, we could not find confirmation that any of the brands meet our minimum security [00:54:00] standards. Our main concern is that we can't tell whether any of the cars encrypt the personal information that sits on the car, and that's the bare minimum. We don't call them our state-of-the-art security standards, after all, but our bare minimum security standards. We reached out, as we always do, by email to ask for clarity, but most of the car companies completely ignored us. Those who at least [00:54:30] responded, Mercedes-Benz, Honda and, technically, Ford, still didn't completely answer our basic security questions. A failure to properly address cybersecurity might explain their frankly embarrassing security and privacy track records.

(00:54:50):
We only looked at the last three years but still found plenty to go on, with 17 of them, that's [00:55:00] 68% of the car brands, earning the bad track record ding for leaks, hacks and breaches that threatened their drivers' privacy. Okay, so then in the article they provide a car-by-car table of these transgressions, several columns of classifications of problems by 25 rows, one for each of the car brands, but frankly the table's not [00:55:30] worth examining. They're all really bad. I think that, as I said, I think we're seeing a classic case of oversight. This is a recently emerged feature category that no one has yet really focused on, and boy, I really do hope that the privacy people take a look at this and say, whoa, this is still the wild west out there. So they had then a few additional points. They said, Tesla [00:56:00] is only the second product we have ever reviewed to receive all of our privacy dings.

(00:56:08):
The first was an AI chatbot we reviewed earlier this year. They said, what set them apart was earning the untrustworthy AI ding. Tesla's AI-powered Autopilot was reportedly involved [00:56:30] in 17 deaths and 736 crashes and is currently the subject of multiple government investigations. So, ouch. Nissan earned its second-to-last spot for collecting some of the creepiest categories of data we have ever seen. They wrote, it's worth reading the review in [00:57:00] full, but you should know it includes your sexual activity. Not to be outdone, Kia also mentions they can collect information about your sex life in their privacy policy. Oh, and six car companies say they can collect your genetic information or genetic characteristics. They said, yes, reading car privacy policies [00:57:30] is a scary endeavor. Now I'll just interject here to suggest that the fact that it can be done doesn't mean that it is being done or has ever been done.

(00:57:46):
These sorts of statements in privacy policies feel like overly broad policies that arise after some wing nut brings an unfounded lawsuit against an automaker, to which the [00:58:00] firm's attorneys will then overreact by adding a clause stating, for example, that they cannot be held responsible for anything that happens if you're picked up by space aliens while operating their motor vehicle. This is not meant to suggest that those things will happen, that you will be abducted. They're just saying, if they should, then don't go suing us, because the policy you already agreed to by driving our car says [00:58:30] it's not our fault if something happens. And with regard to references to sexual activity and sex life, that could refer to the car's GPS.

(00:58:44):
The GPS is recording where you're going, and so if it's used after the fact to infer something about the driver based upon when they went where, well then again, they've included a broad exclusion because [00:59:00] of some past lawsuit that they suffered. So that's probably what that is about, I hope. Mozilla also said, none of the car brands use language that meets Mozilla's privacy standard about sharing information with the government or law enforcement, but Hyundai goes above and beyond. In their privacy policy it says they'll comply with lawful requests, whether formal or informal. All of the car brands on this list except [00:59:30] for Tesla, Renault and Dacia signed on to a list of consumer protection principles from the US automotive industry group Alliance for Automotive Innovation, Inc. The list includes great privacy-preserving principles such as data minimization, transparency and choice. But how many of the car brands actually follow these principles?

(00:59:57):
Zero. It's [01:00:00] interesting if only because it means the car companies do clearly know what they should be doing to respect your privacy even though they absolutely don't do it. This is usually where we'd encourage you to read our reviews and to choose the products you can trust when you can, but unfortunately cars aren't really like that. Sure, there are some steps you can take to protect more of your privacy and we've listed them all in each of our reviews under tips to protect yourself. They're definitely worth doing. [01:00:30] You can also avoid using your car's app or limit its permissions on your phone, but compared to all the data collection you can't control these steps feel like tiny drops in a massive bucket. Plus you deserve to benefit from all the features you pay for without also having to give up your privacy and they finish.

(01:00:52):
We spent over 600 hours researching the car brands' privacy practices. That's [01:01:00] three times as much time per product as we normally spend. Even so, we were left with so many questions. None of the privacy policies promise a full picture of how your data is used and shared. If three privacy researchers, that's how many they had on this project, can barely get to the bottom of what's going on with cars, how does the average time-pressed person stand a chance? No kidding. Many people [01:01:30] have lifestyles that require driving. So unlike a smart faucet or voice assistant, you don't have the same freedom to opt out of the whole thing and not drive a car at all. We've talked about the murky ways that companies can manipulate your consent, and car companies are no exception. Often they ignore your consent, sometimes they assume it. Car companies do that by assuming that you have read and agreed [01:02:00] to their policies before you step foot in their cars. Subaru's privacy policy even says that the passengers of a car that uses connected services have consented to allow them to use, and maybe even sell, their personal information just by being in the car. When car companies say they have your consent, or won't do something without [01:02:30] your consent, it often doesn't mean what it should.

(01:02:36):
As I said, I think this is an area that has until now escaped oversight. Hopefully research like this, which puts the problem squarely on the map, will eventually help that to happen. And wow, we are driving around inside of connected computers, Jason, and they do indeed have sensors galore. [01:03:00] They're connected, they know where we are, they know what time it is, they know everything we do with our computerized entertainment systems, what stations we're listening to. You could imagine they're filled with cameras. Are they not going to monetize that? Why would we imagine they would not monetize that? Yeah, I mean none of this is, I mean, it's shocking, but it's also unsurprising, right? I guess it is saddening. I don't know that [01:03:30] it's saddening. Yeah, it is saddening, but at the same time we live in the data economy, and man, at this point I just feel so beaten down about how my data is used all over the place.

(01:03:44):
It would be easy for me to be like, well, but I carry around a smartphone and it does a lot of those things, and apparently I've said that that's okay. I still have a smartphone and have for years. But I mean, you're right, vehicles have the potential of having [01:04:00] cameras inside that can be monitored, potentially, and that's just one example, and it's always following you wherever you go. So yeah, that data has value. That's the unfortunate reality. Yeah, and I guess it seems to me that the minimum we could request is the ability, the transparency, to know exactly what data is being collected, and then the [01:04:30] ability to ask for it to be deleted. And true, most drivers never will. They're not listening to this podcast, they're just not concerned. They go, oh yeah, well, whatever. But the good news is there is legislation which is moving in this direction, which gives consumers control if they want it.

(01:04:50):
Totally. And at this point the automobile industry is way behind on that score, obviously. Okay, so we've got some neat feedback [01:05:00] from our listeners. Someone whose name I've known, I've known him from years of transacting with him. His handle is Ramriot, but he's named himself on Twitter 418, which of course is one of the error codes that can be returned by HTTP. I think it's "I'm a teapot." Anyway, his 418 is the teapot. [01:05:30] Anyway, he said, hi Steve, this DOM, meaning the Document Object Model for our web browsers, this DOM issue is a tough but old nut, raised again in connection to extensions, which we talked about last week. Would this be an opportunity for browser vendors to tighten up the same-origin rules for access to form fields, perhaps make them write-only, immutable objects when accessed cross-origin?

(01:05:59):
And so, [01:06:00] great point and question. I strongly suspect that the real problem at this point, that we face from a practical standpoint, is breakage of the already existing, quite rich browser extension ecosystem, not to mention the loss of third-party password managers, which do have to poke into every website's forms in order to do their work. But even more broadly, [01:06:30] there would be breakage that would result from any further tightening of access by extensions. We already saw the uproar that Chrome's rather modest move to the V3 manifest caused, and the trouble we were talking about last week was after this move. So these are things that you could still do even under the V3 manifest. So things remain extremely permissive today. Just think of the degree [01:07:00] to which we must trust today's browser, and this is really sobering when you think about it. The degree to which we must trust today's browser: the browser itself, through it passes everything. Nearly our entire interaction with the world today is through our chosen browser.

(01:07:23):
Interactive applications have moved, or are moving, from desktop applications to browser-[01:07:30]hosted apps. All of our usernames and our passwords and the private information we fill out as we interact with anything, the IRS or credit bureaus or loan applications or our doctor's offices or dating sites or any retailer, everything we do today passes through our browser. And now we're reminded that if our password managers are able to see everything we do, [01:08:00] then so are other extensions which we might trust far less, yet here they sit watching, because they provide some little browser doodle that we like and we don't want to now live without. We started off without much concern for browser extension security and privacy many years ago, but now that we feel we need more security and privacy, it's difficult to take it back [01:08:30] without sacrificing the rich feature set and environment that these extensions provide to us.
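As a concrete illustration of why that trust is so absolute, here's a hedged sketch of a browser extension content script; the extension is assumed to have been granted host access in its manifest. Anything an extension like this can do to helpfully autofill a password field, a less trustworthy one can do to read it.

```typescript
// content-script.ts -- a content script shares the page's DOM, so nothing
// prevents it from observing the form fields the user fills in.
document.addEventListener(
  "submit",
  () => {
    const fields = document.querySelectorAll<HTMLInputElement>(
      'input[type="password"], input[type="email"]'
    );
    for (const field of fields) {
      // A benign password manager might autofill here; a malicious extension
      // could just as easily exfiltrate field.value to its own server.
      console.log(field.name, field.value.length);
    }
  },
  true // capture phase, so it runs before the page's own handlers
);
```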

(01:08:38):
Both Firefox and Chrome are aware of this problem, which is why both of them allow their users to decide which extensions should be allowed to follow them into the browser's private viewing mode. And depending upon how extension-laden any user's normal browsing is, [01:09:00] it might be worthwhile to consider trimming back on the extensions that are allowed to run, well, I would argue, first normally, but also specifically in Chrome's Incognito mode or browsers' private window modes. So you're able to then switch into there and have many fewer things watching what you do. It's not just the browser that's watching, and as I've said, we have to utterly and absolutely trust the browser [01:09:30] because it sees everything we do. Turns out extensions, which we may trust far less than the browser, who knows where they came from, are able to see what's going on too. The alternative, and I've heard that some of our listeners are doing this, would be to reserve a secondary browser, because we certainly don't lack for choice of browsers today.

(01:09:53):
Everyone can use Firefox or Chrome, so reserve a secondary browser which is [01:10:00] running maybe only your password manager and nothing else. Obviously we're trusting our password manager a lot also, so have a browser that only does that, so it's able to log you in places, but then you don't have any other, perhaps sketchy, extensions that are doing things. You may not want to live there without all your extensions, but that's where you might want to go when you're doing something that is much more confidential. In any event, we [01:10:30] do currently have a problem that's going to require some eventual resolution, because right now, as we noted last week, the need to trust every extension with everything we do, just like we do with our password manager and our browser, that's a problem. Someone posting as Person Typing Number 22, I guess he feels he's rather generic.

(01:10:56):
He said, hey again, Steve. In last week's SN 938, [01:11:00] the trade-off between security and convenience was mentioned with respect to websites and browser extensions like password managers. I figured it was worth mentioning that on the Mac and on iOS, I use Apple's Universal AutoFill with a compatible password manager, and he says 1Password is an example, and I use KeePass. He said, for most browsing I use Firefox, but to log into [01:11:30] banking and similar sites, I use Safari on the Mac without any extensions. The OS itself recognizes websites' password fields and allows me to choose a password to autofill from my password manager. I feel like this provides lots of both security and convenience. And I completely agree, that is a great solution. It's sort of a variation of the dual-browser approach, [01:12:00] but it also strengthens the isolation from even the password manager by interposing an OS, one that's as secure as Apple has been able to design any OS, into the path.

(01:12:16):
I think that's very nice. And I had a great comment from a guy whose name I don't know, because he uses what we're now calling X, what used to be known as [01:12:30] Twitter, so frequently that it wouldn't let him log in. So he posted from his wife's account. Anyway, whoever he is, he said, hello Steve, thank you for the many great shows. I've been listening since episode one and was overjoyed to hear you're not stopping at 999. With regard to the web extension security research story, I helped develop a web application used internally by the majority of banks [01:13:00] in the US a few years ago. We implemented Content Security Policy, CSP, headers. CSP has a wonderful feature where all violations can be reported to the website to help fix bad rules. During the initial rollout, we reviewed the violations and found that JavaScript was being injected into our sites by browser extensions.
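For anyone who hasn't used it, here's a hedged sketch of the kind of CSP violation reporting the listener describes, shown with Express; the policy and the endpoint path are illustrative assumptions, not the bank's actual configuration.

```typescript
// Hypothetical CSP-with-reporting setup.
import express from "express";

const app = express();

// Attach a restrictive policy plus a reporting endpoint to every response.
app.use((_req, res, next) => {
  res.setHeader(
    "Content-Security-Policy",
    "default-src 'self'; script-src 'self'; report-uri /csp-violations"
  );
  next();
});

// Browsers POST a JSON body describing each blocked resource here, which is
// how injected scripts from extensions or adware show up during a rollout.
app.post("/csp-violations", express.json({ type: "*/*" }), (req, res) => {
  console.warn("CSP violation:", req.body["csp-report"]);
  res.sendStatus(204);
});

app.listen(3000);
```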

(01:13:30):
[01:13:30] A few of these extensions seemed to have questionable intentions and were likely installed by adware. I do not think the research paper's solution of adding a secure input element or alerting the user to nefarious activities is adequate, since any extension could alter the source before the DOM is rendered and therefore could strip out these protections. A website can try its best [01:14:00] to obfuscate input and output, but at the end of the day a browser extension can access or modify headers, including cookies, requests, and responses. It is an ideal position for a man-in-the-middle attack while the user thinks their connection is secure and private. Maybe something similar to CSP or HSTS, where a site with sensitive information could request the browser to disable all browser extensions, [01:14:30] could help protect users. Of course, sites with advertisements would quickly abuse this power to block good extensions like uBlock Origin.

(01:14:40):
So maybe this would just add complexity to an already impossible problem. Where I work, he said, browser extensions have been disabled. It is annoying that many useful tools are blocked, and yet I cannot argue with their decision. So wow, [01:15:00] this guy's workplace said, nope, sorry, there's just too much danger there, all browser extensions are disabled. It would be a pain not to have the benefit of a password manager extension, but on the other hand, browsers are now universally offering their own built-in password managers, so he's not without autofill. What was interesting was that the moment I heard this [01:15:30] listener talk about some means to allow websites to force-disable extensions, I was reminded that exactly such a proposal was floated through the industry. I'm not exactly sure when, like a month ago; I don't recall the details and I don't think we discussed it here, but I believe I remember Leo, Stacey, and Jeff discussing it on This Week in Google, and I also recall that many naysayers were suggesting that this was just a slimy way of disabling ad [01:16:00] blockers.
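For what it's worth, the usual way a workplace disables extensions outright is through the browser's managed enterprise policies. As a purely illustrative sketch, assuming Chrome on Windows and its documented ExtensionInstallBlocklist and ExtensionInstallAllowlist policies, an administrator might push something like:

reg add "HKLM\SOFTWARE\Policies\Google\Chrome\ExtensionInstallBlocklist" /v 1 /t REG_SZ /d "*"
reg add "HKLM\SOFTWARE\Policies\Google\Chrome\ExtensionInstallAllowlist" /v 1 /t REG_SZ /d "<ID of an approved extension>"

With the blocklist wildcarded, no extensions can be installed or run, and only explicitly allowlisted extension IDs survive, which matches the all-or-nothing posture this listener's employer chose.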

(01:16:00):
So right to this listener's point, this is all a mess. Anthony, sorry, Anthony Bosio, said, I think Topics might, meaning Google's Topics solution for profiling, or providing some information about what people are currently interested in, he said, I think Topics might be DOA. [01:16:30] People are interpreting it and spreading it basically as "shares your browser history with other sites." Okay, let's hope that this is just the initial uninformed reaction to anything that's new. Given Chrome's massive market power, any technology Google creates and enforces, by virtue of foreclosing on all alternatives, which they [01:17:00] have said they're going to do next year, is going to succeed, because there won't be any alternative to using it. So I believe Topics cannot be DOA if Google doesn't want it to be. To that we add that Topics is also an extremely benign, non-tracking, privacy-enforcing system, and I expect that while it may take some time for the less technical types [01:17:30] to catch up and to understand it, it's where the entire industry is going to go.

(01:17:35):
And to that I say, yay. Barbara said, some downloadable software basically is a stub that phones home to download the rest while installing. I don't think giving the stub to VirusTotal would be helpful. And of course she's referring to our previous conversation about sending things [01:18:00] you download to VirusTotal and having it check on them before you trust them. And I think Barbara makes a very good point. As we know, not all software we download comes as the entire package. We're now often seeing a much smaller stub that immediately connects back to the home base to download the entire package, which is often many modules deep. The promise is that you select the things you want and it only downloads those [01:18:30] things that you have said you want and intend to use. But again, it doesn't give you a chance to check all of them against VirusTotal.

(01:18:39):
Someone tweeting from the handle Skynet said, Hi Steve, regarding Martin's "duh" about VirusTotal being selectively served clean files from a malicious source: how would such a site even know who or when [01:19:00] they would be doing this, to know to send a clean executable? Do websites even actively monitor who's downloading their content, and if so, wouldn't they have to time it so as to know when to give VirusTotal a clean one instead of a malicious one? He says, I don't get where Martin is getting this idea from. You'd have to be checking logs of IP addresses, wouldn't you? And by the time they discovered that, oh look, VirusTotal is trying to get one of [01:19:30] our most malicious executables, quick, give them this one instead, I don't see how it works. Even with some redirect link, I think it would be too late to detect that it was VirusTotal asking for the file.

(01:19:41):
No? Am I being a doofus for missing a bug or for missing a big duh? Please explain. Okay, so no one's being a doofus here. When I created ShieldsUp 24 years ago, back in 1999, [01:20:00] it was because I knew that the IP address of anyone connecting to my web server was immediately known to the server. So I was able to return custom webpage results based upon the security I had detected at their connecting IP. So it would in fact be simple for the IP address blocks which have been assigned to known security [01:20:30] researchers and VirusTotal to cause different, clean software to be delivered on demand. And we've seen other non-web-server examples of this, where malware is actually aware of the IP addresses of researchers and acts differently, changing its behavior to be non-malicious when it realizes that known researchers are downloading it or examining [01:21:00] it.
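To illustrate just how trivial that kind of IP-keyed serving is, here's a minimal Python sketch of a web server that returns different content depending on the connecting address. The "scanner" ranges below are documentation-only example networks, standing in for whatever researcher or VirusTotal address blocks an attacker believed it knew about:

import ipaddress
from http.server import BaseHTTPRequestHandler, HTTPServer

# Hypothetical stand-ins for "known scanner/researcher" address blocks.
SCANNER_NETS = [ipaddress.ip_network("203.0.113.0/24"),
                ipaddress.ip_network("198.51.100.0/24")]

class CloakingHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # The client's IP is known to the server the instant the connection arrives.
        client_ip = ipaddress.ip_address(self.client_address[0])
        if any(client_ip in net for net in SCANNER_NETS):
            body = b"clean-looking payload"   # what a scanner would be handed
        else:
            body = b"the real payload"        # what everyone else would receive
        self.send_response(200)
        self.send_header("Content-Type", "application/octet-stream")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8080), CloakingHandler).serve_forever()

The point is that no log analysis, timing, or active monitoring is required: the decision is made per request, at the moment of connection.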

(01:21:00):
So yeah, this actually does work and can happen. E Remington says, Hi Steve. One email provider people overlook is iCloud. You can set up your own domain, which by the way I had no idea of. He said, while iCloud limits you to, he says, if I remember right, five email addresses, by using a form like something+anything [01:21:30] @yourdomain.com you can have an infinite number of email addresses. And he says RFC 822, which is the original email specification, and all of its updates, have supported the idea of using the plus symbol followed by a tag in order to differentiate addresses; basically, the tag is ignored and it goes into your main ID for the domain. So anyway, I'm glad that Remington [01:22:00] mentioned iCloud, since he's correct that iCloud as an email provider is easily overlooked, and certainly they are reputable. Magnified247 said, Steve, with Windows 12 being prepped for 2024, will InControl be updated, or will the current version allow for the version and release to be locked accordingly?

(01:22:30):
[01:22:30] Well, you know the old expression about fool me once. As we know, the predecessor to InControl was Never10. I would've named the next one Never11, except after Microsoft changed what was clearly their original intention for Windows 10 to be the last Windows ever, which everyone recalls even if Microsoft now claims it was never what they said, I decided that I had [01:23:00] to drop any major version numbering from the utility. So InControl gets to live on without any further name changes. And since it's all about controlling just a few registry keys, which Microsoft officially supports, it should keep working as long as Microsoft honors those settings, and since their enterprise users depend upon those, I can't see that changing. So I think when Windows [01:23:30] 12 arrives next year, InControl will probably continue to work, and of course if not, if something changes, I'll update it, but I don't expect that I'm going to have to.
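Steve doesn't enumerate the registry keys here, but for anyone curious what Microsoft's documented, officially supported Windows Update target-version settings look like, this is an illustrative sketch of the Group Policy-backed values, not a description of exactly what InControl writes:

Under HKLM\SOFTWARE\Policies\Microsoft\Windows\WindowsUpdate:
  TargetReleaseVersion      (DWORD)  = 1
  TargetReleaseVersionInfo  (REG_SZ) = "22H2"        <- the feature release to hold at
  ProductVersion            (REG_SZ) = "Windows 10"  <- or "Windows 11"

Removing those values, or setting TargetReleaseVersion back to 0, releases the hold, which is the kind of simple on/off behavior a pin-your-version utility needs.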

(01:23:43):
Christian P. said, Hi Steve. Just an observation about the concern from the user on the last Security Now podcast about testing a file directly from VirusTotal and the risk of the file being swapped based on the source of the test. [01:24:00] You can remove any risk by getting VirusTotal to download the file, then rechecking the hash before you execute it. VirusTotal reports the SHA-256, and the hash is also in the URL of the result. So perhaps a sensible process, he says, would be to get VirusTotal to download and check. Then, if that looks largely okay, [01:24:30] download directly to your system and test again with VirusTotal. The second test should take you to the same page; VirusTotal doesn't automatically retest files that have already been submitted. It just recomputes the hash and looks up the last submitted report.

(01:24:52):
You do have the option to reanalyze, but there's little point if the hash is the same, except perhaps if a new scanning engine has [01:25:00] been added since the last test. Anyway, he says, great podcasts, et cetera. Okay, so I wanted to share this since, the way the world is evolving, keeping VirusTotal in one's back pocket I think makes a lot of sense. Christian is right about the way VirusTotal operates. Before it does anything else, it first calculates the SHA-256 hash of the file that it [01:25:30] either downloaded or that the user submitted. It then checks to see whether that file's hash already exists in its library of previously scanned files, and if so, it just returns the previous result. No need to rescan, since it already did, and the matching SHA-256 hash is absolute proof that the file is unchanged from the one that it previously scanned.
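If you'd rather script that local check than eyeball it, here's a minimal Python sketch (the file name on the command line is whatever you downloaded) that computes the same SHA-256 that VirusTotal reports:

import hashlib
import sys

def sha256_of(path: str, chunk_size: int = 1 << 20) -> str:
    """Return the hex SHA-256 of a file, read in chunks so large files are no problem."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

if __name__ == "__main__":
    # Example: python sha256check.py installer.exe
    print(sha256_of(sys.argv[1]))

Compare the printed value against the hash shown in VirusTotal's result URL; if they match, the bytes on your disk are the exact bytes VirusTotal scanned.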

(01:25:57):
And Christian's suggestion [01:26:00] of then uploading your own copy of the hopefully identical file that VirusTotal first approved of, to see whether they indeed match, makes sense. I also wanted to note that Windows has for a while now had a built-in hashfile command, which is actually a subcommand of certutil, which also allows a user to quickly and easily generate [01:26:30] a SHA-256 hash of any file they may wish to check on their own. So you just open a command prompt and type certutil -hashfile, then the path to the file, then SHA256, and hit Enter, and you'll immediately get the SHA-256 hash of the file, which you can [01:27:00] then manually check if you wish. And lastly, Inspector CSO says, @SGgrc, Steve, what DNS services do you recommend for children under 10 to avoid unsafe and unsuitable sites?

(01:27:19):
I think that Leo uses and recommends OpenDNS. They are definitely reputable. They've been [01:27:30] acquired by Cisco, and they have a free family-use tier which they refer to as their FamilyShield service. Using it is as easy as configuring the family's router to use OpenDNS's FamilyShield servers at two very specific IP addresses, 208.67.222.123 and 208.67.220.123. [01:28:00] Once you've done that, you can go to welcome.opendns.com, which will confirm that you're now using their filtered DNS. And I suppose if you wanted a bit more proof that it was working, you could also try going over to PornHub and see whether that works; I would expect you'll not be able to get there [01:28:30] when you're using the FamilyShield service. So anyway, a little quickie, and I'm glad that Inspector CSO thought to ask. So Jason, let's share our last sponsor piece and then, wow, spill the beans.
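For a slightly more direct check, you can also query the FamilyShield resolvers from any machine; for example, something like:

nslookup pornhub.com 208.67.222.123

should return an OpenDNS block-page address rather than the site's real address, while an ordinary domain resolves normally. Note that querying the resolver directly only shows what FamilyShield will do; to confirm your own network is actually using it, welcome.opendns.com remains the simplest test.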

(01:28:52):
Speaking of, yeah, when LastPass finally really screwed up and we had the podcast [01:29:00] "Leaving LastPass," which was where I finally said, okay, there's just no more excuse for this, a lot of people were nervous about what that meant. Their nervousness was based on, or should have been based on, their password and the number of iterations that LastPass had been using for them. And we know that unfortunately not everybody was set to the iteration count of 100,100, [01:29:30] which was where we last left it when we last visited this issue. For LastPass, many people were set to 1, some were set to 500, some were set to 5,000. It turns out that there is pretty convincing evidence now, which is what we're going to share, that the data that was stolen is being decrypted and is being used to hurt hopefully now-former LastPass [01:30:00] users. Yeah. Wow.

(01:30:02):
Alright, well we are going to, like I said, spill the beans on all of this, and a couple of spit takes, oh my goodness, that's coming up next. But first, this episode of Security Now is brought to you by Panoptica. In the rapidly evolving landscape of cloud security, Cisco Panoptica is at the forefront, revolutionizing the way you manage your microservices and workloads [01:30:30] with a unified and simplified approach to managing the security of cloud native applications over the entire lifecycle. Panoptica simplifies cloud native security by reducing tools, vendors, and complexity. By meticulously evaluating them for security threats and vulnerabilities, Panoptica ensures your applications remain secure and resilient. Panoptica detects security vulnerabilities on the go in development, testing, [01:31:00] and production environments, including any exploits in open source software. It also protects against known vulnerabilities in container images and configuration drift, all while providing runtime policy-based remediation as Cisco's comprehensive cloud application security solution.

(01:31:20):
Panoptica ensures seamless scalability across clusters and multi-cloud environments. It offers a unified view through a simplified dashboard experience, [01:31:30] reducing operational complexity and fostering collaboration among developers, SREs, and SecOps teams. So take charge of your cloud security and address security issues across your application stack faster and with precision. Embrace Panoptica as your trusted partner in securing APIs, serverless functions, containers, and Kubernetes environments, allowing you to transform the way you protect what's valuable. Learn more [01:32:00] about Panoptica at panoptica.app. We thank them for their support of Security Now. The LastPass saga, it continues, and I feel like every time we check in on it, it's worse than it was before. Really happy I'm not with LastPass anymore, but I'm super curious to hear all about this. Nothing I'm going to say is going to disabuse you of that concern, Jason. Any regular listener of [01:32:30] this podcast can probably guess that today's title of, as I said, "Last Mess" will have something to do with LastPass.

(01:32:39):
So there's growing, significant circumstantial evidence, which under the circumstances is probably the only sort of evidence anyone would ever be able to obtain, which strongly suggests that the encrypted [01:33:00] LastPass vault data, which LastPass had been storing for its many users and which attackers famously exfiltrated from its backup location, is now being, and has successfully been, decrypted. I don't mean all of it, I mean incrementally, but that's bad enough, right? And it's being used by those unknown cyber assailants. Brian Krebs reported the news of this last Tuesday [01:33:30] on his Krebs on Security site under the title "Experts Fear Crooks Are Cracking Keys Stolen in LastPass Breach." So here is some of what Brian reported. He wrote: In November 2022, the password manager service LastPass disclosed a breach in which hackers stole password vaults containing both encrypted and plain text data from more than 25 million users.

(01:34:00):
[01:34:00] Since then, a steady trickle of six-figure cryptocurrency heists targeting security-conscious people throughout the tech industry has led some security experts to conclude that crooks likely have succeeded at cracking open some of the stolen LastPass vaults. Taylor Monahan is lead product manager of [01:34:30] MetaMask, a popular software cryptocurrency wallet used to interact with the Ethereum blockchain. Since late December 2022, Monahan and other researchers have identified a highly reliable set of clues that they say connect recent thefts targeting more than 150 people. Collectively, these individuals have been robbed [01:35:00] of more than $35 million worth of their cryptocurrency. Monahan said virtually all of the victims she has assisted were longtime cryptocurrency investors and security-minded individuals. Importantly, none appeared to have suffered the sorts of attacks that typically preface a high-dollar crypto heist, such as the compromise of one's email [01:35:30] and/or mobile phone accounts. Monahan wrote, the victim profile remains the most striking thing. They truly all are reasonably secure. They are also deeply integrated into this ecosystem, including employees of reputable crypto orgs, VCs, venture capitalists, people who built DeFi protocols, deploy contracts, and run [01:36:00] full nodes. Monahan has been documenting the crypto thefts via Twitter, now X, since March of 2023, frequently expressing frustration in the search for a common cause among these victims. Then on August 28th, Monahan said she'd concluded that the common thread among nearly every victim was that they'd previously used LastPass [01:36:30] to store their seed phrase, the private key needed to unlock access to their cryptocurrency investments.

(01:36:42):
Brian Krebs included a screenshot in his coverage of Taylor's tweets on August 28th. She tweeted, the diversity of key types drained is remarkable: 12- and 24-word seeds generated [01:37:00] via all types of hardware and software wallets, Ethereum presale wallet JSONs, wallet.dat files, private keys generated via MEW, and others. And she also noted that the diversity of the chains and coins which had been drained was striking. So it wasn't as if there was a fault in any specific chain, crypto contract, or service [01:37:30] that had been exploited. There was no common denominator. Well, except for LastPass. Again, Brian writes, armed with your secret seed phrase, anyone can instantly access all of the cryptocurrency holdings tied to that cryptographic key and move the funds anywhere they like. This is why the best practice for many cybersecurity enthusiasts has long been [01:38:00] to store their seed phrases either in some type of encrypted container, such as a password manager, or else inside an offline special-purpose hardware encryption device, such as a Trezor or Ledger wallet.

(01:38:16):
Nick Bax, director of analytics at Unciphered, a cryptocurrency wallet recovery company, said the seed phrase is literally the money. If you have my [01:38:30] seed phrase, you can copy and paste that into your wallet, then you can see all my accounts and you can transfer my funds out. Bax said he closely reviewed the massive trove of cryptocurrency theft data that Taylor Monahan and others had collected and linked together. He said, it's one of the broadest and most complex cryptocurrency investigations I've ever seen. [01:39:00] I ran my own analysis on top of their data and reached the same conclusion that Taylor reported: the threat actor moved stolen funds from multiple victims to the same blockchain addresses, making it possible to strongly link those victims. Bax, Monahan, and others interviewed for this story say they've identified a unique signature that links the theft of more than $35 million [01:39:30] in crypto from more than 150 confirmed victims, with between two and five high-dollar heists happening each month since December 2022.

(01:39:45):
So in other words, it's not all at once. It's even following the pattern of brute-forcing something, and the common link among the somethings is LastPass. [01:40:00] Anyway, Brian says the researchers have published findings about the dramatic similarities in the ways that victim funds were stolen and laundered through specific cryptocurrency exchanges. They also learned the attackers frequently grouped together victims by sending their cryptocurrencies to the same destination crypto wallet. By identifying points of overlap in these destination addresses, the researchers were then able to track down and [01:40:30] interview new victims. For example, the researchers said their methodology identified a recent multimillion-dollar crypto heist victim as an employee at Chainalysis, a blockchain analysis firm that works closely with law enforcement agencies to help track down cyber criminals and money launderers. Okay, so just to make sure everyone is following this: based on what they were seeing, they followed victims [01:41:00] to the bad guys.

(01:41:04):
Then they looked at all of the transactions on the bad guys' wallets, which identified new victims. Then they looked up those wallets and were able to find essentially new victims, in this case, for example, an employee at Chainalysis. Then they went to Chainalysis and said, hey, has anybody [01:41:30] had a problem? Brian writes, Chainalysis confirmed that the employee had indeed suffered a high-dollar cryptocurrency heist late last month, but otherwise declined to comment further. Bax said the only obvious commonality between the victims who agreed to be interviewed was that they had all stored the seed phrases for their cryptocurrency wallets in LastPass. [01:42:00] Bax told Brian Krebs, on top of the overlapping indicators of compromise, there are more circumstantial behavioral patterns and tradecraft which are also consistent between different thefts and support this conclusion. I'm confident enough, he said, that this is a real problem that I've been urging my family and friends who use LastPass to change all of their passwords and migrate any crypto that [01:42:30] may have been exposed.

(01:42:31):
Despite knowing full well how tedious that is. Brian Krebs asked LastPass for comment, about which Brian wrote: LastPass declined to answer questions about the research highlighted in this story, citing an ongoing law enforcement investigation and pending litigation against the company in response to its 2022 data breach. [01:43:00] Yep, that's the standard dodge. Of course. LastPass said in a written statement to Brian, last year's incident remains the subject of an ongoing investigation by law enforcement and is also the subject of pending litigation. Perhaps some additional litigation now. Anyway, they continued: since last year's attack on LastPass, we have remained in contact with law enforcement and continue to do so. We've shared various technical information, [01:43:30] indicators of compromise, and threat actor tactics, techniques, and procedures with our law enforcement contacts, as well as our internal and external threat intelligence and forensic partners, in an effort to try and help identify the parties responsible.

(01:43:46):
In the meantime, we encourage any security researchers to share any useful information they believe they may have with our threat intelligence team by contacting security disclosure@lastpass.com. [01:44:00] So Brian's reporting then covers for his readers everything that this podcast already covered for our listeners back at the time: things like how crucial the PBKDF2 iteration count is for increasing the difficulty of cracking the user's password by brute force; how the early LastPass users may have originally had iteration counts of 1 or 500; and how, despite [01:44:30] LastPass moving the defaults upward over time as necessary to keep ahead of brute-force cracking capabilities, for reasons that no one has ever explained many of the much too low original defaults remained in place. Nicholas Weaver, a researcher at the International Computer Science Institute at UC Berkeley who also lectures at UC Davis, said about brute-force attacks [01:45:00] that, quote, you just crunch and crunch and crunch with GPUs, with a priority list of targets that you target.

(01:45:11):
He said that a password or passphrase with average complexity, such as "correct horse battery staple," is only secure against online attacks, and that its roughly 40 bits of entropy means that a graphics card [01:45:30] can blow through it in no time. An NVIDIA RTX 3090 can do roughly 4 million password guesses per second with an iteration count of 1,000, but that would go down to 8,000 per second with 500,000 iterations, which is why, he says, iteration counts matter so much. So a combination of a not-that-strong [01:46:00] password and an old vault with a low iteration count would make it theoretically crackable. It would take real work, but the work is worth it given the high value of the targets. And here's something else that Brian reported, which is very interesting. Brian interviewed one of the victims tracked down by Monahan. This person is a software engineer and a startup founder who was recently robbed of approximately $3.4 million [01:46:30] worth of different cryptocurrencies.

(01:46:34):
This engineer agreed to tell his story in exchange for anonymity because he's still trying to claw back his losses. Good luck. For his reporting, Brian refers to this person as Connor, which is not his real name. Connor told Brian he began using LastPass roughly a decade ago, and that he also stored the seed passphrase for his primary [01:47:00] cryptocurrency wallet inside LastPass. Connor chose to protect his LastPass vault with an eight-character master password that included numbers and symbols. Okay, so maybe around 50 bits of entropy. Connor said, I thought at the time that the bigger risk was losing a piece of paper with my seed phrase on it. I had it in a bank safety deposit vault before that, [01:47:30] but then I started thinking, hey, the bank might close or burn down and I could lose my seed phrase. Those seed phrases sat in his LastPass vault for years.

(01:47:44):
Then early on the morning of Sunday, August 17th, 2023, meaning just recently, right, a couple weeks ago, Connor was awoken by a service he'd set up to monitor his cryptocurrency addresses [01:48:00] for any unusual activity. Someone was draining funds from his accounts, fast. Like other victims interviewed for this story, Connor didn't suffer the usual indignities that typically presage a cryptocurrency robbery, such as account takeovers of his email inbox or mobile phone number. Connor said he doesn't know the number of iterations his master password was given originally, or what it was set [01:48:30] at when the LastPass user vault data was stolen last year. But he said he recently logged into his LastPass account and the system forced him to upgrade to the new 600,000-iterations setting, which we know is too little, too late. He said, because I set up my LastPass account so early, I'm pretty sure I had whatever weak settings or iterations [01:49:00] it originally had. Connor said he's kicking himself because he recently started the process of migrating his cryptocurrency to a new wallet protected by a new seed phrase, but he never finished that migration, and then he got hacked. He said, I had set up a brand new wallet with new keys and I had that wallet ready to go two months ago, but had been procrastinating moving [01:49:30] things to the new wallet. And I thank Connor for his honesty. No kidding.

(01:49:39):
Wow. Nicholas Weaver, the UC Berkeley researcher, said what we all know, which is that LastPass deserves blame for not having upgraded iteration counts for all users a long time ago, and called LastPass's latest forced updates [01:50:00] a stunning indictment of the negligence on the part of LastPass, meaning they could and should have done this years and years ago. He said that the fact that they never even notified all those with iteration counts of less than 100,000, who are really vulnerable to brute force even with eight-character random passwords or correct-horse-battery-staple [01:50:30] type passphrases, is outright negligence. He said, I would personally advocate that nobody ever use LastPass again, not because they were hacked, not because they had an architecture that makes such hacking a problem, but because of their consistent refusal to address how they screwed up and take proactive efforts to protect their customers, unquote.

(01:51:00):
[01:51:00] Bax and Monahan both acknowledged that their research alone can probably never conclusively tie dozens of high-dollar crypto heists over the past year to the LastPass breach. But Bax says at this point he doesn't see any other possible explanation. He said, some might say it's dangerous to assert a strong connection here, but I'd say it's dangerous to assert there isn't one. I was arguing [01:51:30] with my fiancée about this last night. He said, she's waiting for LastPass to tell her to change everything. Meanwhile, I'm telling her to do it now. So yeah, based upon our own experience with LastPass, anyone who's waiting for them to do anything like take responsibility, which might open them to additional litigation, is probably going to be waiting for quite a while. As for all [01:52:00] of our listeners whose LastPass vaults were exfiltrated back at the time of the heist, I'm fairly certain that most of us likely have little to fear. The bad guys obtained a massive treasure trove of encrypted information for more than 25 million individual users, but decrypting it, while yes technically feasible on a one-by-one basis, [01:52:30] depending upon passphrase length and iteration count, is still a time-consuming and massive undertaking.

(01:52:41):
So attacks on the vault repository are going to be highly targeted. They want money, plain and simple. They're not going to waste the cost of decrypting someone's vault unless they're fairly certain that a cash-equivalent pot of gold awaits them [01:53:00] if they're successful. Anyone whose iteration count was high, 100,100, which is what we last told all of our listeners to switch to when we talked about it years prior to all of this, or even greater than that, rather than the 1 that it started at, the 500 that it moved to, or the 5,000 that it moved to, all without [01:53:30] those changes ever being applied retroactively to existing accounts, is almost certainly safe.
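To see why the iteration count dominates, here's a rough back-of-the-envelope sketch in Python, using Weaver's single-GPU guess rates from earlier (roughly 4 million guesses per second at 1,000 iterations versus 8,000 per second at 500,000) against a roughly 40-bit passphrase. The passphrase, salt, and rates are purely illustrative, not a model of any particular cracking rig:

import hashlib

# What the defender's client does: derive the vault key from the master password.
# A higher iteration count makes every single attacker guess proportionally slower.
key = hashlib.pbkdf2_hmac("sha256", b"correct horse battery staple",
                          b"per-user salt (placeholder)", 100_100)

# Rough attacker math using Weaver's figures.
search_space = 2 ** 40  # ~40 bits of passphrase entropy
for iterations, guesses_per_sec in [(1_000, 4_000_000), (500_000, 8_000)]:
    days = search_space / guesses_per_sec / 86_400
    print(f"{iterations:>7} iterations: ~{days:,.0f} GPU-days to exhaust the space")

Exhausting that space goes from roughly three GPU-days at the old 1,000-iteration default to well over four GPU-years at 500,000 iterations, and that's before considering the far larger search space of a genuinely strong master password.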

(01:53:50):
On the other hand, somebody whose iteration count was 1, 500, or 5,000, that's a problem. Also, [01:54:00] a high-entropy passphrase, which all of our listeners hopefully had, would be protection. But now we know that the bad guys didn't just grab these 25 million vaults and say, oh darn, they're all encrypted. No, they're attacking them. They're going after pots of gold. So if by chance you did, or do, own cryptocurrency and your key, your passphrase, or your wallet key [01:54:30] was in LastPass, absolutely don't hesitate: create a new wallet and move the cryptocurrency to the new wallet. It really does look like these attacks are going to be targeted. And I know that all of our listeners have moved over to Bitwarden now anyway, but remember, it was the contents at the time of the theft that matter, not what we did afterwards.

(01:54:59):
And this guy Bax [01:55:00] said he's telling his friends, you really need to go change all the passwords that LastPass was storing for you at the time of the theft. A pain in the butt? Yes. And again, to me it seems really unlikely. There's 25 million of us, and if your iteration count was high and you're just some random guy, then I don't think your vault is ever going to get decrypted; there's no money there, right? They [01:55:30] want money, and the longer it goes not decrypted, the staler that data gets for them. So probably, I'm guessing, the lower the likelihood that they'll target something that they don't know has some sort of a major payload. Don't be like Connor. Don't set half of this stuff up, if you think there is really something to save on your end, and then not follow through.

(01:55:58):
That's just so heartbreaking. [01:56:00] The other thing is, we were just talking about the power of somebody getting into your email, because email is how all the password recovery occurs. So if you do nothing else, change your email password, because that's what they will go for. They'll go for your email password, get into your email, then start clicking on sites that they know you have access to and [01:56:30] start doing the whole account takeover routine. So yep, indeed. All right. Well, I am happy that I'm no longer with LastPass. It is disappointing that we're still not getting any sort of real, resolute confirmation, slash, just stating that they've totally and completely messed up in so many ways. I mean, this is just, they're owned by a hedge fund. We're never going to find out, never going to get it. They no [01:57:00] longer care. The guys we knew and cared about are long gone.

(01:57:05):
They cashed out. It's all about private equity and sucking as much money out of their corporate install base as they can. Well, when you put it that way. Steve, thank you for diving deep into that. That was fascinating, if not scary. If you're still with LastPass, I think you have yet another reason to get out. Steve Gibson does so many wonderful [01:57:30] things for us, not just of course hosting this podcast, but he has a site that you can go to with all sorts of other things that he does: grc.com. Go there. You will find all sorts of things. SpinRite, of course, which he's talked about many times. You've been doing it for so long; at this point it's the best mass storage recovery and maintenance tool. You do get audio and video of this show. You can find that at grc.com. Also, transcripts of this show can be found there as well.

(01:57:59):
So [01:58:00] that's grc.com. If you come to our site, we also have Security Now on our website. That's twit.tv/sn. Yes, we have the audio and the video details there, and you can play through the web browser if you like, but really you should just be subscribing. That's really, at the end of the day, what you need to do to help and support us here at TWiT. Steve and usually Leo, when he's not out of town and I'm filling in for him, record live every Tuesday starting [01:58:30] around 4:30 PM Eastern, 1:30 PM Pacific. That's 20:30 UTC. And what else am I missing? Oh, Club TWiT, of course: twit.tv/clubtwit. You get our members-only Discord, which is a lot of fun. It's been super active today in the live chat during this episode, as it always is for Security Now. Everybody has so much to say about these stories, because sometimes it's just wow what you end up learning about your security. But also you get our [01:59:00] content without any ads.

(01:59:01):
You get a bonus feed with tons of stuff that you don't get outside of the club. That's our TWiT+ podcast feed. Seven bucks a month at twit.tv/clubtwit. But I think that really does it. That about wraps it up. Steve, thank you so much for joining everybody today and sharing what you've got. Always a pleasure. And Jason, thanks for standing in for Leo. I heard him say on MacBreak Weekly that until his mom gets settled, he's [01:59:30] going to be going out every month. So we may all be seeing more of you on this podcast. Excellent. Well, I will be right here with you, learning along with everyone watching and listening to Steve. Thank you, Steve. Thank you, everybody. We'll see you next time on Security Now. Bye-bye.

(01:59:45):
Bye.

Speaker 3 (01:59:47):
Hey, we should talk Linux. It's the operating system that runs the internet, your game console, cell phones, and maybe even the machine on your desk. And you already knew all that. What you may not know is that TWiT now has a show dedicated to it, the Untitled Linux Show. [02:00:00] Whether you're a Linux pro, a burgeoning sysadmin, or just curious what the big deal is, you should join us on the Club TWiT Discord every Saturday afternoon for news analysis and tips to sharpen your Linux skills. And then make sure you subscribe to the Club TWiT exclusive Untitled Linux Show. Wait, you're not a Club TWiT member yet? Well, go to twit.tv/clubtwit and sign up. Hope to see you there.
