Security Now Episode 906 Transcript
Please be advised this transcript is AI-generated and may not be word for word. Time codes refer to the approximate times in the ad-supported version of the show.
Leo Laporte / Steve Gibson (00:00:00):
It's time for Security Now. Steve Gibson is here. Lots to talk about. An update on the LastPass breach. Steve thought there was a saving grace. Well, I'll let him tell you the story. Norton LifeLock says it saved you from something, but from what, really? <Laugh> And a look at Rust and how it's helping Google make Chrome safer. All of that and more, coming up next on Security Now. Podcasts you love, from people you trust. This is TWiT.
This is Security Now with Steve Gibson, Episode 906, recorded Tuesday, January 17th, 2023: The Rule of Two. Security Now is brought to you by PlexTrac, the premier cybersecurity reporting and collaboration platform. With PlexTrac, you'll streamline the full workflow from testing to reporting to remediation. Visit plextrac.com/twit to claim your free month of the PlexTrac platform today. And by Bitwarden. Get the password manager that offers a robust and cost-effective solution that can drastically increase your chances of staying safe online. Get started with a free trial of a teams or enterprise plan, or get started for free across all devices as an individual user at bitwarden.com/twit. And by Barracuda. Barracuda has identified 13 types of email threats and how cybercriminals use them every day: phishing, conversation hijacking, ransomware, plus 10 more tricks cybercriminals use to steal money from your company or personal information from your employees and customers.
Get your free ebook at barracuda.com/securitynow. It's time for Security Now. I know you've been waiting all week for this. Steve Gibson's back, and guess what, we're not gonna talk about LastPass this week. Hi, Steve. We, we actually are. Oh, no, <laugh>. Well, I'm sure there's some follow-up. We have other stories. Yes, we do. Yes, we actually do. This is Security Now number 906 for the middle of, of what is this? January. And this week, as you said, Leo, we are back to answering some questions that you didn't even know were burning. First <laugh> is, is the LastPass iteration count problem much less severe, Oh, than we thought? Okay. Because they are doing additional PBKDF2 rounds at their end. Well, that would be nice to know. Wouldn't that be nice? What sort of breach has Norton LifeLock protected its users from, and have they? Oh yeah, I saw that story.
Hmm. What did Chrome just do, which followed Microsoft and Firefox? And is Chromium beginning to rust? <Laugh> Will, will Microsoft ever actually protect us from exploitation by old known-vulnerable kernel drivers? Hmm. What does it mean that real words almost never appear in random character strings? And what is Google's Rule of Two, and why does our entire future depend upon it? The answers to those questions and more <laugh> will be revealed during this next gripping episode of Security Now. Now that's a tease, Steve. Golf clap. Very well done, <laugh>. I can't wait to find out all of the answers to those questions. Ah, there are some goodies, and a good Picture of the Week. Ah, yes. But first, before we get underway, or I guess we're underway, but before we get a full head of steam going, let's talk about our sponsor, the folks at PlexTrac, your security team's secret weapon.
PlexTrac is the premier cybersecurity reporting and collaboration platform, and it really transforms the way cybersecurity gets done. You know, I expect that most enterprises have a, a cybersecurity team. Maybe you're even pretty sophisticated. You got a red team that's going out there and pen testing and looking for flaws. You got a blue team that does remediation. But red or blue, or purple in between, you need PlexTrac to help you get the job done. Are you ready to gain control of the tools that you use and the data that you get from those tools? Are you ready to build better, more actionable reports with less effort on your part? Are you ready to focus on the right remediation and make sure that that remediation happens promptly? Are you working to mature your... we're doing questions for the ads! Steve, you started something here.
Are you working to mature your security posture but struggling to optimize efficiency and facilitate collaboration within your team? This is the solution for you. PlexTrac, P L E X T R A C. It's a very powerful but very simple, easy-to-use cybersecurity platform that does a number of things, but chiefly centralizes your security assessments, your pen test reports, your audit findings, your vulnerability tracking. It transforms the risk management lifecycle, allowing security teams to generate better reports. And you know what that means? They can do it faster. They can aggregate and visualize analytics. They can collaborate on remediation in real time. Better reports means better fixes. It means better reporting to the C-suite and the board. It means the job gets done. But here's the beauty part. It doesn't mean more work for you. The PlexTrac platform addresses pain points across the spectrum of security team workflows and roles.
It's second to none for managing offensive testing and reporting security findings from those tests. You can actually stick code samples, drag and drop screenshots, even videos into any finding. You can import findings from all the major scanning tools, so you don't even have to drag and drop. Just say, you know, get that Nessus report, put that in here. Or, or Burp. You can export to custom templates with a click of a button; design your own or use the pre-built ones. Analytics and service-level agreement functions help you visualize your security posture so you can quickly assess and prioritize, and ensure that you're tracking your remediation efforts to show progress over time, so that you can report to the higher-ups, look, the job's getting done, and to yourself, to verify that it is. Built-in compatibility with all the leading industry tools and frameworks, including vulnerability scanners, pen testing as a service platforms, bug bounty tools, adversary emulation plans.
All of that means you can improve the effectiveness and efficiency of your current workflow, whether you're the red team, the blue team, or the purple team. Robust integrations with Jira and ServiceNow make sure you're always closing the loop on those high-priority findings that really need to be responded to. Enterprise security teams use PlexTrac to streamline their pen tests, security assessments, incident response reports, and much more. PlexTrac clients report up to a, quote, 60% reduction in time spent reporting, a 30% increase in efficiency, and a 5x ROI in year one. And you can see all that on the PlexTrac site and verify it. All in all, PlexTrac provides a single source of truth for all stakeholders, transforming your cybersecurity management lifecycle. So book a demo today and see how much time PlexTrac can save your team.
I think you're gonna like it. Better yet, just try it free for a month. See what PlexTrac can do to improve the efficiency and the effectiveness of your security team. All you have to do is go to plextrac.com/twit, claim that free month, P L E X T R A C dot com slash T W I T. We thank PlexTrac so much for supporting Steve and the work we're doing here, but we also thank you for supporting us by going to that special address so they know you saw it here: plextrac.com/twit. We thank 'em so much for their support of Security Now. Picture time, Steve. It is. So here we have yet another puzzle, <laugh>. It's not so puzzling <laugh>. We have a, a sidewalk path running from the right foreground to the left background of this photo. To one side is sort of a knoll covered with dried grass, wheaty, weedy sort of stuff.
Over on the left is a not very well maintained, you know, typical green grass, which is now dried out. Okay, so that's sort of the setting. Now, right in the middle, I mean, across this sidewalk is a, a permanently installed sidewalk-closed fence. This is not a gate. If you look at it, there's a steel pole on either side of the concrete sidewalk, and then there's a, a well-built wire gate, which is strapped between these poles. Oh, it's good. They didn't, they didn't stint on this. They put in the best. No, this is meant to stand the test of time <laugh>. And, and there's a sign, in case you were curious about, like, okay, why is my path blocked? Across the front of this wire it says Sidewalk Closed. Look, sorry, you're S.O.L., as they say. So, so you're confronted with this gate and it's like, oh, well, I guess I can't proceed. I, I had my heart set on going into the distance here and around the corner, but the sidewalk is closed. Except that, like, it's not, because you, you could choose the direction you wanted to go around this, this small hurdle in your way. And there's, like, sidewalk behind it; the sidewalk looks just fine. Well, that was before it was closed. <Laugh>
I, I mean, if they wanted to close it, why not remove it? <laugh> I mean, like, you know, get out the jackhammer and break this concrete up and haul it off. But no, it's there, a puzzle, a perfectly functioning sidewalk which no one can use unless they walk around this barricade. And look, oh, it works. The sidewalk still works, Leo. It's like, once, I remember, this is a long time ago, I let my car insurance lapse. Yes. And I thought, oh, I can't drive. But I got in the car and turned the key, and it still works. It still worked. It was amazing. <Laugh> It's a miracle. Anyway, anyway, yeah. So, wow. Okay. I originally was gonna title this podcast A Brief Glimmer of Hope, over a pursuit that took me, I think it was like six or seven hours, until I realized how it turned out and how brief the glimmer was.
But it's, it's an interesting issue, and it could come up. It already had come up. So, and I figured, you know, since all LastPass users could use a bit of good news about now, I, I became excited when it appeared that things were not as bad as we thought last week. As we know, many users discovered that LastPass had never increased their client's local PBKDF2 iteration count from its earlier setting of 5,000, or maybe 500, or in many cases, and they keep mounting, I've seen a lot more reports in the last week, as few as one iteration, you know, which of course results in a trivial-to-bypass encryption of LastPass vault backups. Well, one of our listeners with a sharp memory, by the name of Parker Stacy, wrote with an important quote from a page on LastPass's website. The page Parker quoted is titled About Password Iterations.
And most of this we already know, but there's an important line on this page from their website that was news to me. Okay, so in order for that important line to be understood in context, which is somewhat unclear, I'm gonna share this short piece in its entirety. So this About Password Iterations page on the LastPass site says: To increase the security of your master password, LastPass utilizes a stronger-than-typical version of Password-Based Key Derivation Function, PBKDF2. At its most basic, PBKDF2 is a password-strengthening algorithm that makes it difficult for a computer to check if any one password is the correct master password during a compromising attack. Right? They say: LastPass utilizes the PBKDF2 function implemented with SHA-256 to turn your master password into your encryption key. All this we know. LastPass performs a customizable number of rounds of the function to create the encryption key, before, they said, a single additional round of PBKDF2 is done to create your login hash.
Then they said: The entire process is conducted client-side. The resulting login hash is what is communicated with LastPass. LastPass uses the hash to verify that you are entering the correct master password when logging into your account. Okay, this next line is the one where I go, what? They said: LastPass also performs a large number of rounds of PBKDF2 server-side. This implementation of PBKDF2 client-side and server-side ensures that the two pieces of your data, the part that's stored locally and the part that's stored online on LastPass servers, are thoroughly protected. Then they said: By default, the number of password iterations that LastPass uses is 100,100 rounds. LastPass allows you to customize the number of rounds performed during the client-side encryption process in your account settings. Okay? So, what's this? LastPass also performs a large number of rounds of PBKDF2 server-side.
So, okay, I thought, that's not anything we've talked about or looked at. It also wasn't anything that I recalled Joe Siegrist ever mentioning to me. And since it perfectly responded to the worries we talked about last week, I mean, that's what gave last week's episode its name. And since LastPass has now been revealed to be, if nothing else, a bit klutzy, I was actually just a bit suspicious about exactly when this convenient page first appeared on LastPass's website. So I dipped into the Internet Archive's Wayback Machine to see when it had first indexed this page. And what I found did little to assuage my suspicion. In the show notes I have a picture of the Wayback Machine's calendar showing that this page didn't exist before 2022. And in fact it was first indexed by the Wayback Machine on July 2nd, 2022, about six weeks before we learned about this particular problem.
And so, okay, I was a little suspicious about that. It's like, well, isn't it interesting that some good news for this problem was first indexed then. So, you know, it did precede public disclosure of the trouble. It's not as if, however, that page had been around for years. I shared my curiosity over this with Parker, you know, the guy who first brought this to my attention, and I had the intention to do some more digging before today's podcast. But Parker's curiosity was also piqued, and he tracked down a LastPass PDF document titled LastPass Technical White Paper, which the Internet Archive had indexed and stored on December 18th, 2019. So way longer ago, more than three years ago. And looking at the same-titled paper today, because there's also something by the same name, LastPass Technical White Paper; if you Google that, it pops right up.
Neither of the PDFs that were found contains an edition date, but both papers contain exactly the same paragraph, which is more clear than the watered-down help page that I just read. Both, both papers, both PDFs, the old one from 2019 and today's, say: LastPass also performs 100,100 rounds of PBKDF2 server-side. This implementation of PBKDF2 client-side and server-side ensures that the two pieces of the user's data, the part that's stored offline locally and the part that's stored online on LastPass servers, are thoroughly protected. Okay? So, you know, I didn't want to give anyone the wrong impression from last week, so I had to pursue this. I was curious about anything additional that that particular PDF's metadata might reveal. I was, you know, again, still like, okay. I learned that it was created by Microsoft Word 365 by Erin Styles on June 6th, 2022.
You know, and as a quick sanity check, I noted that Erin's Twitter photo shows her proudly sporting a bright red LastPass sweatshirt. Okay? So what would the flow of this system look like? What does it mean for LastPass to be performing apparently another 100,100 iterations of PBKDF2 server-side? So, as we know, to the LastPass client the user provides their email address, which serves as the salt, and their LastPass master password, which is the input, along with their account's default iteration count, to an SHA-256-based PBKDF2 function that produces a 256-bit encryption/decryption key, which is never sent to LastPass, right? I mean, that's the whole point, is LastPass never has that key. It is only ever used by the local client to encrypt and decrypt the master vault blob of data, which LastPass stores for them and shares among, you know, the user's various LastPass clients in order to keep them synchronized.
Okay? So how then does LastPass verify that users have logged on using their proper credentials, if that key never goes to LastPass? Well, after running those local PBKDF2 iterations, one additional round of PBKDF2 is used to produce a 256-bit user logon verification token, which is what LastPass calls the login hash. That token is sent to LastPass to store and use to subsequently verify the user's proper logon credentials. LastPass uses the user's ability to provide that login hash token to log them into their online vault before sending their encrypted vault blob to them. You know, that's how they avoid sending the vault, which does contain a bunch of, you know, private unencrypted information, to other people. You've got to be able to provide this login hash. So the problem that occurred to me was that only a single round of PBKDF2 was separating the user's super secret vault decryption key from this login hash, which is what the client provides as its authentication.
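For those who want to see the shape of this, here's a minimal Python sketch of the client-side flow just described. The iteration count shown is the modern default, and the exact input/salt ordering of that final login-hash round is my assumption; the white paper only says one additional round of PBKDF2 is applied.

```python
import hashlib

# Sketch of the LastPass client-side derivation as described above.
email = "user@example.com"                   # the email address serves as the salt
master_password = "correct horse battery staple"
iterations = 100_100                         # modern default; old accounts had 5,000, 500, or even 1

# 256-bit vault encryption/decryption key -- never leaves the client
vault_key = hashlib.pbkdf2_hmac(
    "sha256", master_password.encode(), email.encode(), iterations)

# One additional PBKDF2 round turns the key into the "login hash"
# that IS sent to LastPass as the authentication token
# (salt/input ordering here is an assumption on my part)
login_hash = hashlib.pbkdf2_hmac(
    "sha256", vault_key, master_password.encode(), 1)

print(len(vault_key), len(login_hash))       # 32 32 -- both 256-bit values
```

Note how thin the wall is: the login hash sits only one cheap PBKDF2 round away from the vault key, which is exactly the concern Steve raises next.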
That meant that in theory it could be easily brute forced by reversing that hash function. But this apparently also occurred to LastPass. Although there's language about this in several places in this white paper, the clearest description appears near the bottom of page 19 under the title Login Hash Storage, where they write: LastPass receives the login hash from the user, and they said, parens, following the default 100,100 iterations on the user's master password using PBKDF2-SHA256, meaning on the client side. The login hash is additionally salted with a random 256-bit salt, and an additional 100,000 rounds of PBKDF2-SHA256 are performed. That output is then hashed using scrypt to increase the memory requirements of brute force attacks. The resulting hash stored by LastPass is the output of 200,101 rounds of SHA-256, you know, PBKDF2 plus scrypt.
Okay? So what they're saying is, they recognized the reversibility of a login hash only being one round away from the user's secret key creation. So when they receive the login hash from the user, that's what they run an additional salted 100,000 rounds of PBKDF2-SHA256 on, because they're gonna store that permanently, and they don't want there to be any chance of it going backwards to the secret key. So, you know, I needed to pursue this all the way out to the end to understand whether, as I was hoping, LastPass might have used all of this additional, you know, and again, as I said, they talk about it all over this white paper, server-side PBKDF2-ing, you know, even going so far as to deploy scrypt, to create a super-strength encryption key for use when saving their users' vault data.
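As a sketch, the server-side hardening described on page 19 would look something like this in Python. The scrypt cost parameters are placeholders of mine, since the white paper doesn't specify them.

```python
import hashlib, os

# Sketch of the server-side hardening of the stored login hash:
# salt the received login hash, run 100,000 more rounds of PBKDF2-SHA256,
# then scrypt the result before storing it.
login_hash = bytes(32)              # what the client sent (dummy value here)
server_salt = os.urandom(32)        # random 256-bit per-user salt

strengthened = hashlib.pbkdf2_hmac(
    "sha256", login_hash, server_salt, 100_000)

# scrypt cost parameters (n, r, p) are illustrative placeholders
stored = hashlib.scrypt(
    strengthened, salt=server_salt, n=2**14, r=8, p=1, dklen=32)

# 'stored' is what LastPass keeps: 200,101 total rounds of stretching
# (100,100 client-side + 1 login-hash round + 100,000 server-side),
# plus scrypt's memory-hardness on top.
```

The key point, as the discussion goes on to establish, is that all of this only protects the stored login token, not the stolen vault blobs themselves.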
But there was no sign of that anywhere in the white paper. So, so from my understanding, they're protecting it from a bad guy trying to log in as you, but if the bad guy's got the vault, it's independent of that. Is that right? Correct. That's exactly right. And you mentioned scrypt, but I wanna make it clear, cuz the way you said it: there's no evidence they used scrypt on the vault; they used PBKDF2, right? They didn't mention, actually, they're saying, they're saying they use both. Oh, good. But not on the right part. <Laugh> Well, correct. Exactly. <laugh> Exactly. So, so what they get from the client, they don't get the secret key ever, but they do one more round of PBKDF2 on the client, and the client sends them that, which is what they call the login hash.
So that's their token that the client uses to verify the login. And since they're gonna be storing that for the long term, they realized, oh, this is not safe for us to store, cuz it's only one PBKDF2 iteration, Yeah, away from the key, right? So let's hash the crap out of this, and that's what we'll store. So anyway, my hopes were dashed. For a while I was thinking that they were doing something really cool, that they were gonna take what they got from the user, hash the crap out of it, and then re-encrypt the vault under that next-generation key, in which case people with an iteration count of one on their client would still be okay. Would've been protected. Yeah. Yes. But they're not protecting that vault, they're just protecting the login.
Correct. Yeah. And in fact, there is a place where they specifically say, under a section of their page, User Data Storage: Sensitive vault data is encrypted client-side, then received and stored by LastPass as encrypted data. Mm-Hmm. <affirmative> So I, I was, I was thinking for a minute that I might have steered everybody wrong a week ago, that one iteration meant one iteration. Well, unfortunately, it does mean one iteration <laugh>, and things are as grim as we thought. This is part of the problem with all of this, is the marketing. Yes. And there's a lot of hand waving. It depends; different password managers have different amounts of hand waving. But the marketing department, which probably doesn't understand what it's saying, usually takes what the technical people have given them, does some magic hand waving.
And what we get is often not very useful. And Leo, the digging you had to do for this. When we talk about, we're gonna be talking about Norton LifeLock's problems in a minute. Oh, this one? Yeah. Oh, and this, this is exactly to the point you're just making. They said <laugh>, at one point they said, we have secured 925,000 inactive and active accounts that may have been targeted by credential stuffing attacks. You secured them? What does that mean? No, they send you an email. <Laugh> What? Anyway, we'll get to that <laugh>. Yeah. Okay. There's a lot of that. And in fact, you do it so much better; I'm trying to do the same thing. I'm not as knowledgeable or as adept as you, so I'm trying to filter through some of this marketing material too, from other password managers, to, to see what's, what's going on.
I do notice that when I re-keyed my Bitwarden vault, I gave it a new password. They said you can re-key it, you can regenerate a secret. But that's gonna mean we have to do a bunch of stuff. It's gonna take a little while, and make sure you log out everywhere, because once we do that, we could corrupt your, your data. You could corrupt your data if you haven't logged out, and we'll log you out, but just make sure. We did successfully. What is that? <laugh> Okay. That, that's a really good point. And it's, it is a subtlety that I was deliberately skipping over, just cuz I didn't wanna really boggle people's brains. Oh, boggle us, Steve, boggle us <laugh>. There is, there is the key that you use for decryption. Yeah. And there's the key that you use for decrypting the key.
Okay. So all of this that we've been talking about, in LastPass, for example, where you're, you're doing this PBKDF2 to get a key, that key that you get isn't the actual decryption key for the vault. Ah, it's the key that decrypts the key. So there's a, there's a level of indirection there. And the reason that's done is that you're, you're able to change your encryption key without having to re-key the vault. Mm-Hmm. <affirmative> And, and thanks to that level of indirection, it's, it's very much like if, if you had a password protecting a hard drive and you wanted to change the password. Well, if you, if you actually change the key, you have to decrypt the whole hard drive and then re-encrypt the whole hard drive. Right? So instead, nobody does that.
Right? You, you assume that the actual key was achieved through a very high-entropy, pseudorandom, hopefully actually random, you know, bit generator. And then that's the thing that's encrypted by the key that you use, right? And then decrypted when you want to use it. And so that, that's also what you were seeing with Bitwarden, where they said, well, we could actually re-key your vault. I did that, re-keyed my vault. Why? I don't know. I mean, <laugh>, I just thought I would, just to see what happened. <Laugh> Oh, good. You know, no harm done. Then the other thing, I think, something a lot of people mentioned is that last password has an additional key that they generate that is stored, as I understand it, in the hardware. Or, I think, I think you mean 1Password. As I say last password, I meant 1Password.
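The level of indirection being described here, the key that decrypts the key, can be sketched like this. The XOR "wrap" below is a deliberately toy stand-in for a real key-wrapping cipher such as AES-KW or AES-GCM; the point it illustrates is only that a password change re-wraps 32 bytes and never touches the vault itself.

```python
import hashlib, os

# "Key that decrypts the key": the vault is encrypted once under a random
# high-entropy vault key; the password-derived key (KEK) only wraps that key.

def kek_from_password(password, salt):
    # password-derived key-encryption key (iteration count illustrative)
    return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_100)

salt = os.urandom(16)
vault_key = os.urandom(32)      # the real encryption key: random, never changes

# TOY wrap for illustration only -- real code would use AES key wrap / AES-GCM
def wrap(key, kek):
    return bytes(a ^ b for a, b in zip(key, kek))

wrapped = wrap(vault_key, kek_from_password("old password", salt))

# Password change: unwrap with the old KEK, re-wrap with the new one.
unwrapped = wrap(wrapped, kek_from_password("old password", salt))
assert unwrapped == vault_key   # vault data itself is never re-encrypted
rewrapped = wrap(unwrapped, kek_from_password("new password", salt))
```

This is why changing your master password is fast, while a true re-key of the vault (as Leo did with Bitwarden) requires re-encrypting everything and logging out every client.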
Yeah. Yes. And then, then their Agile Keychain. And it's actually a file. You get it as a file that you store somewhere, and then you use it as a secondary encryption. And now, correct me if I'm wrong, cuz I was talking about this on our Ask the Tech Guys show, the new show we do to replace the radio show, that Micah and I do. And I said, that's great if you don't have a good master password, because then there'll be a, a backstop for a bad master password. But if you have a very, very good master password, I mean, something that's almost impossible to break, it isn't more impossible to break with a second master password, right? I mean, one, one would be good enough. It, it does add some complexity and confusion, because people have to save this key as a file, and they have to put it, you know, so there's, so it's great.
I, and I said, if you're gonna have naive users use a password manager, and you think they might use monkey123 as their master password, this would be good. Yes. Because what it means is that the actual password is the composite of some true high-entropy file and monkey123. Yes. So that if, if the vault ever escaped, then, you know, there's just no point in trying to do any sort of, you know, it's, it's another wall. But yes, my position is, well, that's fine, and if you, if you're worried, you could do that. But if you have a good master password, which you should, a long random master password, and you're gonna talk about what random means in a bit, that's probably belt and suspenders. It's, it's not needed. I agree. So, you know, again, convenience versus security. It is technically more secure, but a lot less convenient.
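A rough sketch of how such a two-secret scheme can work, in the general spirit of 1Password's secret key but deliberately simplified and not their actual protocol:

```python
import hashlib, secrets

# Hypothetical two-secret derivation: mix a high-entropy device-stored secret
# with the password-derived key, so a weak master password alone cannot be
# brute-forced from a stolen vault. (Simplified illustration, not any
# vendor's real scheme.)

secret_key = secrets.token_bytes(32)   # generated once; saved as a file on the device
salt = b"user@example.com"

password_key = hashlib.pbkdf2_hmac("sha256", b"monkey123", salt, 100_000)
effective_key = bytes(a ^ b for a, b in zip(password_key, secret_key))

# Even with "monkey123" as the password, an attacker who stole only the
# vault faces the full 256 bits of entropy in secret_key. With a strong
# random master password, the extra secret adds little beyond the
# inconvenience of managing the key file -- which is Leo's point.
```

The trade-off is exactly the one discussed above: the device secret is a backstop for weak passwords, at the cost of a file the user must never lose.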
And the point is, at what point does more security not actually buy you anything? Right. You know, where all you're doing is making things way less convenient, but, you know, even without that, you already had enough security. Your point is, if you really use a good random master password, you're done. There just isn't any need for anything more. Okay. Especially at the cost of, of significant inconvenience. Yeah. So that secret key is nice, but it costs, by the way, an additional amount, which is, I think, the real reason they offer it. You have to pay a subscription fee to get it. Oh my God. <Laugh> Okay. <Laugh> <laugh> But if somebody is gonna use monkey123, then they should be using 1Password to do this. Or train them on how to make a good password.
Yeah. Which Steve will do. I'm sorry. Okay. So, on with the show. The other, the other, no, this was all good. The other question is ECB or CBC. That was the other question to be answered this week, thanks to the past week's worth of listener feedback from all the people who used Rob Woodruff's PowerShell script to peer into their vaults. You know, what we wanted to find out was about the prevalence of the less desirable ECB encryption mode. Well, one of our listeners provided a unique insight which simultaneously answered a question I had. Mark Jones, he, he tweeted, he said: You requested updates on LastPass. He says: Regret as a loyal listener that I was stuck at 35,000 iterations, not 100,100. Still, 35,000's not bad. He said: I found 28 ECB items. Yikes. Mostly secure notes. Oh. He said: I now certainly regret putting so much in notes.
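As a quick refresher on why ECB is the mode you don't want, here's a toy Python demonstration. The "block cipher" is just a keyed hash standing in for AES (encryption direction only, not invertible), which is enough to show ECB's telltale leak: identical plaintext blocks produce identical ciphertext blocks, while CBC's chaining hides the repetition.

```python
import hashlib

KEY = b"k" * 16
BLOCK = 16

def E(block):
    # toy deterministic "block cipher": keyed hash truncated to one block
    # (illustration only -- real ECB/CBC would use AES)
    return hashlib.sha256(KEY + block).digest()[:BLOCK]

def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

plaintext = b"SAME BLOCK 16B!!" * 3     # three identical 16-byte blocks
blocks = [plaintext[i:i + BLOCK] for i in range(0, len(plaintext), BLOCK)]

# ECB: each block encrypted independently -- repetition leaks through
ecb = [E(b) for b in blocks]
print(len(set(ecb)))                    # 1 -- all ciphertext blocks identical

# CBC: each block is XORed with the previous ciphertext block first
iv = bytes(BLOCK)
cbc, prev = [], iv
for b in blocks:
    prev = E(xor(b, prev))
    cbc.append(prev)
print(len(set(cbc)))                    # 3 -- the repetition is hidden
```

That leaked repetition is exactly why those 28 old ECB-encrypted secure notes are worth regretting, and why migrating entries to CBC matters.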
A couple of comments, he said: Incrementing iterations, and he actually meant changing iterations, but okay, changed all to CBC. Ah. He said: It appears you're correct in asserting any changes get rid of ECB. That's good to know. Well, I didn't assert that, so he's giving me credit for something I didn't say, but thanks. But this explains the mystery of why my own vault had no instances of ECB. I know that my vault contained a number of very old account credentials from the beginning of my use of LastPass, which would never have been updated, and which I therefore expected to find still encrypted in the original ECB mode. But what Mark observed was that simply changing his vault's iteration count, as many of us did five years ago when we were covering this issue on the podcast, gave our clients the opportunity to re-encrypt, Mm,
all of those older ECB-mode entries in CBC mode. Good news. Thanks to Mark, another mystery solved. Thank you. Yeah, cuz I've heard a lot of our listeners, I know you've heard many more, but I've heard a lot of 'em say, well, I didn't have any ECB stuff. But that would make sense. That's why. Yeah. Because a lot of them were, like you and me, longtime users. Yep. Okay. So while we're on the topic of password manager troubles, we should touch on Norton LifeLock and more doublespeak being produced by corporations that have grown too big to need to care. Before we get to this, we should note that what was formerly Symantec Corporation and Norton LifeLock are now renamed Gen Digital, and they just refer to themselves as Gen. So, you know, now they're both at gendigital.com, you know, G e n d i g i t a l dot com.
And since Symantec had been acquiring companies over the past few years, this new Gen Digital is now operating the brands Norton, Avast, LifeLock, Avira, AVG, ReputationDefender, and unfortunately CCleaner, which, you know, was a beloved tool for a long time. Okay, so what happened? This time around, 6,450 Norton LifeLock customers were recently notified that their LifeLock accounts had been compromised. And by compromised we mean that unknown malicious parties have somehow arranged to log into those 6,450 accounts, giving them full access to those users' password manager's stored data. Whoops. <laugh> That's not good. In a notice to customers, Gen Digital, you know, as I said, the recently renamed parent of this collection of companies, said that the likely culprit was a credential stuffing attack, as opposed to a compromise of its systems. Now, this seems very odd, since Gen Digital explained that by this they meant that previously exposed or breached credentials were used to break into accounts on different sites and services that share the same passwords.
But that wouldn't explain why there was what appears to be a quite successful targeted attack against 6,450 of their users. You know, if these were username and password credentials leaked and/or somehow obtained from other unrelated site breaches elsewhere, how is it that they just happened to all be useful against 6,450 of LifeLock's account holders? You know, that, that doesn't smell right. What I suspect actually happened is that LifeLock's web portal is, or was, lacking in brute force password guessing protection. You know, and in this day and age, it is no longer okay for a website to allow a fleet of bots to pound away on its login page at high speed, hoping to get lucky. Bleeping Computer also covered this news, and they posted a statement from a Gen Digital spokesperson, which is the one I quoted, saying, quote: We have secured 925,000 inactive and active accounts that may have been targeted by credential stuffing attacks.
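The kind of brute-force protection Steve says LifeLock's portal apparently lacked doesn't have to be fancy. Here's a minimal sketch of a per-source failed-login throttle; the window and threshold are illustrative, and a real deployment would also layer on CAPTCHAs, account lockout, and IP-reputation feeds.

```python
import time
from collections import defaultdict, deque

WINDOW = 300        # seconds to remember failures
MAX_FAILURES = 5    # failures allowed per window, per source

failures = defaultdict(deque)   # source IP -> timestamps of recent failures

def allow_attempt(ip, now=None):
    """Return True if this source may try to log in right now."""
    now = time.time() if now is None else now
    q = failures[ip]
    while q and now - q[0] > WINDOW:    # expire failures outside the window
        q.popleft()
    return len(q) < MAX_FAILURES

def record_failure(ip, now=None):
    failures[ip].append(time.time() if now is None else now)

# A bot hammering the login page gets cut off after 5 failures in 5 minutes:
for t in range(7):
    if allow_attempt("203.0.113.9", now=float(t)):
        record_failure("203.0.113.9", now=float(t))

print(allow_attempt("203.0.113.9", now=8.0))   # False: throttled
print(allow_attempt("198.51.100.7", now=8.0))  # True: other sources unaffected
```

Even something this simple turns a fleet of bots guessing "at high speed without limit" into a trickle, which is why its apparent absence, and eleven days of undetected failed logins, is the real story here.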
Okay, so wait, first of all, 925,000 accounts, and quote, we have secured them. You know, does anyone know what that means? What does that mean? We have secured 925,000 inactive and active accounts that may have been targeted by credential stuffing attacks. Maybe they reset the password. Could it be? Oh my God, they could not have done a password reset on a million. A million? No, I mean, it would be the end of life as we know it. <Laugh>, <laugh>. So they probably just sent an email out. Oh my god. Yeah. Maybe that's, oh, we let everybody know that they might be hacked, so we're gonna call that securing the accounts. Oh Lord. Anyway, you know, I guess whatever they did, it's much better than if they had not secured them. Okay, so since we're not actually offered any information, Leo, to your point, we don't get any these days, you know, we just get corporate PR speak, it's necessary to read between the lines.
So my guess would be that Gen Digital currently has 925,000 Norton LifeLock customers, or accounts, at least some inactive, they wandered off. And due to completely absent web portal security and a lack of monitoring for some length of time, once bad guys realized that there was no security to stop them, a fleet of bots was programmed to assault Norton LifeLock's login page, guessing account credentials at high speed without limit. And as a result of this lack of security, that fleet of bots was able to successfully log in and compromise the accounts of 6,450 Norton LifeLock users. Gen Digital did admit that it had discovered that these intruders had compromised LifeLock accounts beginning on December 1st, 11 days before its systems, that is, Gen Digital's systems, finally detected a quote, large volume, unquote, of failed logins to customer accounts. Well, that large volume would've been going on for weeks, right?
But somehow they didn't see that. The red flag finally went up on December 12th when LifeLock became aware of it and presumably brought this attack to a halt. So I suppose that's what they meant when they said that they had secured those 925,000 active and inactive accounts. They basically halted an ongoing login attack after 6,450 successful logins and full account compromise of those customers. When I read this, I thought, oh, it's a credential stuffing attack. I didn't realize that they were able to brute force attack it. I just thought they were copying passwords from other breaches and trying 'em on LifeLock. When they say that on the 12th they saw a large volume of failed login attempts, well, that could also be credential stuffing, cuz you don't know who's reused passwords. So you might have a database of, you know, 10 million. Oh, I completely agree.
By brute force, I don't mean, you know, start at 0-0-0-0-0-0-0. Yeah. I just mean let's try, let's see if this password that we have works. Yeah, yeah, yeah. Right. Yeah. Let's ask Troy Hunt for his master list <laugh>. Yes, that's right. We'll try them all. Yeah. So I have no reason to doubt them, but this is why you don't reuse passwords, because of these credential stuffing attacks. It is, it's absolutely why you don't. But also, it took them at least two weeks before they saw that something happened. So, okay, so here's my theory. You know, they're not saying how much earlier the attack was underway. So why 11 days from first successful compromise to first detection of an attack that had been ongoing for some time? If I had to guess, I would suggest that the bot fleet's attack was probably carefully throttled so as not to trip any alarms, right?
And that after some successful undetected logins, the bot fleet's operators may have started creeping its attack rate upward. Slowly they got greedy <laugh>. Yeah, exactly. Yeah. To see how much faster they could go. And, you know, remember that before they were shut down, they'd successfully scored against 6,450 accounts. I mean, that's a lot of accounts. So, you know, things had been going well for the attacking fleet for, you know, like 11 days at least from the first known compromise. And truth be told, we don't even know that they actually ever were detected. We don't know that's what tripped the alarm and raised the red flag. Given that the bots have been stomping around within 6,450 of LifeLock's user accounts, I would be surprised if some user out of 6,450 didn't notice that something was amiss and contact Norton to report suspicious account activity.
So it may have just been, you know, the fact that they were tipped off by a victim, and they thought, oh, maybe we ought to go over and look at that web server, and it's like, oh my God. And then they, you know, did whatever they said they did to secure all the accounts. Anyway, this does highlight another good reason for choosing an iteration count that takes the web browser a few seconds to obtain a go/no-go login decision, because it also serves as very good brute force protection against login attempts to your provider's portal, right? In order to log in to lastpass.com, to use that example, some script has to be run on the browser in order to churn away at a guessed password. I mean, you know, that length of time has to be spent in order to create a token to hand to LastPass to say, please log me in. So again, high iteration counts are just, you know, all-around protection for many types of problems.
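To make the mechanism Steve describes concrete, here's a minimal Python sketch of client-side key stretching with PBKDF2. The salt scheme and the specific iteration counts are illustrative assumptions for this sketch, not LastPass's actual parameters:

```python
import hashlib

def derive_login_token(master_password: str, salt: bytes, iterations: int) -> bytes:
    # Client-side PBKDF2: the browser must burn this much CPU before it
    # can even present a login token to the server, so every remote
    # password guess costs the attacker the same amount of work.
    return hashlib.pbkdf2_hmac("sha256", master_password.encode(), salt, iterations)

# Illustrative only: a real vault would use a per-user random salt.
salt = b"user@example.com"
token_low = derive_login_token("correct horse battery", salt, 5_000)
token_high = derive_login_token("correct horse battery", salt, 600_000)
```

Raising the iteration count from 5,000 to 600,000 multiplies the cost of every guess, online or offline, by roughly 120x, which is exactly the brute force throttling effect being described.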
Okay, one last thing, then we'll take our second break. Chrome has followed Microsoft and Firefox. Remember the certificate authority TrustCor? Last December, we spent some time looking at the bizarre set of corporate shells and various shenanigans being played by that very shady certificate authority TrustCor. And at the time it was difficult to understand why anyone would trust those clowns based on the history that was revealed. Remember that one of the corporate users of their certificates was a deep packet forensics entity that was found to be selling their TLS middleboxes to agencies of the US federal government. And there was also that individual and private company that was also affiliated, who no one had ever heard of before, who had for some reason received that huge block of previously unused US Department of Defense IPv4 allocation.
You know, there was something really fishy about this whole business. Anyway, finally, in response to a long dialogue with a company representative that convinced no one, Mozilla and Microsoft both marked TrustCor's root certs as untrusted, thus immediately revoking trust from any certificates that had been signed by those certificate authority certs. We're talking about this again because, as I said, Google and Chrome have now followed suit by removing TrustCor's CA certs from Android and Chrome's root stores. So it's pretty much game over for that group, and good riddance. As I've said, you know, I've always been impressed that the browsers are so reticent, like really reluctant, to pull trust from a CA. You know, you've gotta really be bad in order to have that happen. It's gonna break a lot of things. That's why they don't wanna do it. Oh, it's gonna break everything that that CA ever signed, right? <Laugh>. So they don't want the calls. It's expensive, right? All those tech support calls are expensive. I will do an ad. You will hydrate, we shall reconvene in moments, and we're gonna learn whether Chromium is beginning to rust <laugh>.
That's clever. I think I know where you're going with that one. So we're talking about credential stuffing account attacks and so forth. This is very appropriate, cuz our sponsor for this portion of Security Now is, oh, Bitwarden. Yeah, that little password manager that we've been talking about quite a bit lately. You know, and I always wanna be agnostic and say, you know, any password manager is better than none. But I am personally of the opinion, both Steve and I now use Bitwarden, I have for a couple of years now, that the best password manager does a lot of things that you want it to do. For one thing, Bitwarden's open source, and that is really great news. In fact, it's one of the things that means perhaps we're gonna get an alternative to PBKDF2.
Steve texted me a message from a listener, right, that I heard the yabba dabba doo, who said, hey, I heard you talking about scrypt versus PBKDF2, and I thought, since it's open source, he issued a pull request and submitted an scrypt plugin for Bitwarden. Now they have to approve it. Of course they do. You don't want just any random person to add to Bitwarden. But I suspect, somebody else in our Discord said, you know, they have a setting now in Bitwarden that says, choose which key hashing technique you want to use. Currently only PBKDF2 is an option. I think they're getting ready, and I hope they're getting ready, and I'm really thrilled. That's why you want open source. The other thing, and Bitwarden told me this, the free version will be free forever, because it's open source, it's free forever.
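For context on the scrypt-versus-PBKDF2 point above: both key derivation functions are available in Python's standard library, and a side-by-side sketch shows why listeners want the option. The password, salt, and cost parameters below are illustrative, not Bitwarden's actual settings:

```python
import hashlib

password = b"hunter2"
salt = b"per-user-random-salt"

# PBKDF2's cost is purely CPU time: attackers with GPUs or ASICs can
# parallelize guessing relatively cheaply.
pbkdf2_key = hashlib.pbkdf2_hmac("sha256", password, salt, 600_000, dklen=32)

# scrypt is deliberately memory-hard (n is the CPU/memory cost, r the
# block size, p the parallelism), so each parallel guess also needs
# roughly n * r * 128 bytes of RAM, here about 16 MB.
scrypt_key = hashlib.scrypt(password, salt=salt, n=2**14, r=8, p=1,
                            maxmem=64 * 1024 * 1024, dklen=32)
```

The memory-hardness is the design point: it makes massively parallel cracking hardware far more expensive per guess than a pure CPU-cost function.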
So this is a password manager you could try, you could tell your friends to try it, no cost to them. I pay for the premium service, 10 bucks a year <laugh>, because I wanna support 'em. And they also have a variety of other services for businesses: Bitwarden Enterprise, Bitwarden Teams, Bitwarden for Families. By the way, somebody asked, well, if it's open source, how can you trust it? Well, besides the fact that it's open source, so people are reviewing the code, you can trust it because they make money. They do make money. Offering these additional services is how a lot of open source organizations work, and it's great. It's the only cross-platform open source password manager you can use at home, at work, and on the go. It's trusted by millions, including me and Steve, with Bitwarden.
You can securely store your credentials across business and personal worlds. January 22nd through 28th, coming up in a little bit, is Data Privacy Week, leading up to Data Privacy Day. Bitwarden would like to remind everyone: your data is valuable, and so is your privacy. I think you might know that all of your data, of course, in the Bitwarden vault is end-to-end encrypted. Not just your passwords. The URLs of all the websites you have accounts for are encrypted. All that metadata is encrypted. Bitwarden doesn't track your data in the mobile apps. They do crash reporting, and even that is removed in the Android version. So you know, it's open source, right? Anybody can review their library implementations at any time on GitHub, and can review their privacy policies. If you're not technical, read them at bitwarden.com/privacy and be assured they live up to that, because they've got a higher standard, right?
Protect your personal data and privacy with Bitwarden by adding security to your passwords. Oh, and there's one other thing. We were talking about credential stuffing. Well, that's when somebody reuses passwords. Shame on you if you do. But in order for credential stuffing to work, they also have to know your email. So with Bitwarden, I love this, you can create a masked email address that's different for every single login. So now a bad guy would not only have to know your password, maybe you reused it, you shouldn't have, but maybe you did, but they'd also have to know that unique email. And you can do that because they work side by side with SimpleLogin, AnonAddy, Firefox Relay, our other sponsor Fastmail, and now DuckDuckGo. So all of these are services that will create unique masked email addresses that are only used once.
Just like you only use your password once, that adds additional security. And of course I use my YubiKey with Bitwarden. That's one of the reasons I pay for premium, so I know I have a very secure vault. Bitwarden is great for business too. Fully customizable, adapts to your business needs. Their Teams organization, that's the simplest one for smaller teams, $3 a month per user. Their Enterprise organization plan, just $5 a month per user. You get, in addition to all the other features, the ability to share private data securely with coworkers, across departments, or in the entire company. Again, you can get a basic free account, free for an unlimited number of passwords, forever, guaranteed. You might say, well, others guaranteed it and they yanked it. But Bitwarden can't. They're open source. If they did anything like that, people would go and say, well look, okay, fine, we'll go use Leo Warden instead <laugh>, or whatever, right?
Family organization, I love this one. Six users, premium features, $3 and 33 cents a month for the whole family. Bitwarden supports importing and migrating from many other programs too. I could tell you, and I know Steve did this, in fact he talked about how to do it, it's so simple to move from LastPass to Bitwarden. You just export, Bitwarden will read it, and Bob's your uncle, you're done. It's as simple as that. When I moved to Bitwarden, it took me minutes, and I have more than 4,000 saved passwords. The only thing, I mentioned this, I had to do involves binary data: LastPass doesn't export any binary data, pictures of passports or driver's licenses. So you just save those out and put 'em back into Bitwarden, and they're all in my Bitwarden now.
At TWiT, as you know, at Security Now, we believe in password managers. How many times do we have to tell you? Bitwarden's the one I use and recommend. It's the only open source cross-platform password manager you can use everywhere: at home, on the go, at your business. Trusted by millions of individuals, teams, and organizations worldwide. Get started with a free trial of a Teams or Enterprise plan, or get started for free across all devices as an individual at bitwarden.com/twit. And please do me a favor, we want to keep these guys as a sponsor forever, cuz I'll probably be plugging 'em for free. I might as well get paid for it <laugh>. So please use that address so they know that you came to Bitwarden from us: bitwarden.com/twit. They'll know it was Steve. bitwarden.com/twit. I really don't even have to do an ad for Bitwarden.
It just sells itself. But I want to remind you of all the great features Bitwarden has. And we were talking, the other, last week, and I made a completely new, very good, long password, and I used Password Haystacks to pad it out, and I rekeyed my vault, and I turned the PBKDF2 iterations up to 2 million. I didn't need to do any of that, but it makes me feel a little bit better. Now the one thing I still have to do is go through all my passwords and update them, cuz who knows how many were in my LastPass vault. And that's one thing I heard from so many of our listeners, is what a pain in the butt it is to change passwords. Well, and as you said last week, there should be a way to do this.
There should be an API, there should be something, right? Yep. But there isn't. Yep. There's no standard way to do it. Okay, so Chromium is beginning to rust. Google's announcement and blog posting last Thursday is titled Supporting the Use of Rust in the Chromium Project. They wrote: we are pleased to announce that moving forward, the Chromium project is going to support the use of third party Rust libraries from C++ in Chromium. That is, you know, libraries called from C++. To do so, we're now actively pursuing adding a production Rust toolchain to our build system. This will enable us to include Rust code in the Chrome binary within the next year. We're starting slow and setting clear expectations on what libraries we will consider once we're ready. Our goal in bringing Rust into Chromium is to provide a simpler and safer way to satisfy the rule of two.
And I actually skipped over the fact at the top of the show that that is today's podcast title, the Rule of Two, which we'll be talking about here in a minute. So they said, our goal in bringing Rust into Chromium is to provide a simpler and safer way to satisfy the rule of two, whatever that is, in order to speed up development, less code to write, less design docs, less security review, and improve the security, meaning increasing the number of lines of code without memory safety bugs and decreasing the bug density of code, of Chrome. And they said, and we believe that we can use third party Rust libraries to work toward this goal. And they finished: Rust was developed by Mozilla specifically for use in writing a browser, so it's very fitting that Chromium would finally begin to rely on this technology too. Thank you, Mozilla, for your huge contribution to the systems software industry.
Rust has been an incredible proof that we should be able to expect a language to provide safety while also being performant. And God, I hate that word, performant. I know <laugh>, it just seems, is it Apple who talks about the learnings? No, Microsoft says learnings. Yeah, Microsoft is learnings. The whole tech industry has its own vocabulary, and it's, yeah, it's the business vocabulary. Yeah. Anyway, everyone listening to this podcast has heard me lament that we're never gonna get ahead of this beast of software flaws if we don't start doing things differently. You know, what was that definition of insanity? Anyway, it's great news that the Chromium project is taking this step. It will be a slow and very evolutionary move to, you know, have an increasing percentage of the Chromium code base written in Rust. But, you know, this is the way that effort and this eventuality gets started.
You know, you gotta start somewhere. So, and you may have noted, as I said, Google's announcement mentioned this rule of two, which we'll be taking an in-depth look at here in a minute. But first we have another instance of BYOVD, bring your own vulnerable driver, which was just in the news this past week. CrowdStrike documented their observation and interception of an eCrime, I hadn't seen that term before, now we have eCrime, adversary known variously as Scattered Spider, Roasted 0ktapus, and UNC3944 for those who are not very imaginative. This group leverages a combination of credential phishing and social engineering attacks to capture one-time password codes, or to overwhelm their targets using that multifactor authentication notification fatigue that we were talking about before, where they just finally say, okay, fine.
I don't know why I'm being asked, but fine. And then, you know, that lets the bad guys in. Once the bad guys have obtained access, the adversary avoids using unique malware, which might trip alarms. Instead, they favor the use of a wide range of legitimate remote management tools, which allows them to maintain persistent access inside their victims' networks. CrowdStrike's instrumentation detected a novel attempt by this adversary to deploy a malicious kernel driver through an old and very well known eight-year-old vulnerability dating from 2015, which exists in the Intel Ethernet diagnostics driver for Windows. That is a legitimate driver published in 2015 by Intel for performing Ethernet diagnostics. That's it. The file is iqvw64e.sys. The distressing factor is that the technique of using known vulnerable kernel drivers has been in use by adversaries for several years and has been enabled by a persistent gap in Windows security.
And we've talked about this before. Starting with the 64-bit edition of Windows Vista, and ever since, Windows does not allow unsigned kernel mode drivers to run by default. You know, that was easy to do, right? It just shut down one path of exploitation. But clever attackers started bringing their own drivers that were signed, like legitimately signed, like Intel signing this Ethernet diagnostics driver. They would bring that along, cause Windows to install it, and then exploit the known vulnerabilities in this driver. Okay, so what do we do about that? Well, in 2021, you know, about two years ago, Microsoft stated, this is Microsoft, quote: increasingly, adversaries are leveraging legitimate drivers in the ecosystem and their security vulnerabilities to run malware. And, quote, drivers with confirmed security vulnerabilities will be blocked on Windows 10 devices in the ecosystem using Microsoft Defender for Endpoint attack surface reduction (ASR) and Microsoft Windows Defender Application Control (WDAC) technologies to protect devices against exploits involving vulnerable drivers to gain access to the kernel.
Well, that was the plan. How did that work out? As we discussed some time ago, multiple security researchers through the past two years have repeatedly and loudly noted, pounding on Microsoft, that this was all apparently just feel-good nonsense spewed by Microsoft, and that the issue continues to persist, as Microsoft continually fails to actually block vulnerable drivers by default. The crux of the problem appears to be that any such proactive blocking requires, well, proactive action from Microsoft. And any fair weighing of the evidence, you know, many examples of which we've looked at during the past two or three years here, would conclude that Microsoft has long since abandoned any commitment to true proactive security. They no longer, you know, find most of their own problems in Windows. They increasingly rely upon the good graces of outside security researchers. And even then, they drag their feet over implementing the required updates which have been handed to them by the security community.
Okay. That said, though, being fair to Microsoft, there is a flip side to this. We know that the last thing Microsoft ever wants to do is deliberately break anything. They have enough trouble with inadvertently breaking things, let alone doing so deliberately. So proactively blacklisting anything, especially something like a network driver that could potentially cut a machine off from its own network, well, that's the last thing that Microsoft would choose to do, assuming that a choice was being made in the first place. But as I was thinking about this, it occurred to me that this is something that a third party could do on behalf of users who subscribe. I'm bringing this back up again because, you know, this was not supposed to happen. The last time we spoke about this last year, the presumption was, based upon clear statements made by Microsoft at the time, that this had all been a big oversight mistake for several years, and that all of that was gonna be better now. But this latest news is from last week, and this was not actually fixed.
Okay? So there's that. But the bigger point I want to make is that this all needs to be made proactive. Somebody needs to be proactive. The need for Microsoft to be proactively blocking known vulnerable drivers seems like something we are never gonna be able to get from Microsoft, for the foreseeable future at least. This appears to no longer be in their DNA. You know, that's just not who they are any longer. And we've seen cycles, right? They've swung from one side to the other. Maybe they'll swing back. We can hope, but that's not where they are today. But, you know, it occurred to me that this presents a huge and significant opportunity for some third party security company to solve this problem. Be proactive in a critical area where Microsoft refuses to be. It's a simple thing to do. Get the list of all of the previously known vulnerable drivers, create an app that looks to see if any of them are in use.
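A toy Python sketch of the kind of scan being proposed. The blocklist here is a hypothetical hand-picked sample for illustration; a real product would consume a maintained feed such as Microsoft's recommended driver block rules:

```python
from pathlib import Path

# Hypothetical sample list; a real tool would pull a maintained blocklist
# of driver files with known exploitable vulnerabilities.
KNOWN_VULNERABLE = {"iqvw64e.sys", "dbutil_2_3.sys", "rtcore64.sys"}

def find_vulnerable_drivers(driver_dir: Path) -> list[str]:
    """Return the names of installed *.sys files that match the blocklist."""
    return sorted(p.name for p in driver_dir.glob("*.sys")
                  if p.name.lower() in KNOWN_VULNERABLE)

# On a real system you'd point this at the Windows drivers directory
# (and, ideally, match on file hashes rather than names alone).
```

Name matching is only the crudest first pass; matching on cryptographic hashes or signing certificates would be needed in practice, since attackers can trivially rename a driver file.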
If so, tell your subscriber to update the driver to a new one so that it can then be blocked from malicious future use. Anyway, my sense is there's a big opportunity here, you know, for someone to make themselves a lot of money by closing what is a persistent and gaping hole in Windows security. I hope somebody will do it. Okay, a piece of closing-the-loop feedback. My Twitter feed was so overwhelmed with feedback from last week's call for feedback that I was unable to reply to tweets as I usually try to do. So to everyone who tweeted, please accept my thanks for the feedback and my apology that I probably didn't reply. I know that replying is not required, but courtesy and chivalry are not dead here. As many of my regular Twitter correspondents know, you know, I often reply when I can. But one public tweet caught my eye for its cleverness, which I wanted to share.
Somebody tweeting as Mammalian tweeted to @SGgrc: sometimes it is hard to picture just how many more random strings there are than English words. So try this sort of a thought experiment: how often have you ever seen real words coincidentally appear in randomly generated passwords? He said, I noticed it today for, I think, the second time ever, period. So anyway, I think that's a very clever and worthwhile observation. It helps us to truly appreciate how much less entropy exists in non-random text which we recognize as words in our language, you know, such that it almost never appears by chance. I don't think I've ever noticed any significant word, you know, maybe "is" or something, in truly random strings of characters. So anyway, I just thought that was a cool observation. Okay, the rule of two.
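As an aside, the rarity that tweet describes is easy to quantify with a back-of-the-envelope calculation. The word length and password length below are arbitrary choices for illustration:

```python
import string

ALPHABET = len(string.ascii_lowercase)  # 26 letters

def chance_word_appears(word_len: int, password_len: int) -> float:
    # Probability that one specific word of word_len letters shows up
    # somewhere in a random lowercase string of password_len characters.
    # Union bound over the sliding windows, which is a fine approximation
    # for probabilities this small.
    windows = password_len - word_len + 1
    per_window = (1 / ALPHABET) ** word_len
    return windows * per_window

# A specific 4-letter word in a 16-character random password:
p = chance_word_appears(4, 16)  # 13 windows / 26^4, about 0.003%
```

Even summed over every common English four-letter word, the odds stay tiny, which is why spotting a real word in a random password is, as the tweet says, roughly a twice-in-a-lifetime event.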
I'm gonna get into this, then we'll take our third break, and then we will continue. The rule of two. In their posting about the adoption of Rust by Chrome, Google mentioned something they called the rule of two. In Google's official Chromium docs, they explain what this so-called rule of two is. They said: when you write code to parse, evaluate, or otherwise handle untrustworthy inputs from the internet, which, they said, is almost everything we do in a web browser, we like to follow a simple rule to make sure it's safe enough to do so. They said the rule of two is: pick no more than any two of untrustworthy inputs, unsafe implementation language, and high privilege. And just to highlight the point, on the page in the Chromium master docs where they show this, they give us a Venn diagram where one circle is code which processes untrustworthy inputs; another circle is code written in an unsafe language, and they show C and C++ as examples.
That's another Venn diagram circle. And then the third one is code which runs with no sandbox, for example in the main browser process. And so <laugh>, they've got all three circles with overlapping regions. And in the center, where all three circles overlap <laugh>, they've got, in big red all caps, DOOM, exclamation point, don't do this. That is, do not operate in that place where all three of these things are true: untrustworthy inputs, unsafe language, unsandboxed execution. Now, of course, this is reminiscent of that great old saying that in a project you can choose any two, but only two, of the following three outcomes: good, fast, or cheap. You can't have all three. You know, you'll need to sacrifice the thing that's least important to you among them. You can have something good and fast, but then it won't be cheap to get it.
Or you can have something fast and cheap, but then it won't also be good. Or you can have something good and cheap, but then you won't be able to get it fast. So, similarly, for Google's rule of two: untrustworthy inputs, unsafe implementation language, and high privilege. Okay, can we just not have any of the above? <Laugh> I would like to <laugh>. I mean, why do we have to pick two? <Laugh> Well, yeah, wouldn't that be nice? None of the above. On the other hand, our browser is going out into the wild west of the internet, right? I mean, you might go to some site and, you know, God help you, who knows what your browser is gonna pick up? Okay? So let's take a look at these. Leo, tell us about our last sponsor, then we're gonna dig into the Venn diagram. The Venn diagram seems to me,
if you're going to the wild west, you know what, you don't want any of those <laugh>, let alone two of them <laugh>. Okay? But I guess it's hard to do in real world environments. So we do have people who have a separate machine running, yeah, only a browser, and it's off their network, and that's what they use. It's like isolated. A lot of unsophisticated people like this, because I've recommended it on the radio show for years: get a Chromebook for your banking and just do that. And a lot of people like that idea. Like, good, cuz then I can use my Windows machine as I wish, but I'll know that I'm pretty safe when I'm doing my banking. Right. You know, I don't think that's an unusual idea. Not a bad idea. And the Chromebook has the wash button too. The Powerwash.
Yeah. Yeah. And it of course checks for signed firmware, so it's harder to, yep, you know, do bad things to it. Let me talk about our sponsor, and we'll get to the rest of the rule of two in a moment. It's Barracuda. Barracuda just did an email trends survey <laugh>, and the results were interesting. 43% of respondents, 43%, this is almost half, said they had been a victim of a spear phishing attack. Even worse, and we're talking to companies here, enterprises, only 23% said they have dedicated spear phishing protection. Ooh, there's a big gap there of 20%. Well, no wonder they were victims. How are you keeping your email secure? Barracuda's identified 13 types of email threats, 13 ways bad guys use to get into your system, to get to your employees.
Of course, phishing, conversation hijacking, ransomware. That's just three. There's 10 more tricks cyber criminals use to steal money from your company or personal information from your employees and customers. And you gotta ask yourself, I know every day I ask myself, are we protected against all 13 types? Email cyber crime is becoming a real bane, right? Much more sophisticated attacks are more difficult to prevent. They use social engineering. They will use urgency and fear to get your employees to go, oh, I gotta act right now. Social engineering attacks, including spear phishing and business email compromise, cost businesses on average $130,000 an incident. Can you afford that? And then if there's another one tomorrow, could you afford that? And again, the day after that? I'll give you some examples. Last year, at the start of 2022, when the demand for COVID-19 tests was skyrocketing,
Barracuda researchers, cuz they're always monitoring what's going on, saw an increase in COVID-19 test-related phishing attacks. In fact, between October 2021 and January 2022, they went up 521%. The bad guys are cagey, they're smart, they pay attention to what's going on in the world, and their attacks are ripped right from the headlines. When cryptocurrency was going up, and by the way, it's going up again, when it was going up back in late 2020, Barracuda research found that impersonation attacks using terms like cryptocurrency went up 192% between October 2020 and April 2021. In 2020, the last time we got statistics, the Internet Crime Complaint Center, IC3, received in one year 19,000 business email compromise and email account compromise complaints. Adjusted losses for those complaints: $1.8 billion. Okay, I think I've scared you enough <laugh>. Have I got enough urgency and fear into you?
Securing email at the gateway is not enough anymore, right? You should certainly do that. You gotta leverage gateway security to protect against, you know, more traditional stuff, viruses and spam and all sorts of things. But these targeted attacks, your gateway isn't gonna stop, cuz they come in under your name, to your employees by name, things like that. You need protection at the inbox level, and you need protection that is nimble, that moves, that changes as the attacks change. That's why you need AI and machine learning, because they learn from the attacks and modify themselves constantly to stay ahead of the bad guys. That's how you detect and stop the most sophisticated threats. Get a free copy of the Barracuda report. This is easy, it's a no-brainer, it's absolutely free: 13 email threat types to know about right now.
Kind of an important part of your learning, for you and for your employees and for your security team as well. You'll see how cyber criminals are getting more and more sophisticated every day and how you can build the best protection for your business and your data and your people using Barracuda. Find out about the 13 email threat types you need to know about and how Barracuda can provide complete email protection for your teams, your customers, and your reputation. Go beyond protection at the perimeter. Get some deep protection right there in your inbox. Get that free ebook, barracuda.com/securitynow. I guarantee you it's an eye-opener. It's fascinating. barracuda.com/securitynow. Barracuda, your journey secured. We thank you, Barracuda, for the work you do. We thank you for supporting Steve, and we thank you, dear listener, for supporting Steve by going to that address. Barracuda, B-a-r-r-a-c-u-d-a, barracuda.com/securitynow.
Alright, back to Steve. And two things. Yes. So we're gonna walk through this piece where Google is explaining the Rule of Two. And this was actually written as advice and guidance for would-be Chrome developers. So Google explains, excuse me: when code that handles untrustworthy inputs at high privilege has bugs, so untrustworthy inputs at high privilege and bugs, the resulting vulnerabilities are typically of critical or high severity. They said, we'd love to reduce the severity of such bugs by reducing the amount of damage they can do by lowering their privilege, avoiding the various types of memory corruption bugs by using a safe language, or reducing the likelihood that the input is malicious in the first place by asserting the trustworthiness of the source. So they said, for the purposes of this document, our main concern is reducing and hopefully ultimately eliminating bugs that arise due to memory unsafety.
They said, a recent study by Matt Miller from Microsoft Security states that around 70% of the vulnerabilities addressed through a security update each year continue to be memory safety issues. A trip through Chromium's bug tracker will show many, many vulnerabilities whose root cause is memory unsafety. They said, as of March 2019, only about five out of 130 public critical severity bugs are not obviously due to memory corruption. Only five out of 130 were not memory corruption. They said, security engineers in general, very much including the Chrome security team, would like to advance the state of engineering to where memory safety issues are much more rare. Then we could focus more attention on the application semantic vulnerabilities. That would be a big improvement. Okay, so it's clear that the historic use of C and C++ has been the source of a great many past security vulnerabilities, despite the coders using those languages doing the very best jobs they can.
There's no date on this document, but it feels a few years old, since they were citing Matt Miller's well-known research, which we've cited before here, from March of 2019. From what was written above, it's also clear that, you know, they were wishing and hoping then for the move that Google announced just this past week, with the formal incorporation of Rust as a first-class Chromium implementation language. Okay, so let's flesh out each of these three factors in the context of browser implementation. First, untrustworthy inputs. They explain that untrustworthy inputs are inputs that (a) have non-trivial grammars, meaning, you know, a complex language to figure out, and/or (b) come from untrustworthy sources. So Google explains, if there were an input type so simple that it was straightforward to write a memory safe handler for it, we wouldn't need to worry much about where it came from for the purposes of memory safety, because we'd be sure we could handle it.
But unfortunately, these are not simple languages. Okay? So Google said, any arbitrary peer on the internet is an untrustworthy source unless we get some evidence of its trustworthiness. They said, which includes at least a strong assertion of the source's identity. They said, if we can know with certainty that an input is coming from the same source as the application itself, for example Google in the case of Chrome, or Mozilla in the case of Firefox, and that the transport is integrity protected over HTTPS, then it can be acceptable to parse even complex inputs from that source. They said, it's still ideal, where feasible, to reduce our degree of trust in the source, such as by parsing the input in a sandbox. And we'll be talking about that in a second. Okay, so that was untrustworthy inputs. What about implementation language?
Google explains, unsafe implementation languages are languages that lack memory safety, including at least C, C++, and assembly language. Memory safe languages include Go, Rust, Python, Java, JavaScript, Kotlin, and Swift. And then they said, note that the safe subsets of these languages are safe by design, but of course implementation quality is a different story. Okay, so what about unsafe code in safe languages, for which there are often provisions? Google said, some memory safe languages provide a backdoor to unsafety, such as the unsafe keyword in Rust. This functions as a separate unsafe language subset inside the memory safe language. The presence of unsafe code does not negate the memory safety properties of the memory safe language around it as a whole. But how unsafe code is used is critical. Poor use of an unsafe language subset is not meaningfully different from any other unsafe implementation language.
So they said, in order for a library with unsafe code to be safe for the purposes of the Rule of Two, all unsafe usage must be able to be reviewed and verified by humans with simple local reasoning. To achieve this, we expect all unsafe usage to be three things: first, small, the minimal possible amount of code to perform the required task; second, encapsulated, all access to the unsafe code is through a safe and understood API; and third, documented, all preconditions of an unsafe block, meaning a block of code, for example a call to an unsafe function, are spelled out in comments, along with explanations of how they're satisfied. So in other words, they said, where a safe language such as Rust provides facilities for breaking out of safety in order to address some need, that region of unsafety must be small, contained, completely understood, and well documented.
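A toy Rust example, my own illustration and not anything from Chromium, can show all three requirements at once: the unsafe region is tiny, it's hidden behind a safe function, and its precondition is documented in a SAFETY comment.

```rust
/// Return the first byte of `buf`, or zero if it is empty.
/// Hypothetical example of a small, encapsulated, documented
/// unsafe block behind a safe API; not actual Chromium code.
fn first_byte_or_zero(buf: &[u8]) -> u8 {
    if buf.is_empty() {
        return 0;
    }
    // SAFETY: we just checked that `buf` is non-empty, so index 0
    // is in bounds. `get_unchecked` skips the second bounds check.
    unsafe { *buf.get_unchecked(0) }
}

fn main() {
    assert_eq!(first_byte_or_zero(b"hello"), b'h');
    assert_eq!(first_byte_or_zero(b""), 0);
    println!("ok");
}
```

Callers only ever see the safe `first_byte_or_zero` signature, so the unsafe reasoning stays local to these few lines, which is exactly what "simple local reasoning" asks for.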
So they continue: because unsafe code reaches outside the normal expectations of a memory safe language, it must follow strict rules to avoid undefined behavior and memory safety violations, and these are not always easy to verify. A careful review by one or more experts in the unsafe language subset is required. It should be safe to use any code in a memory safe language in a high privileged context. Okay, so there's two of the rules: it should be safe to use any code in a memory safe language in a high privileged context. As such, the requirements on a memory safe language implementation are higher. All code in a memory safe language must be capable of satisfying the Rule of Two in a high privilege context, including any unsafe code that it encapsulates, in order to be used or admitted anywhere in this project. Okay? So that was interesting.
I mean, they're saying, we're following these rules. If you can't satisfy the Rule of Two, the code is not coming into Chromium. Okay, so finally, code privilege, that's the third item, from which we can only have two. So Google explains, high privilege is a relative term. The very highest privilege programs are the computer's firmware, the boot loader, the kernel, any hypervisor or virtual machine monitor, and so on. They said, below those very top ones are processes that run as an OS-level account representing a person. This includes the Chrome browser process. They said, we consider such processes to have high privilege. After all, they can do anything the person can do with any and all of the person's valuable data and accounts. Processes with slightly reduced privilege include, as of March 2019, the GPU process and, hopefully soon, the network process.
They said, these are still pretty high privileged processes. We are always looking for ways to reduce their privilege without breaking them. Low privilege processes include sandboxed utility processes and renderer processes with site isolation, which is very good, or origin isolation, which is even better. Okay, so Google then talks about two topics that we've discussed through the years, and we've observed over and over how difficult they appear to be to get right: parsing and deserialization. Remember that deserializing is essentially an interpretation job, and interpretation is notoriously difficult to get correct, because it appears to be nearly impossible for the coder of the interpreter, who inherently expects the input to be sane, to adequately handle inputs that are malicious. So Google says, turning a stream of bytes into a structured object, that's the deserialization, is hard to do correctly and safely.
For example, turning a stream of bytes into a sequence of Unicode code points, and from there into an HTML document object model tree with all of its elements, attributes, and metadata, is very error prone. The same is true of QUIC packets, video frames, and so on. Whenever the code branches on the byte values it's processing, the risk increases that an attacker can influence control flow and exploit bugs in the implementation. Although we are all human and mistakes are always possible, a function that does not branch on input values has a better chance of being free of vulnerabilities. And then they say, consider an arithmetic function such as SHA-256, for example. And I thought that was a really interesting observation. We made SHA-256 branch free so that differing code paths would not leak timing information and would not leave exploitable hints in our processor's branch prediction history.
But a side effect of that also increased the algorithm's robustness against deliberate code path manipulation, because there is none. So anyway, what surprised me a bit is that the Chromium security team is, as I said, extremely literal about the application of this Rule of Two. They're not joking around. They actually apply the rule when evaluating new submissions, and they wrote some advice to those who would submit code to the Chromium project. They said, Chrome's security team will generally not approve landing a new feature that involves all three of untrustworthy inputs, unsafe language, and high privilege. To solve this problem, you need to get rid of at least one of those three things. Here are some ways to do that. Okay, privilege reduction, obviously one of the three things. They said, also known as sandboxing, privilege reduction means running the code in a process that has had some or many of its privileges revoked. When appropriate, try to handle the inputs in a renderer process that is isolated to the same site as the inputs came from.
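To make that branch-free idea concrete, here's a small Rust sketch of my own, the classic constant-time byte comparison. It's not Chromium's SHA-256 code, just the same principle: control flow never depends on the data values, so there's nothing for an attacker to steer and nothing left behind in the branch predictor.

```rust
/// Compare two byte slices without ever branching on their contents.
/// Only the lengths, which are not secret here, affect control flow;
/// differences in the data are accumulated arithmetically instead.
fn eq_branch_free(a: &[u8], b: &[u8]) -> bool {
    if a.len() != b.len() {
        return false; // branching on a public length is fine
    }
    let mut diff: u8 = 0;
    for (x, y) in a.iter().zip(b.iter()) {
        diff |= x ^ y; // stays 0 only if every byte pair matches
    }
    diff == 0
}

fn main() {
    assert!(eq_branch_free(b"secret", b"secret"));
    assert!(!eq_branch_free(b"secret", b"secreT"));
    println!("ok");
}
```

One caveat: an optimizing compiler can reintroduce branches behind your back, which is why production constant-time code is usually verified at the assembly level rather than trusted at the source level.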
Take care to validate the parsed, processed inputs in the browser, since only the browser can trust itself to validate and act on the meaning of an object. Or you can launch a sandboxed utility process to handle the data and return a well-formed response back to the caller in an inter-process communication message. So, okay, these ideas are structural means for creating an arm's length, essentially a client-server relationship, where a low privileged worker process does the unsafe work and simply returns the results to the higher privileged client. You know, that way, if something does go sideways, there's containment within the process, which cannot do much with its malicious freedom, because it doesn't actually have much freedom to be malicious with. As for verifying the trustworthiness of a source, they say that if the developer can be sure that the input comes from a trustworthy source, you know, one not attempting to be malicious, it can be okay to parse and evaluate it at high privilege in an unsafe language, even though that seems scary.
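The shape of that client-server split can be sketched in Rust. A real browser uses a separate sandboxed process and a proper IPC channel, but a thread plus a standard-library channel shows the same request-and-response structure. Everything here, the `Parsed` struct, the `untrusted_parse` function, the "WIDTHxHEIGHT" input format, is hypothetical, invented purely for illustration:

```rust
use std::sync::mpsc;
use std::thread;

// The small, well-formed result the privileged side receives back.
#[derive(Debug, PartialEq)]
struct Parsed {
    width: u32,
    height: u32,
}

// Stands in for the risky work: parse untrusted "WIDTHxHEIGHT" bytes.
// In the real design this would run inside a sandboxed utility process.
fn untrusted_parse(input: &[u8]) -> Option<Parsed> {
    let text = std::str::from_utf8(input).ok()?;
    let (w, h) = text.split_once('x')?;
    Some(Parsed {
        width: w.parse().ok()?,
        height: h.parse().ok()?,
    })
}

fn main() {
    let (tx, rx) = mpsc::channel();
    let input = b"640x480".to_vec();
    // A thread stands in for the low-privilege worker process.
    thread::spawn(move || {
        let _ = tx.send(untrusted_parse(&input));
    });
    // The "privileged" side only ever sees the simple result type.
    let result = rx.recv().unwrap();
    assert_eq!(result, Some(Parsed { width: 640, height: 480 }));
    println!("ok");
}
```

The point of the split is that even if the worker is compromised by malicious input, all it can hand back across the channel is an `Option<Parsed>`, not arbitrary control of the privileged side.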
In this instance, they say, a trustworthy source means, you know, that Chromium can cryptographically prove that the data comes from a business entity that can be, or is, trusted; for example, in the case of Chrome, coming from one of the Alphabet companies. Google then talks about ways to make code safer to execute, under this title of normalization. Writing to would-be Chromium coders, they explain, they said, you can defang, literally their term, defang, a potentially malicious input by transforming it into a normal or minimal form, usually by first transforming it into a format with a simpler grammar. They said, we say that all data, file, and on-the-wire formats are defined by a grammar, even if that grammar is implicit or only partially specified, as is so often the case. They said, for example, a data format with a particularly simple grammar is, and they have an internal data structure, SkPixmap, which, you know, basically is a simple pixel map.
They said, this grammar is represented by the private data fields: a region of raw pixel data, the size of that region, and simple metadata which directs how to interpret the pixels. They said, unfortunately, it's rare to find such a simple grammar for input formats. For example, consider the PNG image format, which is complex and whose C implementation has suffered from memory corruption bugs in the past. An attacker could craft a malicious PNG to trigger such a bug. But if you first transform the image into a format that doesn't have PNG's complexity, in a low privilege process of course, the malicious nature of the PNG should be eliminated, basically, you know, defanged, as they said, or purified, and then be safe for parsing at a higher privileged level. Even if the attacker manages to compromise the low level process with a malicious PNG, the high privileged process will only parse the compromised process's output with a simple, plausibly safe parser.
If that parse is successful, the higher privileged process can then optionally further transform it into a normalized minimal form, such as to save space. Otherwise, the parse can fail safely, without memory corruption. So they said, for example, it should be safe enough to convert a PNG into this SkBitmap in a sandboxed process and then send the SkBitmap to a higher privileged process via an inter-process communication. Although there may be bugs in the inter-process communication message deserialization code and/or the SkBitmap handling code, they said, we consider that safe enough. So I think that the interesting message here, for those of us who are not writing code for a browser, is first of all to be thankful that we're not. Yeah, no kidding. That is, boy, that is not an easy job. And secondly, to more deeply appreciate just how truly hostile this territory is. You know, it is a true battlefield.
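The "simple, plausibly safe parser" on the privileged side can be almost trivially small. Here's a hypothetical Rust sketch of my own, loosely echoing the SkPixmap idea of raw pixels plus minimal metadata; it is not Skia's actual layout. The only check left to do at high privilege is that the metadata and the pixel buffer agree with each other:

```rust
/// A hypothetical minimal bitmap: raw RGBA pixels plus just enough
/// metadata to interpret them. Not Skia's real SkPixmap/SkBitmap.
struct SimpleBitmap {
    width: u32,
    height: u32,
    rgba: Vec<u8>, // 4 bytes per pixel
}

/// The high-privilege side's entire "parser": accept the sandboxed
/// decoder's output only if dimensions and buffer are consistent.
/// checked_mul guards the size arithmetic against integer overflow.
fn validate(bmp: &SimpleBitmap) -> bool {
    (bmp.width as usize)
        .checked_mul(bmp.height as usize)
        .and_then(|pixels| pixels.checked_mul(4))
        .map_or(false, |expected| expected == bmp.rgba.len())
}

fn main() {
    let good = SimpleBitmap { width: 2, height: 2, rgba: vec![0; 16] };
    assert!(validate(&good));
    let bad = SimpleBitmap { width: 2, height: 2, rgba: vec![0; 15] };
    assert!(!validate(&bad));
    println!("ok");
}
```

Compare that handful of lines to a full PNG decoder: there's simply no grammar left for a compromised worker's output to exploit, which is the whole point of normalization.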
I've often noted that the browser is that part of our systems that we blindly thrust out into the world and hope that it doesn't return encrusted with any plagues. You know, when this environment is coupled with the insane complexity of today's browsers, it's truly a miracle that they protect us as well as they do. And all of this is up against the crushing backdrop of the imperative for performance. Google notes, they said, quote, we have to accept the risk of memory safety bugs in deserialization, because C++'s high performance is crucial in such a throughput and latency sensitive area. If we could change this code to be both in a safer language and still have such high performance, that would be ideal, but that's unlikely to happen soon. So this was written several years ago. Rust was noted earlier, so at least it was on their radar.
And it will be interesting to see whether that might be the right compromise, if it would be possible to move this most risky aspect, which, you know, they have had to keep in C++ because this is the pinch point in the performance pipeline for getting the page on the screen. Could they actually re-implement this in a memory safe language? Well, I guess we're gonna see. But for now it's clear that the reason Task Manager shows us 30 processes spawned when Chrome or Edge launch is for containment. Low privileged processes are being created and are given the more dangerous and performance critical tasks to perform. They're time critical, so they're written in an unsafe language to go as fast as they possibly can, but they're created in a low privileged separate process, even though there's some overhead talking back and forth between processes.
They said, one could imagine Kotlin on Android, too, although it is not currently used in Chromium. Some of us on the security team aspire to get more of Chromium in safer languages, and you may be able to help with our experiments. And of course we know from their announcement last week, Rust has made that move. Okay, so at this point there was no mention of Rust's adoption, and this was a couple years ago, but we now know that's changing. This interesting discussion and guidance for would-be Chromium developers concludes by noting that all of this is aspirational, and that unfortunately it does not even reflect Chromium's current state. Under the final heading of existing code that violates the rule, they write: we still have code that violates this rule. For example, Chrome's Omnibox, you know, the single URL and search box up at the top of the UI, still parses JSON in the browser process.
Additionally, the networking process on Windows is at present unsandboxed by default, though there is ongoing work to change that default. Okay? So we're seeing an evolution across our industry. Web browsers have become so capable that they're now able to host full applications. You know, that has enabled the relocation of those applications from our local desktop to the cloud, and a redefinition of the customer from an owner to a tenant. I, for one, hate this change, but those of us who feel this way are dying off, and we're clearly irrelevant anyway. But, you know, neither are my hands completely clean, since I'm composing these show notes in Google Docs, which is a stunning example of how well this new system can work. But the gating requirement for any of this to work, and for any of this future to unfold, is for our web browsers to survive on the front line of an astonishingly hostile internet.
No one who's been following this podcast for the past few years could have any doubt of the open hostility that today's web browsers face every time they suck down another page loaded with unknown code of unknown provenance, containing pointers to other pages of code, with the need to go get, load, and run that code too. I mean, it just makes you shudder. So I'm very glad that Google's security team is thinking about the problems they're facing, that they take mitigations such as this Rule of Two as seriously as they do, and that they're finally beginning to migrate to the use of safer languages, which I'll note was made possible by Mozilla pioneering this wonderful Rust development language. Is that, do you think, the only one they could use, or is it the best, or have you even thought about that at all? I mean, I know you use assembly. It may be Mozilla's influence, you know. I mean, even though they seem like competitors, there is a lot of cross-pollination. Sure.
Between Chromium and Firefox. Yeah. Yeah. Very cool. Yeah, and everything we hear, everything we hear about Rust says that it is a serious implementation language. I mean, like a systems level, systems implementation language. Yeah. Yeah. It's a really interesting language. Yeah. Steve Gibson once again has done it. He's put together two hours of fascinating conversation about the things we care about the most. Thank you, Steve. You did it again. Well, we also found out that LastPass turned out not to be doing those additional rounds to help us. Disappointing. I was hoping, I hoped for several hours that that was the case, <laugh>, but no. Oh, well. You'll find Steve at grc.com. That's his website, the Gibson Research Corporation. There's some good stuff there. Of course, SpinRite, the world's finest mass storage maintenance and recovery tool. That's available now in version 6.0, proven bug free over the last 18 years, <laugh>, but soon to be 6.1.
Also bug free for another 18 years. You'll find that in process. But if you buy 6.0 now, you will get 6.1 as soon as it comes out, which should be fairly soon. I did release Alpha 9. Nice. Which had a huge slew of new features, and so we're in the process of getting that tested. We're getting closer. Good, good. Yeah, that's a good reason to get it right now. While you're there, you can also get a copy of this show. Steve has two versions we don't have: a 16-kilobit audio version for the bandwidth impaired, and transcriptions, handcrafted transcriptions by Elaine Farris. So you can read along as you listen, or use them for search. That's firstname.lastname@example.org. Lots of other great stuff there, so browse around. There is a feedback form, grc.com/feedback, but he also is on the Twitter, as you heard him mention, at SGgrc, which means his DMs are open.
You also can DM him there, but he has been swamped lately, so don't expect a personal reply. He'll do his best. I do what I can. I know, I know you do. That's why you're not joining us on Mastodon. It's all right. I understand <laugh>. I understand, Steve <laugh>. Steve also can be found on our website, twit.tv/sn. We record the show on Tuesdays at about 1:30 to 2:00 PM Pacific, 5:00 PM Eastern, 2200 UTC. If you wanna watch us do it, email@example.com, you can firstname.lastname@example.org. Club TWiT members can chat in our Discord. A lot of fun in there. It's a great place to hang, not just for shows, but for all kinds of topics. We have a big coding section in there. We talk about Rust a lot. Alcohol, sports ball. I'm sure they'll be talking about curling next.
It's all just part of the fun in Club TWiT, the clubhouse, so to speak. That's our Discord. But there's also ad-free versions of all the shows. There's shows we don't put out in public, like Hands-On Macintosh with Micah Sargent, Hands-On Windows with Paul Thurrott. We've got some great special club events coming up, including Thursday: Lisa and I will be doing a kind of an AMA, Inside TWiT. If you're not a club member, it's not too late. Seven bucks a month, that's all it costs. twit.tv/clubtwit. And we thank all our club members in advance for all they do for us. It really helps us put these shows on, keep the lights on, keep the staff employed and happy. And it buys snacks in the kitchen, which are also very welcome, after the fact.
There is, of course, a free version, as I said, on the website, twit.tv/sn. There's a YouTube channel, it's also free, dedicated to Security Now. Best way to get it, though, really, would be to subscribe in your favorite podcast player. All you gotta do is find a player and search for Security Now, and Bob's your uncle. I actually have an Uncle Bob, but that's a story for another day. If I've said everything that needs to be said, and I think I have, except: have a great week, and we will see you next time, Steve, on Security Now. See you, my friend. Till next week.
Rod Pyle (01:54:21):
Hey, I'm Rod Pyle, editor-in-chief of Ad Astra magazine. And each week I join with my co-host to bring you This Week in Space, the latest and greatest news from the final frontier. We talk to NASA chiefs, space scientists, engineers, educators, and artists, and sometimes we just shoot the breeze over what's hot and what's not in space books and TV. And we do it all for you, our fellow true believers. So whether you're an armchair adventurer or waiting for your turn to grab a slot in Elon's Mars rocket, join us on This Week in Space and be part of the greatest adventure of all time.