Transcripts

Security Now 962 Transcript

Please be advised this transcript is AI-generated and may not be word for word. Time codes refer to the approximate times in the ad-supported version of the show.

 

0:00:00 - Leo Laporte
It's time for Security Now. Steve Gibson is here. Coming up, we're gonna talk about the webcams that accidentally put everybody's video in everybody else's house. Patch Tuesday is here. Well, it was last week; Steve has some notes on that. Why the Flipper Zero is being banned in Canada, and a nightmare scenario with DNSSEC that could have brought the whole internet to its knees. Steve explains next on Security Now. Podcasts you love.

0:00:34 - Steve Gibson
From people you trust. This is TWIT.

0:00:42 - Leo Laporte
This is Security Now with Steve Gibson, episode 962, recorded Tuesday, February 20th, 2024: The Internet Dodged a Bullet. This episode is brought to you by Panoptica.

Panoptica, Cisco's cloud application security solution, provides end-to-end life cycle protection for cloud-native application environments. It empowers organizations to safeguard their APIs, serverless functions, containers and Kubernetes environments. Panoptica ensures comprehensive cloud security compliance and monitoring at scale, offering deep visibility, contextual risk assessments and actionable remediation insights for all your cloud assets. Powered by graph-based technology, Panoptica's attack path engine prioritizes and offers dynamic remediation for vulnerable attack vectors, helping security teams quickly identify and remediate potential risks across cloud infrastructures. A unified cloud-native security platform minimizes the gaps left by multiple point solutions, providing centralized management and reducing the non-critical vulnerabilities that come from fragmented systems.

Panoptica utilizes advanced attack path analysis, root cause analysis and dynamic remediation techniques to reveal potential risks from an attacker's viewpoint. This approach identifies new and known risks, emphasizing critical attack paths and their potential impact. This insight is unique and difficult to glean from other sources of security telemetry, such as network firewalls. Get more details on Panoptica's website, panoptica.app. It's time for Security Now, the show you wait for all week long. I know it, I know it. One of the nine best security shows in the world. Here's Steve Gibson, the man in charge.

What number are we on that list, by the way? Number one, I hope.

0:02:48 - Steve Gibson
No, no, we weren't. I think maybe seven, but it wasn't ranked in order of goodness. It might have been alphabetical. I think it was alphabetical.

0:02:57 - Leo Laporte
Well then we would come in towards the end. Applebee's Security Show, which is a wonderful, wonderful show, by the way.

0:03:07 - Steve Gibson
Yeah. So, oh boy, no one is going to believe this, literally. Well, you will once I'm done. But this has got to be the most underreported event that I can remember, and this sounds like hyperbole, and you know me, so it could well be. But the whole internet was at risk for the last couple of months, and a group of researchers silently fixed it. What? Because, if it had been discovered, the internet could have been shut down. It turns out there was a problem that has been in the DNS specification for 24 years. It's deliberate, and so it wasn't a bug, but it was a feature that a group of German researchers figured out how to exploit. And, well, I'm stepping on my whole storyline here because I'm so excited by this. It's like, wow. Okay. So this is episode 962 for February 20th, titled The Internet Dodged a Bullet.

0:04:29 - Leo Laporte
Not the first bullet, though, right? Didn't Dan Kaminsky save the internet back in 2008?

0:04:36 - Steve Gibson
No, Dan is a great publicist. Oh, was a great publicist. Yeah, I mean, and yes, the issue of DNS queries not having sufficient entropy was important. I mean, I wrote the, what the hell's it called, I forget now, the DNS Spoofability Test. A lot of work, in order to allow people to measure the entropy of the DNS servers that are issuing queries on their behalf. This blows that away. I mean, here I'm doing it again. Anyway, okay. Wow, I can't wait to hear this.

It is something. And, Leo, be careful not to look at the Picture of the Week, because this is a great one. So we're going to answer some questions. What's the worst mistake that the provider of remotely accessible residential webcams could possibly make? (And no, I've not had too much coffee.) Like, when Lisa said we don't want any cameras in our house, what was she worried would happen?

0:05:46 - Leo Laporte
Oh, yeah, yeah.

0:05:48 - Steve Gibson
What's the, yeah. What surprises did last week's Patch Tuesday bring? Why would any website put an upper limit on password length? And, for that matter, what's up with the "no use of special characters" business? Will Canada's ban on importing the Flipper Zero hacking gadgets reduce car theft? Yeah, exactly. Why didn't the Internet build in security from the start, like, what was wrong with them? How could they miss that? Doesn't Facebook's notice of a previous password being used leak information? Why isn't TOTP just another password that's unknown to an attacker? Can exposing SNMP be dangerous? Why doesn't email's general lack of encryption and other security make email-only login quite insecure? And finally, what major cataclysm did the Internet just successfully dodge? And is it even possible to have a minor cataclysm?

0:07:01 - Leo Laporte
All this and more from one of the best 17 cybersecurity podcasts in the world. That's a different one.

0:07:09 - Steve Gibson
Mine was nine. We were in the top nine. The SANS Institute says we're in the top 17.

0:07:14 - Leo Laporte
Well, but where? Are they alphabetical? They're in random order, but we're on there. That's the most important thing. I mean, the SANS Institute's pretty credible.

0:07:25 - Steve Gibson
SANS is good, but I'd rather be in the top nine. So today we'll be taking a number of deep dives, Leo, after we examine a potential solution to global warming and energy production. No, this is serious, as shown in our terrific Picture of the Week. I can't wait. And Leo, some things are just so obvious in retrospect. Oh wow. I know, this is a podcast for the ages. Thank God we got it in before 999; otherwise, you know, we'd have had to go into four digits.

0:08:09 - Leo Laporte
Well, we will continue. What is this episode? I have it now: 962.

0:08:15 - Steve Gibson
Yes, one by one, in just a moment.

0:08:20 - Leo Laporte
But first a word from our sponsor, Kolide. We love Kolide. What do you call endpoint security that works perfectly? Oh, it's a great product, it works perfectly, but makes users miserable. Well, that's a guaranteed failure, right? Because the user is going to work around it.

The old approach to endpoint security locked down employee devices and rolled out changes through forced restarts, but that just doesn't work. The old approach is terrible all around. IT is miserable; they've got a mountain of support tickets. Employees are miserable; they start using their own personal devices just to get the work done. Now you're really going to be in trouble.

And the worst thing is the guy or gal who's writing the checks, the executives. They opt out the first time they're late for a meeting. Look, it's pretty clear, and if you're in the business you know this: you can't have a successful security implementation unless you get buy-in from the end users. That's where Kolide is great. They don't just buy into it, they help you; they're part of your team.

Kolide's user-first device trust solution notifies users as soon as it detects an issue on their device. Not, you know, with a big klaxon or anything, but just in a nice way, and it says here's how you can fix this without bothering IT, here's how you can make your device more secure. And, by the way, users love it. They're on the team, they're learning, they're doing the stuff that makes your company succeed. And you'll love it, because untrusted devices are blocked from authenticating. The users don't have to stay blocked; they just fix it and they're in. So you've got a little carrot-and-stick thing going on. Kolide is designed for companies with Okta. It works on macOS, Windows, Linux, mobile devices, BYOD. It doesn't require MDM. If you have Okta, you've got half of the authentication job done. But you want a device trust solution that does the other half and respects your team. Kolide, that's the solution.

Kolide.com/securitynow. Go there, you get a demo. No hard sell, just a demo that explains how it works, and I think you'll agree it's a great idea. That's K-O-L-I-D-E dot com slash securitynow. We thank Kolide so much for their support. Now let's see if Steve has overhyped his Picture of the Week. What is this going to do for us? It's going to save the world: a solution to global warming and energy production. Well, of course, you put light on the solar panels and get infinite electricity.

0:11:00 - Steve Gibson
That's exactly right, Leo. So what we have, for those who are not looking at the video, is a picture of a rooftop with some solar panels mounted on it, as many are these days. What makes this one different is that it's surrounded by some high-intensity lights pointing down at the solar panels. Because, you know, why not?

0:11:25 - Leo Laporte
It's trying to generate day and night.

0:11:28 - Steve Gibson
And my caption here, I said: when you're nine years old, you wonder why no one ever thought of this before. Adults are so clueless.

0:11:37 - Leo Laporte
I bet you even knew better when you were nine.

0:11:40 - Steve Gibson
Well, you know, it was interesting because this put me in mind of the quest for perpetual motion machines.

0:11:47 - Leo Laporte
Remember those back in our youth.

0:11:50 - Steve Gibson
I mean, and like even Da Vinci was a little bit obsessed with these things. There was one really cool design where you had a wheel with marbles riding tracks, where the marbles could roll in and out, and the tracks were canted so that on one side of the wheel the marbles rolled to the outside, therefore their weight was further away from the center, so pulling down harder. And on the other side, the way the fixed tracks were oriented, the marbles would roll into the center, into the hub, so their weight was pulled up. There it is, you've just found the perpetual motion machine. Exactly. And of course it never stops turning.

And I mean, there again, I was, you know, five and interested in physics already, because I was wiring things up with batteries and light bulbs and stuff when I was four. So, you know, I spent some length of time puzzling that out. The good news is, now, as an adult, I don't give it a second thought.

0:13:06 - Leo Laporte
So you believe Newton's law, the conservation of energy. You just believe that to be true.

0:13:13 - Steve Gibson
Well, the problem we have with our Picture of the Week is that lights are not 100% efficient in converting electricity into light. For example, they get warm, so you have some energy lost as heat, and the physics of the solar conversion are also not completely efficient.

0:13:35 - Leo Laporte
Oh yeah, solar panels are no more than five or 10% efficient. Very inefficient. And there you go.

0:13:40 - Steve Gibson
So, you know, the idea would be you hook the output of the solar panel to the input of the lights, and then, you know, when the sun goes down, this just keeps going.

0:13:51 - Leo Laporte
Somebody has a good point, though. Maybe those lights are hooked up to the neighbor's electricity.

0:13:59 - Steve Gibson
Well, the only thing I could think, when I was trying to find a rationale, was that they might turn the lights on at night because, for some wacky reason, whatever they're powering from the solar panels needs to be on all the time. Or maybe they turn them on on cloudy days, again for the same reason. So it's sort of a sun substitute, because of, you know, dumbness. Because it is dumb. As you said, solar panels are way inefficient, so you're going to get much less juice out of the panels than you put into the array of lights which are busy lighting them.
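For those following along, the reason the loop always winds down comes from multiplying the two efficiencies together. This little sketch uses rough illustrative percentages, not measurements of any actual hardware:

```python
# Back-of-the-envelope arithmetic for why the rooftop rig can't self-power:
# each conversion stage loses energy, so the round trip multiplies the
# efficiencies. The figures below are rough illustrations only.
led_efficiency = 0.40    # electricity -> light (generous for an LED array)
panel_efficiency = 0.20  # light -> electricity (generous for a panel)

round_trip = led_efficiency * panel_efficiency

# Only about 8% of the electricity fed into the lights comes back out
# of the panels, so each cycle of the loop shrinks by a factor of ~12.
assert abs(round_trip - 0.08) < 1e-9
assert round_trip < 1.0
```

Since the round-trip factor is always well below 1, the scheme drains energy on every pass, which is exactly the perpetual-motion trap.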

0:14:38 - Leo Laporte
We're in a golden era for scammers. You're just going to see endless variations of this on YouTube and on Twitter, you know, using water to power your car, and this stuff just never dies. It never does. Wait, does that work? I've got to try that. No, no. Okay.

0:14:53 - Steve Gibson
No. So in reporting the following story, Leo, I'm reminded that your wife Lisa chooses wisely, so to speak, because this story is about Wyze, and about forbidding cameras of any kind inside your home. 9to5Mac's headline read "Wyze camera breach let 13,000 customers view other people's homes." Oh boy. Now, one of our listeners, a user of Wyze monitoring cameras, was kind enough to share the entire email he received from Wyze, but BleepingComputer's coverage included all of those salient details and added a bit more color, as you might expect. Here's what Bleeping wrote just yesterday. They said: Wyze camera glitch gave 13,000 users a peek into other homes. Wyze shared more details on a security incident that impacted thousands of users on Friday and said that at least 13,000 customers could get a peek into other users' homes.

The company blames a third-party caching client library recently added to its systems, which had problems dealing with a large number of cameras that came online all at once after a widespread Friday outage. Multiple customers have been reporting seeing other users' video feeds under the Events tab in the app since Friday (oh, I bet there were some events), with some even advising other customers to turn off the cameras until these ongoing issues are fixed. Wyze wrote, quote: "The outage originated from our partner AWS and took down Wyze devices for several hours early Friday morning. If you tried to view live cameras or events during that time, you likely weren't able to. We're very sorry for the frustration and confusion this caused. As we worked to bring cameras back online, we experienced a security issue. Some users reported seeing the wrong thumbnails and event videos."

0:17:30 - Leo Laporte
Yeah, that's not us. Whose bathtub

0:17:34 - Steve Gibson
is that? It's not mama walking around. Oh. "We immediately removed access to the Events tab and started an investigation." Unquote. We bravely did that. Okay. Wyze says this happened because the sudden increased demand (I'll get to my skepticism on that in a minute) led to the mixing of device IDs and user ID mappings.

You don't ever want that to happen with your camera system. It caused the erroneous connection of certain data with incorrect user accounts. As a result, customers could see other people's video feed thumbnails and even video footage after tapping the camera thumbnails in the Wyze app's Events tab. In emails sent to affected users, Wyze confessed, quote: "We can now confirm that, as cameras were coming back online, about 13,000 Wyze users received thumbnails from cameras that were not their own, and 1,504 users tapped on them. We've identified your Wyze account as one that was affected. This means that thumbnails from your events were visible in another Wyze user's account and that a thumbnail was tapped." Oh, that is a confession. Somebody was looking at your video, baby. "Most taps enlarged the thumbnail, but in some cases it could have caused an event video to be viewed."

0:19:19 - Leo Laporte
Zoom in on that one. What is that?

0:19:22 - Steve Gibson
In the corner over there? Yeah, that's right. Wyze has yet to share the exact number of users who had their video surveillance feeds exposed in the incident. The company has now added an extra layer of verification (oh, you betcha) for users who want to access video content via the Events tab, to ensure that this issue will not happen in the future. Is that really your video you're about to look at? Additionally, it adjusted systems to avoid caching during user-device relationship checks until it can switch to a new client library, one capable of working correctly during extreme events, quote, "like the Friday outage." Get rid of that old cache.

Okay. Now, I like Wyze, and their cameras are adorable little, beautifully designed cubes. You know, they look like something Apple would produce. But at this stage in the evolution of our understanding of how to do security on the Internet, I cannot imagine streaming to the cloud any content from cameras that are looking into the interior of my home. You know, maybe the backyard, but even then, who knows. The cloud and real-time images of the contents of our homes do not mix. I understand that for most users, the convenience of being able to log in to a Wyze camera monitoring website to remotely view one's residential video cameras is difficult to pass up. You know, seductive, if you weren't listening to this podcast.

And the whole thing operates with almost no setup, you know. And Wyze's response to this incident appears to be everything one could want. I mean, they've really been honest, which could not have been easy. The letter that our listener shared with me, unlike the letter that BleepingComputer quoted, said that his account had not been affected. So, you know, Wyze was on the ball, and they clearly got logging working, because they knew that 1,504 people did, you know, click on a thumbnail. Like, what is that? That doesn't look like my living room. Click. Oh, it's not my living room. Wow, who's that?

Anyway, I should add, however, that I'm a bit skeptical about the idea that simply overloading a caching system could cause it to get its pointers scrambled and its wires crossed, thus causing it to get its users confused. You know, if it really did happen that way, it was a crappy cache, and I'm glad they're thinking about revisiting this whole thing, because there was a problem somewhere. Anyway, I do like the cameras. I'm just not going to let them be exposed to the outside world. That makes no sense. Okay.
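To illustrate the class of bug being described here (this is a hypothetical sketch, not Wyze's actual code or architecture), here's how a cache that fails to re-verify ownership can serve one user another user's media, and how an extra verification layer of the kind Wyze described prevents it:

```python
# Hypothetical sketch of a thumbnail cache that can cross-wire users.
# If entries are keyed by a recyclable slot rather than scoped per user,
# a stale entry can hand one user another user's camera thumbnail
# unless ownership is re-checked on every fetch.
class ThumbnailCache:
    def __init__(self):
        self._by_slot = {}  # slot id -> (owning user_id, thumbnail)

    def store(self, slot, user_id, thumbnail):
        self._by_slot[slot] = (user_id, thumbnail)

    def fetch(self, slot, user_id):
        entry = self._by_slot.get(slot)
        if entry is None:
            return None
        cached_user, thumbnail = entry
        # The bug class: returning `thumbnail` here without this check
        # leaks data. The extra verification layer refuses to serve
        # media that the requesting user does not own.
        if cached_user != user_id:
            return None
        return thumbnail

cache = ThumbnailCache()
cache.store(slot=7, user_id="alice", thumbnail="alice_frontdoor.jpg")
# Slot 7 gets recycled for Bob during a reconnect storm:
assert cache.fetch(slot=7, user_id="bob") is None      # ownership check holds
assert cache.fetch(slot=7, user_id="alice") == "alice_frontdoor.jpg"
```

The design point is that a cache key for per-user media must include the user's identity, or the ownership must be re-verified on every read; relying on slot identity alone is exactly the kind of thing that breaks under a flood of reconnecting devices.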

0:22:35 - Leo Laporte
By the way, Jason on MacBreak Weekly recommended a new Logitech camera that's HomeKit-enabled, which means it doesn't send video outside your house. Or I guess it does, but through Apple's encrypted iCloud, something like that.

0:22:52 - Steve Gibson
And again, done right, probably. Done right, of course.

0:22:55 - Leo Laporte
Yeah. So HomeKit and security and Apple, that seems like a good choice. It's too bad, because the Wyze stuff is very inexpensive. I've been recommending it for years, and using it too. In fact, it's practically free; it feels like when you order, they send you money. Yeah, they are.

0:23:14 - Steve Gibson
And they're just beautiful little things. Yeah. It's too bad. Okay. We haven't checked in on a Microsoft Patch Tuesday for a while. Last week's update was notable for its quiet inclusion of a mitigation for what the industry's DNS server vendors have described as, quote, "the worst attack on DNS ever discovered," unquote. It seems to have slipped under much of the rest of the industry's radar, but not this podcast's, since it could have been used to bring down the entire internet. It is today's topic, which we will, of course, be getting back to later. But first we're going to take a look at Patch Tuesday and then answer some questions and have some interesting deep dives. The, you know, DNS attack didn't happen, so the internet is still here. Thus Microsoft doesn't get off the hook for fixing another round of problems with its products.

Last week's patches included fixes for a pair of zero-days that were being actively exploited, even though they were not the worst from a CVSS rating standpoint. That honor goes to a different pair, which earned 9.8 CVSS ratings. Overall, last Tuesday Microsoft released patches to repair a total of 73 flaws across its product lineup. Of those 73, five are critical, 65 are important, and the last three have moderate severity. Fixed separately were 24 flaws in Edge which had been repaired in the month intervening since last month's updates. The two zero-days are a Windows SmartScreen Security Feature Bypass carrying a CVSS of only 7.6, and an Internet Shortcut Files Security Feature Bypass with a CVSS of 8.1. The first one of those two, the Windows SmartScreen Security Feature Bypass, allows malicious actors to inject their own code into SmartScreen to gain code execution, which could then lead to data exposure. The loss of system availability could also happen, or both. Now, the reason it was only rated at 7.6 is that for the attack to work, the bad guys needed to send the targeted user a malicious file and convince them to open it. So it wasn't, like, you know, just receive a packet and it's the end of the world, which actually is what happened with this DNS business we'll get to. The other zero-day permitted unauthenticated attackers to bypass displayed security checks by sending a specially crafted file to a targeted user, but once again, the user would somehow need to be induced to take the action of clicking on the link that they'd received. So not good. It was being actively exploited in the field; both of these were zero-days being used. Still, this one rated an 8.1, and it's fixed as of last Tuesday.

The five flaws deemed critical, in order of increasing criticality, were: a Windows Hyper-V Denial of Service Vulnerability, which got itself a score of 6.5, so critical, but not a high CVSS; the Windows Pragmatic General Multicast (PGM) Remote Code Execution Vulnerability, which scored a 7.5; a Microsoft Dynamics Business Central/NAV Information Disclosure Vulnerability, which came in with an 8.0; and finally, the two biggies, both getting a 9.8.

To few people's surprise, we have another Microsoft Exchange Server Elevation of Privilege Vulnerability and a Microsoft Outlook Remote Code Execution Vulnerability. Both of those 9.8, both of those easy to do, both of those now resolved as of last week. A senior staff engineer at Tenable, the security firm, said in a statement that this was very likely to be exploited, and that exploiting the vulnerability could result in the disclosure of a targeted user's NTLM (you know, NT LAN Manager) version 2 hash, which could be relayed back to a vulnerable Exchange Server in an NTLM relay or pass-the-hash attack to allow the attacker to authenticate as the targeted user. So it was a way of getting into Microsoft Exchange Server through this 9.8, you know, basically a backdoor vulnerability. And, believe it or not, last Tuesday's updates also fixed 15 (I was like, what, really? Yes, 15) remote code execution flaws in Microsoft's WDAC OLE DB provider for SQL Server that an attacker could exploit by tricking an authenticated user into attempting to connect to a malicious SQL Server via OLE DB. So, 15 remote code execution flaws. It must be that someone found one and said, wow, let's just keep looking. And the more they looked, the more they found. So Microsoft fixed all of those. And rounding off the patch batch, as I mentioned, is a mitigation, not a total fix, for a 24-year-old fundamental design flaw, not an implementation flaw, a design flaw in the DNS spec which, had they known of it, bad guys could have used to exhaust the CPU resources of DNS servers, locking them up for up to 16 hours after receiving just a single DNS query packet.

We'll be talking about this bullet that the internet dodged as we wrap up today's podcast. But first, Ben wrote: Hey, Steve, love the show. I hated the security courses I took in school, but you make it way more interesting. I haven't missed a podcast from you in three years.

My question is: I was recently going through my password vault and changing duplicate passwords. I encountered a lot of sites with length and symbol restrictions on passwords, for example, no passwords longer than 20 characters, or disallowing certain symbols. My understanding of passwords is that they all get hashed to a standard length regardless of the input, so it can't be for storage space reasons. Why the restrictions? Even if I input an eight-character password, the hash will end up being 128 bits or whatever. I would love some insight, because it makes no sense to me.

Thanks, Yuri. So that's his real name. Okay, so, Yuri, you're not alone. When you look closely at it, none of this really makes any sense to anyone. I think the biggest problem we have today is that there are no standards, no oversight and no regulation, and everything that is going on behind the scenes is opaque to the user. We only knew in detail, for example, what LastPass was doing, and the entire architecture of it, because it really mattered to us, and back then we drilled down and got answers from the guy that wrote that stuff, and so, you know, we really understood what the algorithms were. But on any random given website, we have no idea. There's no visibility.

0:31:38 - Leo Laporte
You can also kind of feel how antiquated that is when you say the words "the guy who wrote it." Besides your software, SpinRite and all of that, nothing's written by one person anymore. That's nuts, right?

0:31:59 - Steve Gibson
You're right, yeah, you're right.

0:32:01 - Leo Laporte
The fact that Joe Siegrist wrote LastPass all those years ago by himself is amazing.

0:32:06 - Steve Gibson
Yeah. So everything is opaque to the user. We hope that passwords are stored on a site's server in a hashed fashion, but we don't know that. And assuming that they are hashed, we don't know whether they're being hashed on the client side, in the user's browser, or on the server side after they arrive in the clear over an encrypted connection. That much we do know, because we can see that the connection is encrypted. A very, very long time ago, back at the dawn of Internet identity authentication, someone might have been asked by a customer support person over the phone to prove their identity by reading their password aloud. I'm sure that happened to a few of us; I'm sure I was asked to do it, like, at the very beginning of all this. The person at the other end of the line would be looking at it on their screen, which they had pulled up for our account, to see whether what we read to them over the phone matched what they had on record. That's the way it used to be. Of course, as we know, this could only be done if passwords were stored as they were actually given to the server, in the clear, as indeed they originally were. And in that case, you can see why the use of special characters with names like circumflex, tilde, pound, ampersand, back-apostrophe and left curly brace might be difficult for some people at each end of the conversation to understand. A tilde? What's a circumflex? So restricting the character set to a smaller common set of characters made sense back then. And we also know that in those very early days, a web server and a web presence was often just a fancy graphical front end to an existing monstrous old-school mainframe computer, you know, up on an elevated floor with lots of air conditioning, and that computer's old account password policies predated the internet.
So even after a web presence was placed in front of the old mainframe, its ancient password policies were still being pushed through to the user. Today, we all hope that none of that remains.

But if we've learned anything on this podcast, it's to never discount the power and the strength of inertia. Even if there's no longer any reason to impose password restrictions of any kind (well, other than a minimum length; that would be good, you know, because the password "A" is not a good one), restrictions may still be in place today simply because they once were. And hopefully all consumers have learned the lesson to never disclose a password to anyone at any time, for any reason. We see reminders from companies which are working to minimize their own fraud, explicitly stating that none of their employees will ever ask for a customer's password under any circumstances. And we're hoping that's because they don't have the passwords under any circumstances; they were hashed a long time ago, and only the hashes are being stored.

The gold standard for password processing is for JavaScript or WebAssembly running on the client, that is, the user's browser, to perform the initial password qualification and then its hashing. Some minimum length should be enforced. All characters should be allowed, because why not? And requirements of needing different character families, upper and lower case, numbers and special symbols, also make sense. That will protect people from using 123456 or "password" as their password. And those minimal standards should be clearly posted whenever a new password is being provided. It's really annoying, right, when you're asked to create an account or change your password and then, after you do, it comes up and says, oh, we're sorry, that's too long, or, oh, you didn't put a special character in. It's like, why didn't you tell me first? Anyway, ideally there should be a list of password requirements with a checkbox appearing in front of each requirement, dynamically checked as its stated requirement is met, and the submit button should be grayed out and disabled until all requirements have checkmarks in front of them, showing that those requirements have been met. A password strength meter would be another nice touch. Once something that has been submitted from the user arrives at the server, then high-power systems on the server side can hash the living daylights out of whatever arrives before it's finally stored.
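As a concrete sketch of that client-side qualification step (the rule names and the 12-character minimum here are illustrative choices, not any published standard), the per-requirement checkboxes described above map naturally onto a dictionary of booleans:

```python
# A minimal sketch of client-side password qualification: enforce a
# minimum length, allow every character, and require each character
# family. Each entry below corresponds to one on-screen checkbox.
import string

def qualify(password: str, min_len: int = 12) -> dict:
    return {
        "long enough":   len(password) >= min_len,
        "has lowercase": any(c in string.ascii_lowercase for c in password),
        "has uppercase": any(c in string.ascii_uppercase for c in password),
        "has digit":     any(c in string.digits for c in password),
        "has symbol":    any(not c.isalnum() for c in password),
    }

def all_met(password: str) -> bool:
    # The submit button stays disabled until every checkbox is ticked.
    return all(qualify(password).values())

assert not all_met("password")           # fails length, case, digit, symbol
assert all_met("Corr3ct-Horse-Battery")  # every requirement satisfied
```

In a real page this would run in JavaScript or WebAssembly on each keystroke, updating the checkbox list dynamically rather than rejecting the password after submission.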

But since we also now live in an era where mass data storage has become incredibly inexpensive, and where there's very good reason to believe that major world powers are already recording pretty much everything on the internet, all internet transactions, in the hope of eventually being able to decrypt today's quantum-unsafe communications once the use of quantum computers becomes practical, there's a strong case to be made for having the user's client hash the qualifying password before it ever leaves their machine to traverse the internet. Once upon a time we coined the term PIE, P-I-E, Pre-Internet Encryption. So this is like that. This would be, I guess, pre-Internet hashing. But, you know, JavaScript, or preferably WebAssembly, should be used to hash the user's qualifying input.
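A minimal sketch of that pre-Internet hashing idea, with Python's standard hashlib standing in for what browser JavaScript or WebAssembly would actually run; the salt choices and iteration counts here are illustrative assumptions, not recommendations:

```python
# Pre-Internet hashing sketch: the client derives a hash from the
# password before it leaves the machine, and the server hashes that
# value again before storing it, so the plaintext never crosses the wire.
import hashlib

def client_prehash(password: str, site: str) -> bytes:
    # Salting with the site name keeps the same password from producing
    # the same prehash at two different sites.
    return hashlib.pbkdf2_hmac("sha256", password.encode(),
                               site.encode(), 100_000)

def server_store(prehash: bytes, user_salt: bytes) -> bytes:
    # The server only ever sees the client's prehash, which it
    # strengthens again with a per-user salt before storage.
    return hashlib.pbkdf2_hmac("sha256", prehash, user_salt, 100_000)

prehash = client_prehash("hunter2", "example.com")
stored = server_store(prehash, b"per-user-salt")

assert prehash != stored
# Verification is deterministic: the same password reproduces the record.
assert server_store(client_prehash("hunter2", "example.com"),
                    b"per-user-salt") == stored
```

Even if a recording adversary captures the traffic, what crossed the internet was a site-specific derived value, not the password itself, and the server-side re-hash means a stolen database still can't be replayed as-is.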

Much of that gold standard that I just described is user-facing, and its presence or absence is obvious to the user. Unfortunately, we rarely see that going on today. It would be necessary to reverse engineer individual web applications if we wished to learn exactly how each one operates. Since avoiding the embarrassment of breaches and disclosures is in each company's best interest, and since none of the things I've described is at all difficult to deploy today, we can hope that the need to modernize the user's experience while improving their security will gradually overcome the inertia that will, you know, always be present to some degree. We'll always be dragging forward some of the past, but at some point everything should be catching up. Okay. Philippe Moffra said: Hello, Steve.

First of all, I'd like to thank you and Leo for the great show. I'd like to bring you something very interesting that recently happened on this side of the border. The Canadian industry minister, Francois-Philippe Champagne, proudly tweeted on February 8th that they are banning the importation, sale and use of hacking devices such as the Flipper Zero that are being widely used for auto theft. This is an attempt to respond to the huge increase in auto thefts here in Canada. Even if I believe it's good that the government is trying to address this issue, I find myself thinking that, rather than blocking the usage of such devices, it would be better if the industry was required to make things right by design. This pretty much aligns with last week's Security Now, episode 960, regarding security on PLCs, you know, programmable logic controllers, as we see no commitment from those industries to make their products safe by design. Anyways, I really appreciate it. I wanted to share this with you and get your perspective. Thank you again for the show. Looking forward to 999 and beyond. Best regards, Philippe.

Okay, in past years we've spent some time looking closely at the automotive remote key unlock problem. What we learned is that it is actually a very difficult problem to solve fully, and that the degree of success that has been obtained by automakers varies widely. Some systems are lame and others are just about as good as can be, and we've seen even very cleverly designed systems, as good as they could be, fall to ingenious attacks. Remember the one where a system was based on a forward rolling code that was created by a counter in the key fob being encrypted under a secret key, and the result of that encryption was transmitted to the car. This would create a completely unpredictable sequence of codes. Every time the unlock button was pressed, the counter would advance and the next code would be generated and transmitted, and no code ever received by the auto would be honored a second time. So anyone snooping and sniffing the radio could only obtain a code that had just been used and would thus no longer be useful.

So what did the super clever hackers do? They created an active attack. When the user pressed the unlock button, the active attack device would itself receive the code while simultaneously emitting a jamming burst to prevent the automobile from receiving it. So the car would not unlock. Since that happens when you're too far away from the car, and it's not that uncommon, the user would just shrug and press the unlock button again. This time the active attacking device would receive the second code, emit another jamming burst to prevent the car from receiving the second code, then itself send the first code it had received to unlock the car. So the user would, you know, have to press the button twice. But they just figured the first one didn't make it second one unlocked the car.

By implementing this bit of subterfuge, the attacker is now in possession of a code that the key fob has issued, thus it will be valid, but the car has never seen it, and it's the next key in the sequence from the last code that the car did receive. It is diabolically brilliant, and I think it provides some sense for what automakers are up against. From a theoretical security standpoint, the problem is that all of the communication is one way, key fob to auto. The key fob is issuing a one-way assertion instead of a response to a challenge.
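The jam-and-replay attack just described can be simulated in a few lines. This is a toy sketch, with HMAC standing in for the fob's keyed encryption of its counter and an acceptance window that is an assumption of the sketch:

```python
import hmac, hashlib

SECRET = b"fob-secret"  # shared by fob and car (hypothetical value)

def code_for(counter: int) -> bytes:
    # HMAC of the rolling counter stands in for the keyed encryption.
    return hmac.new(SECRET, counter.to_bytes(8, "big"), hashlib.sha256).digest()

class Car:
    def __init__(self, window: int = 16):
        self.last_seen = 0      # highest counter value already honored
        self.window = window    # how far ahead the car will look
    def try_unlock(self, code: bytes) -> bool:
        # Accept only codes ahead of the last one honored, within the window.
        for c in range(self.last_seen + 1, self.last_seen + 1 + self.window):
            if hmac.compare_digest(code, code_for(c)):
                self.last_seen = c   # this and all earlier codes are now dead
                return True
        return False

car = Car()
code1 = code_for(1)    # user presses unlock; attacker jams and records it
code2 = code_for(2)    # user presses again; attacker jams, records it,
car.try_unlock(code1)  # ...and replays code1 so the car finally opens

attacker_success = car.try_unlock(code2)  # the never-received code still works
replay_fails = car.try_unlock(code1)      # the honored code is dead, as designed
```

The rolling-code logic itself behaves exactly as intended; the attacker wins only because the second code was never delivered to the car.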

What's needed to create a fully secure system would be for the key fob's unlock button to send a challenge request to the car. Upon receiving that request, the car transmits a challenge in the form of an unpredictable value resulting from encrypting a counter. The counter is monotonic, upward counting 128 bits, and it will never repeat during the lifetime of the universe, let alone the lifetime of the car or its owner. Upon receiving that unique challenge code sent by the car, the key fob encrypts that 128-bit challenge with its own secret key and sends the result back to the car. The car, knowing the secret kept by its key fobs, performs the same encryption on the code it sent and verifies that what the key fob has sent it was correct.

Now, I cannot see any way for that system to be defeated. The car will never send the same challenge, the key will never return the same response, and no amount of recording that challenge and response dialogue will inform an attacker of the proper responses to future challenges. If some attacker device blocks the reception, the car will send a different challenge, the key will respond with a different reply, and once that reply is used to unlock the car, the car will no longer accept it again.
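The challenge-response scheme just described can be sketched like this. Again HMAC-SHA256 stands in for the keyed encryption of both the car's counter and the fob's response, and the key names are placeholders:

```python
import hmac, hashlib, itertools

FOB_SECRET = b"per-fob secret"       # shared by the car and its fobs
CAR_CHALLENGE_KEY = b"car-only key"  # used only to derive challenges

class Car:
    def __init__(self):
        self.counter = itertools.count(1)  # monotonic, never repeats
        self.outstanding = None
    def issue_challenge(self) -> bytes:
        n = next(self.counter)
        # Encrypting an ever-increasing counter yields an unpredictable,
        # never-repeating challenge (HMAC stands in for the cipher here).
        self.outstanding = hmac.new(CAR_CHALLENGE_KEY, n.to_bytes(16, "big"),
                                    hashlib.sha256).digest()
        return self.outstanding
    def try_unlock(self, response: bytes) -> bool:
        if self.outstanding is None:
            return False
        expected = hmac.new(FOB_SECRET, self.outstanding, hashlib.sha256).digest()
        self.outstanding = None  # one shot: a replayed response is worthless
        return hmac.compare_digest(response, expected)

def fob_respond(challenge: bytes) -> bytes:
    return hmac.new(FOB_SECRET, challenge, hashlib.sha256).digest()

car = Car()
ch = car.issue_challenge()
resp = fob_respond(ch)
unlocked = car.try_unlock(resp)   # fob proved knowledge of the secret
replayed = car.try_unlock(resp)   # immediate replay is refused
ch2 = car.issue_challenge()
stale = car.try_unlock(resp)      # a recorded response never matches a new challenge
```

Recording the exchange gains an attacker nothing, since each challenge is fresh and each response is bound to it.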

So the only problem with this system is that now both endpoints need to contain transceivers capable of receiving and transmitting. Previously the fob only had to transmit and the car only had to receive. So transceivers add some additional cost, though not much in production, since both already contain radios anyway. But what this does mean is that a simple software upgrade to the existing hardware installed base will not and cannot solve this problem. I doubt it's possible to create a secure one-way system that's safe against an active attacker while still reliably unlocking the vehicle without unduly burdening its user. The system I've just described is not rocket science. It's what any crypto-savvy engineer would design. And since this problem is also now well understood, I would be surprised if next-generation systems which fix this in this way, once and for all, were not already on the drawing board and headed into production. But that doesn't solve the problem, which exists and which will continue to exist for all of today's automobiles.

So now let's come back to Philippe's point about Canada's decision to ban the importation of devices such as the Flipper Zero. We know that doesn't solve the problem, but will it reduce the severity of the problem? Yeah, probably somewhat. Kits will spring up to allow people to build their own. Canada is a big place. There's nothing to prevent someone from firing up manufacturing and creating homegrown Flipper Zeros or the like. It's an open-source device; the design is all there. What we keep seeing, however, is that low-hanging fruit is the fruit that gets picked and eaten, and many people won't take the time or trouble to exert themselves to climb a tree to obtain the higher-hanging fruit. You know, hand them a piece of fruit? Sure. Work for it? Perhaps later. So I would argue that making car theft even somewhat more difficult will likely be a good thing. And the Flipper Zero is at best a hacker's gadget. You know, it's not as if it has huge non-hacker applications. No, but it's a lot of fun. It is a lot of fun.

0:49:03 - Leo Laporte
And I was going to use it on my car and then Russell said don't, because you could actually lock yourself out of the car, because the car's security features will see that you're doing it and will prevent you from using your regular fob after that. So I declined, but I was able to get into the office. I was able to clone our key fob and use it.

0:49:23 - Steve Gibson
Oh yeah, it is. I did a little bit of brushing up on it yesterday. It is a very cool device.

0:49:29 - Leo Laporte
I gave mine to Father Robert, so it's now in Italy.

0:49:33 - Steve Gibson
You gave it a good home.

0:49:34 - Leo Laporte
I think he's really going to get a lot of use out of it.

0:49:38 - Steve Gibson
And actually you know, from a hardware hacking standpoint, all of the little GPIO pins along the back.

0:49:44 - Leo Laporte
It's really cool. It's very, very cool. It's a great device. I think you could duplicate it with an Arduino or any other variety of devices. It's not that unique.

0:49:51 - Steve Gibson
But, as we've seen, packaging counts. Like, remember the picture of that TPM buster that we talked about last week, where it had the little row of pogo pins along one side, and it just looked adorable, and it's like, wow, that's very cool, it's cute. So, Leo, let's take a break, and then what are we going to talk about next? Oh, we're going to talk about why the internet didn't start off having security in the first place.

0:50:21 - Leo Laporte
Oh, you know, I interviewed, I guess it was Vince Cerf, the father of the internet, back in the day, and I asked him, you know, why didn't you think of putting crypto in? And he said, we just didn't; no one knew. He said, we would now. We now know how people use the internet, but at the time... Anyway, I'd be very curious as to what you have to say about this one.

But first a word from our sponsor, Vanta. From dozens of spreadsheets to fragmented tools and manual security reviews. Are you familiar? Managing the requirements for modern compliance and security programs has just gotten out of hand. Vanta, V-A-N-T-A, is the leading trust management platform that helps you centralize your efforts to establish trust and enable growth across your organization. G2 loves Vanta year after year. Here's a very typical review from a chief technology officer: There is no doubt about Vanta's effect on building trust with our customers. As we work more with Vanta, we can provide more information to our current and potential customers about how committed we are to information security, and Vanta is at the heart of it. She was very happy with Vanta.

Automate up to 90% of compliance, strengthen your security posture, streamline security reviews and reduce third-party risk, all with Vanta. And speaking of risk, Vanta is offering you a free risk assessment. Very easy to do, no pressure: vanta.com/securitynow. Generate a gap assessment of your security and your compliance posture. This is information I know nobody wants to know, but you need to know. Discover your shadow IT and understand the key actions to de-risk your organization. It's all at vanta.com/securitynow. Your free risk assessment awaits at vanta.com/securitynow. I love their slogan: Compliance that doesn't SOC 2 much. I guess you know it's kind of an in-joke.

0:52:35 - Steve Gibson
I like it.

0:52:36 - Leo Laporte
All right, Steve, on with it.

0:52:38 - Steve Gibson
So M. Scott tweets: Steve, I'm wondering about your thoughts. The cybersecurity community seems to bemoan the lack of security baked into the original internet design and ceaselessly encourages designers of new technology to bake in security from the get-go. Well, we certainly agree with the second half of that. Several books I'm reading for a cyber and information warfare class suggest that government regulation to require security is the answer and should have been placed on the internet in the initial design. However, I suspect if security had been a mandate on day one, the robust cyber community we have today would not exist. I see the internet as more of a wicked problem where solutions and problems emerge together but cannot be solved up front. Your thoughts? Thank you for your continued service to the community. Okay. I suppose that my first thought is that those writing such things may be too young to recall the way the world was when the internet was being created. I'm not too young. I was there, looking up at the IMP, the Interface Message Processor, a big, imposing white cabinet sitting at Stanford's AI lab in 1972.

As the thing known at the time as DARPAnet was first coming to life. The problems these authors are lamenting not being designed in from the start didn't exist before the internet was created. It's the success of the internet and its applications that created the problems, and thus the needs we have today. Also, back then we didn't really even have crypto yet. It's nice to say, well, that should have been designed in from the start, but it wasn't until four years later, in 1976, that Whit Diffie, Marty Hellman and Ralph Merkle invented public key crypto. And a huge gulf exists between writing the first academic paper to outline a curious and possibly useful theoretical operation in cryptography and designing any robust implementation into network protocols. No one knew how to do that back then, and we're still fumbling around finding mistakes in TLS decades later. And one other thing these authors may have missed in their wishful timeline is that the applications of cryptography were classified by the US federal government as munitions. In 1991, 19 years after the first IMPs were interconnected,

Phil Zimmermann, PGP's author, had trouble with the legality of distributing PGP over the internet. Today, no one would argue against security always being designed in from the start, and we're still not even doing that very well. We're still exposing unsecurable web interfaces to the public-facing internet, and that's not the internet's fault. So anyone who suggests that the internet should have been designed with security built in from the start would have to be unaware of the way the world was back when the internet was actually first being built. Some of us were there.

0:56:27 - Leo Laporte
Yeah, and you know Vince Cerf did say the one thing he would have added had they been able to was encryption.

0:56:34 - Steve Gibson
It's exactly right.

0:56:34 - Leo Laporte
Yeah, but they just couldn't at the time.

0:56:37 - Steve Gibson
No, I mean, it didn't exist. It literally didn't. Sure, we knew about the Enigma machine, but you know, you're not going to have gears and wires on your computer.

0:56:49 - Leo Laporte
You don't have time to do that. You know, it's like, what? No.

0:56:51 - Steve Gibson
So, I mean, you know, we were playing around with these ideas. And Leo, remember also when HTTPS happened, when Navigator came out and SSL was created, there was a 40-bit limit on any cipher that would leave the US. So, you know, we had 128 bits, as long as it didn't try to make a connection outside the US. I mean, it was a mess.

0:57:21 - Leo Laporte
It was, it was yeah.

0:57:25 - Steve Gibson
Jonathan Haddock said: Hi, Steve. Thanks for the podcast; I've been listening from very early on. I was just listening to episode 961 last week and the section about Facebook telling users they entered an old password. A downside of this practice is that an attacker learns the incorrect password was one that the person has used. Given the prevalence of password reuse, the attacker is then in possession of a confirmed previous password that could still work for that user elsewhere. A middle ground would be to simply say when the password was last changed. That way, the user experience is maintained, but the previous validity of the password is not exposed. Thanks again, Jonathan. And I think he makes a very good point.

Telling a user only when their password was last changed is not as useful to an attacker, but confirming that a guess was a previous password definitely could be. That is, just telling the user, your password was changed last Tuesday, okay, that's the safer disclosure. The difference that Jonathan is pointing out is that if a bad guy is guessing passwords and is told by Facebook, close, you got close, this is the password from last week, well, now the attacker knows he can't get into Facebook with that password, but he may be able to get into something else this user logs into with that password, if they were reusing that password elsewhere. So it's weird, because something was bothering me a little bit when we talked about this last week, like, why isn't this a problem?
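Jonathan's suggested middle ground is easy to sketch: store only the current password hash plus the date it was set, and return the identical message for every failed attempt, old password or not. All names, the KDF choice, and the iteration count here are assumptions of the sketch:

```python
import hashlib, hmac, os
from datetime import date

class Account:
    """Sketch of the middle ground: on a failed login, reveal only when the
    password last changed, never whether the guess matched an old password."""
    def __init__(self, password: str, today: date):
        self.salt = os.urandom(16)
        self.current = self._hash(password)
        self.changed_on = today
    def _hash(self, pw: str) -> bytes:
        return hashlib.pbkdf2_hmac("sha256", pw.encode(), self.salt, 100_000)
    def change_password(self, new_pw: str, today: date) -> None:
        self.current = self._hash(new_pw)
        self.changed_on = today
    def login(self, attempt: str) -> str:
        if hmac.compare_digest(self._hash(attempt), self.current):
            return "ok"
        # Deliberately the same message for every wrong guess, so nothing
        # about password history leaks to a guesser.
        return f"Incorrect password. It was last changed on {self.changed_on}."

acct = Account("hunter2", date(2024, 1, 1))
acct.change_password("correct horse", date(2024, 2, 13))
granted = acct.login("correct horse")
old_guess = acct.login("hunter2")     # indistinguishable from any wrong guess
random_guess = acct.login("letmein")
```

Since no hashes of prior passwords are even retained, the service cannot confirm a guess was once valid, which is exactly the leak Jonathan flagged.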

It turns out there is some information leakage from Facebook telling their users that. It's probably still worth doing, but I'm glad that Jonathan brought this up. And while we're sharing brilliant observations from our listeners, here's one from Sean Milochic, on the topic of passwordless, email-only login, that is, using email links for login, which we talked about last week. He said the largest benefit to the companies is that this practically eliminates password sharing.

1:00:11 - Leo Laporte
Oh, you're right. Isn't that cool?

1:00:13 - Steve Gibson
Yes. He says, it's one thing to give someone your password for a news or a streaming site. It's quite another to give them access to your email, and probably the ability to reset your banking password, among other things. So, Sean, brilliant point, thank you. Lars Exeler said: Hi, Steve. Love the show, and I just became a Club TWiT member. He said: I'm wondering about TOTP as a second factor.

So a password is the something-you-know factor. A TOTP is the something-you-own factor. The question is, isn't the TOTP just based on a lengthy secret in the QR code? So, in my mind, it's just like a second password with the convenience of presenting itself as six digits, because it's combined with the actual time. What am I missing here? Regards from Germany, Lars.

Okay. So the password is indeed something you know, but the way to think of the TOTP, which is based upon a secret key, is as transient, time-limited proof of something else you own. A password, as we know, can be used over and over. But in any properly designed TOTP-based authentication, the six-digit token not only expires every 30 seconds, but each such token is invalidated after its first use. This means that even an immediate replay attack which still fits inside that expiration window will be denied. It's those two distinctions that make the TOTP a very powerful and different form of second factor.
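The TOTP construction being discussed is standardized in RFC 6238, and it fits in a dozen lines: an HMAC-SHA1 over the number of 30-second intervals since the epoch, dynamically truncated to six digits. This sketch uses the RFC's own published test key:

```python
import base64, hashlib, hmac, struct, time
from typing import Optional

def totp(secret_b32: str, for_time: Optional[float] = None,
         step: int = 30, digits: int = 6) -> str:
    """RFC 6238 TOTP: HMAC-SHA1 of the current 30-second interval number,
    truncated to a few decimal digits."""
    key = base64.b32decode(secret_b32.upper())
    counter = int((time.time() if for_time is None else for_time) // step)
    mac = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F  # "dynamic truncation" per RFC 4226
    code = int.from_bytes(mac[offset:offset + 4], "big") & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# The shared secret from the QR code, base32-encoded (RFC 6238's test key):
secret = base64.b32encode(b"12345678901234567890").decode()
code = totp(secret, for_time=59)
# A real server must additionally remember the last counter value it
# accepted, so that each token is honored exactly once even within its
# 30-second window, which is the property Steve describes.
```

Note that the six digits are derived from the secret plus the clock; the secret itself never transits the wire, which is what separates a TOTP from "just a second password."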

1:02:30 - Leo Laporte
Yeah. If you were sharing the secret every time, then it would just be another password. But you're not, right? It's not going back and forth.

1:02:39 - Steve Gibson
You are. You are proving that you know the secret without sharing it, and that's key. One step better. Gavin Lano wrote, and this is a long one, but this needs to be shared: Dear Steve, I first came across GRC in the early 2000s, for recovering hard disks and your ShieldsUP! scanner, just as I was starting my career in IT and IT security. About a year ago I rediscovered you on the Security Now podcast and also discovered the wider TWiT network. The Security Now podcast over the last year has become the highlight of my week, and I even go so far as to have a diary note in my calendar (Nice!) so I can try to listen to the show live. (Thank you!) It is late night where I live.

Gavin hails from Guernsey. He said: This week I discovered an issue with routers provided by one of our local ISPs, and I thought, if there was ever a reason to drop you a message worthy of your time, this was it. And I would add, even worthy of our listeners' time. He said: Over my 30 years or so in IT, I have, as I'm sure we all have, gathered a number of people who we look after for home-user, family-and-friends IT support. I noticed over the last week or so that some of my 400 or so private clients were reporting strange issues. They reported getting warnings when browsing that sites were dangerous, were getting a lot of 404s from streaming services which weren't working, and connections to email services via IMAP secured with SSL were not working: connection and certificate errors and such. When I sat back and thought about it, all the users were on the same ISP and were all using the ISP-provided router, he said.

Side note: I usually recommend replacing the ISP's router with something more substantial, but when it's working fine or the user doesn't need anything better, you know, like a single elderly home user with an iPad and nothing else, it's hard to justify the spend. So, he says, I got access to one of the users' routers, and after some investigation I found that the DNS servers listed just didn't look right. A traceroute showed the server was in the Netherlands, and we're in Guernsey. The listed owner of the IP had nothing to do with the local ISP. Reverting the DNS servers to the ISP's addresses resolved all the issues, and, he says, later I reset them to Quad9. And here it is: I also found that the ISP had left SNMP enabled on the WAN interface with no restrictions, and the username and password to gain access to the router were both admin.
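Gavin's sanity check, spotting resolvers that "just didn't look right," can be sketched as a simple allowlist comparison. The allowed networks below are assumptions: Quad9's published addresses plus a documentation prefix standing in for the ISP's own range:

```python
from ipaddress import ip_address, ip_network

# Resolvers considered legitimate: Quad9, plus 192.0.2.0/24 as a
# stand-in for the ISP's own resolver range (both are assumptions).
ALLOWED = [
    ip_network("9.9.9.9/32"),
    ip_network("149.112.112.112/32"),
    ip_network("192.0.2.0/24"),
]

def rogue_resolvers(configured: list) -> list:
    """Return any configured DNS server that isn't on the allowlist."""
    return [s for s in configured
            if not any(ip_address(s) in net for net in ALLOWED)]

# A router quietly pointed at an unexpected server (here a TEST-NET
# address standing in for the Netherlands host Gavin found):
suspect = rogue_resolvers(["192.0.2.53", "198.51.100.7"])
```

Run against each managed router's configured resolver list, this flags exactly the kind of silent DNS redirection Gavin's clients were suffering.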

1:06:11 - Leo Laporte
That's for their convenience, not yours.

1:06:14 - Steve Gibson
That's exactly right. He said: I have rotated through all of my other customers that have the same router and am locking them down: checking the DNS servers, disabling SNMP and resetting the password. I may even go so far as replacing the routers in their entirety, because they may have also been compromised in some other, yet undiscovered way. Yes, I would say that he's been listening to this podcast. He said: I wrote to the ISP yesterday to explain the issue and received a call back today. It did take me some time to explain that any suitably skilled bad actor can connect to the routers via SNMP, the Simple Network Management Protocol, with the default credentials admin/admin, and reconfigure the router's settings. And boy, is that dangerous. The engineer I was speaking to had trouble comprehending how this was possible without web management enabled on the WAN side, but he eventually understood.

We contacted another customer while we were talking, and he got to see the issue firsthand and is taking this up with their security team internally. He said: My understanding of their reason for this configuration is, believe it or not, it was on purpose. They want to be able to remotely connect to a customer's router to assist in fault finding. And ha, they won't have to look very far. They have a script that connects to the routers via SNMP and enables web management on the WAN interface (there you have it, right there; Jesus) for a period of time, and then later it removes the web-based access again. They didn't consider that the open SNMP management interface, with default credentials and an easy-to-guess community string, could be exploited in this way.

1:08:59 - Leo Laporte
We never thought of that.

1:09:02 - Steve Gibson
Yeah, wow. What? Yeah, we're accessing them remotely, but that's us. You mean bad guys can too? Wow. He says the engineer did seem quite disgruntled, get this, disgruntled, when I said I often replace the ISP-provided routers.

But, as I explained to him, if it has a vulnerability on it, I'm going to replace it, without any consideration for their convenience of remote diagnosis. He said: Thank you, Steve. I hope this may get a mention on the show, and how! But regardless, I really do look forward every week to your deep dives into the detail behind the week's security-related news. Warm regards, Gavin from Guernsey. So wow, Gavin, thank you for sharing that. Apparently, this ISP believes that SNMP stands for Simply Not My Problem. But they would be wrong. They would be wrong, and this deliberate, unprotected SNMP exposure with default credentials is breathtaking. It is a horrific breach of security.

Snmp, for those who don't know, is a funky, non-user-friendly UDP-based protocol. It was originally and still present in contemporary devices, but like very old, original networking gear which allows for both the remote over the wire monitoring and control of that gear. Unfortunately, it also comes from an era before security was being given the due that it deserves. On the one hand, it's a good thing that this ISP has the wisdom to keep their customers web management interfaces disabled by default, but that good intention is completely undone by their persistent exposure of the router's SNMP service, which I'm still shaking my head over. So one remaining mystery is what was going on with the DNS rerouting. Given that most of today's DNS is still unencrypted in the clear UDP, this would be part of the requirement for site spoofing to be able to spoof DNS, but with browsers now having become insistent upon the use of HTTPS, it's no longer possible to spoof a site over HTTP, so someone would need to be able to create web server certificates for the sites they wish to spoof. The fact that it's so widespread across an ISP's many customers tells us that it's probably not a targeted attack. That means it's somehow about making money or intercepting a lot of something. Having thought about this further, though, leo, he mentioned lots of email problems when people are using secure IMAP. Well, not all email is secure, and so you could spoof email MX records in DNS in order to route email somewhere else, in which case clients would have a problem connecting if the DNS was spoofed over a TLS connection. So, if you know, basically it's really only HTTP, in the form of HTTPS, where we've really got our security button down tight with web server certificates, other things that use DNS, and there are lots of other things that use DNS. They're still not running as securely as the web is, so that could be an issue. Jeff Jellin said.

Steve, listening to SN 961 and thinking about the discussion of passwordless logins and using email as a single factor: is my recollection correct that email is generally transmitted in the clear? Couldn't some unscrupulous actor sniff that communication? I'm especially concerned if the email contains a link to the site. This provides all the necessary info to anyone who can view the email. A six-digit code without any reference to the site where it should be used would be more secure, as it is both out of band and gives no hint of the site. Of course, this is all based on my possibly antiquated recollection of the lack of privacy in email.

Okay. Now, the email transit encryption numbers vary. I saw one recent statistic that said that only about half of email is encrypted in transit using TLS, but Google's most recent stats for Gmail state that 98 to 99% of their email transit, both incoming and outgoing, is encrypted. GRC's server fully supports all of the transit encryption standards, so it will transact using TLS with the remote end whenever it's available. And it's a good feeling knowing that when Sue and Greg and I exchange anything through email, since our email clients are all connecting directly to GRC's email server, everything is always encrypted end to end. But it's not encrypted at rest.

Jeff's point about the possible lack of email encryption in transit is still a good one, since email security is definitely lagging behind web security, now that virtually everything has switched over to HTTPS. On the web, we would never even consider logging into a site that was not encrypted and authenticated using HTTPS. For one thing, we would wonder why the site was doing that, and alarm bells would be going off. And even back when almost everything was still using HTTP, websites would switch their connections over to HTTPS just long enough to send the login page and receive the user's credentials in an encrypted channel, and then drop the user back to HTTP. So, by comparison to the standard set by today's web login with HTTPS, email security is sorely lacking. If we revisit the question, then, of using email only to log into a site, where that email carries what is effectively a login link or even a six-digit authentication token, what we've done is reduce the login security of the site to that of email, which today we would have to consider to be good, but not great, and certainly far from perfect.

Another difference worth highlighting is that a web browser's connection to the website it's logging into is strictly and directly end-to-end encrypted. The browser has a domain name, and it must receive an identity-asserting certificate that is valid for that domain. But this has never been true for email. Although modern email does tend to be point to point, with the server at the sender's domain directly connecting to the server at the recipient's domain, that's not always, nor is it necessarily, true. Email has always been a store-and-forward transport. Depending upon the configuration of the sender's and receiver's domains, email might make several hops from start to finish, and since the body of the email doesn't contain any of its own encryption, that email at rest is subject to eavesdropping. Last week I coined the term "login accelerator" to more clearly delimit the value added by a password.

And the whole point of the email loop authentication model is the elimination of the something-you-need-to-know factor. Yet without the something-you-need-to-know factor, the security of email loop authentication can be no better than the security of email, which we've agreed falls short of the standard set by the web's direct end-to-end certificate identity verification with encryption. As a result of this analysis, we might be inclined to discount the notion of email loop authentication as being inferior to web-based authentication with all of its fancy certificates and encryption, except for one thing: the ubiquitous "I forgot my password" get-out-of-jail-free link is still the party spoiler. It is everywhere, and its presence immediately reduces all of our much-ballyhooed web security to the security of email, imperfect as it may be.
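The server side of the email-loop model under discussion is simple to sketch: issue a random token, email it as a link, and honor it exactly once within a short window. The 15-minute TTL and all names here are assumptions of the sketch:

```python
import hashlib, secrets, time
from typing import Optional

class EmailLoopLogin:
    """Single-use, short-lived login tokens delivered by email (a sketch)."""
    TTL = 15 * 60  # seconds a link stays valid (an assumed policy)

    def __init__(self):
        self.pending = {}  # token hash -> (email, expiry)

    def issue(self, email: str) -> str:
        token = secrets.token_urlsafe(32)
        # Store only a hash, so a database leak doesn't expose live links.
        h = hashlib.sha256(token.encode()).hexdigest()
        self.pending[h] = (email, time.time() + self.TTL)
        return token  # this is what gets embedded in the emailed link

    def redeem(self, token: str) -> Optional[str]:
        h = hashlib.sha256(token.encode()).hexdigest()
        entry = self.pending.pop(h, None)  # pop: first use invalidates it
        if entry is None:
            return None
        email, expiry = entry
        return email if time.time() <= expiry else None

svc = EmailLoopLogin()
t = svc.issue("user@example.com")
first = svc.redeem(t)    # succeeds once
second = svc.redeem(t)   # the link is dead after a single use
```

Note that nothing here is stronger than the mailbox it delivers to, which is exactly Steve's point: the whole scheme inherits email's security, no more.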

Even without email loop authentication, if a bad guy can arrange to gain access to a user's email, they can go to any site where the user maintains an account, click the "I forgot my password" link, receive the password recovery link by intercepting their email, and do exactly the same thing as if that site were only using email loop authentication. What this finally means is that all of our identity authentication login, whether using all of the fancy web technology or just a simple email loop, is not actually any more secure than email, and that simple email loop authentication without any password is just as secure as web authentication, so long as web authentication includes an "I forgot my password" email loop bypass. So the notion of a password being nothing more than a login accelerator holds, and the world should be working to increase the security of email, since it winds up being the linchpin for everything. So, a really interesting thought experiment there. Two last quickies. Alex Neihaus, long-time friend of the podcast and early advertiser. I think, what? Our first advertiser?

I think our first, yeah, with Astaro. Yeah. He said: Regarding 961, last week's episode, and overlay networks: pfSense has a killer Tailscale integration. With this running, you can set up pfSense to be an exit node, combining safe local access with a VPN-like move of my apparent IP address. That's useful for services that are tied to your location. And some cable providers' apps require UPnP, which you can, at your own risk, enable in pfSense. So, Alex, thank you for that. Given that I myself am a pfSense shop, as Alex knows, because he and I sometimes talk about pfSense, and Tailscale integrates well with pfSense's FreeBSD Unix, Tailscale will certainly be a contender for a roaming overlay network. So, thank you, Alex. And finally, just a reminder from a listener. He said: Hi, Steve. Given your discussion of throwaway email addresses during last week's Security Now podcast episode, I'd like to bring your attention to DuckDuckGo's Email Protection service.

This free service is designed to safeguard users' email privacy by removing hidden trackers and enabling the creation of unlimited unique private email addresses on the fly. This feature allows users to maintain a distinct email address for each website they interact with. What makes this even more compelling is DuckDuckGo Email Protection's integration with Bitwarden. Bitwarden has included DuckDuckGo among its supported services, alongside other email forwarding services. This integration allows Bitwarden users to easily generate unique email addresses for each website they interact with. Take a look at the links, which I've got in the show notes. So I thank our listener very much for that. Leo, let's take a rest. Yes, tell our listeners about our sponsor, and then, oh boy, have I got a fun deep dive for everybody. Something just happened when nobody was looking.

1:23:09 - Leo Laporte
Wow. Well, there are a couple of things I'd like to take care of. One is, you've mentioned Club TWiT a couple of times now; I probably should give people more information about that. You know, more and more I look at what's going on on the internet in terms of supposed tech news sources or tech review sources, and more and more.

I think, you know, we're kind of old-fashioned here at TWiT. Nobody would deny that Steve and I are old-timers. We don't generate linkbait. We don't make stuff up, we don't do fake reviews. We buy the products we review. You know, really kind of old school, but a little bit out of step with the way things are today.

Now I hope, and I think, that you probably like what we do; you're listening to the shows, after all. But we're going to need your support because, frankly, I'm not an influencer with a million subscribers on YouTube. Steve should be, but he's not either. We don't do thumbnails of us going "oh my god," we don't, you know, overhype stuff. We do our jobs, and I think we do a very good job, but advertisers want the other one, so it's getting harder and harder to get advertising dollars. In the past we've been able to support ourselves entirely that way; now we need to rely on you a little bit, and I think it's kind of a better way to do it, because we know we're giving you what you want. When you give us that seven bucks a month as a member of Club TWiT, what do you get? Ad-free versions of all the shows, this show included. We're now putting out audio of all of the shows we used to have behind the club paywall, but you will get video of all those shows if you're a Club TWiT member. You get access to the Discord, lots of additional content we don't put out otherwise, like Stacey's Book Club, the banter before and after shows, things like that, and, most importantly, you get the good feeling of knowing that you're supporting the kind of technology journalism that I think we need more than ever. So if you agree with me, please visit twit.tv/clubtwit. You can buy this show individually if you say, "I only listen to Security Now"; that's $2.99 a month, and there's a link there as well, twit.tv/clubtwit. But I do appreciate your support. Let's see, we're up to 11,292 paid members. That means about 300 people joined this week. Thank you to those, and let's get another 300 next week. It makes a big difference, and we figure we really probably need to get to 35,000 members to be absolutely sustainable in the long run.
So we're only about a third of the way there. So please make up the difference, if you will, at twit.tv/clubtwit.

This episode of Security Now is brought to you by good friends. They've been with us a long time: IT Pro TV, now known (and this is important) as ACI Learning. Okay, so when you hear ACI Learning, know that's the guys at IT Pro TV. They've been with our network since 2013, that's over ten years, as long as they've been around. When they first started, they came to us immediately, and now, as part of ACI Learning, IT Pro has really expanded what it can do, providing even more support for IT individuals and teams.

ACI Learning covers all your needs. Not just IT, but cybersecurity and audit as well. You'll get a personal account manager who will help you make sure there is no redundancy in your training. You're not wasting your time or money, or theirs. Your account manager will work with you to ensure your team only focuses on the skills that matter to you and your organization. Leave the unnecessary, costly training behind, which, by the way, also keeps your employees happy.

Your team, you know, doesn't want to see videos about stuff they already know.

They want to learn new things, they love learning new things, and ACI Learning has kept all the fun, all the personality of IT Pro TV, all the deep, passionate love for this stuff that IT Pro TV is famous for, while amplifying their robust solutions for all your training needs. Let your team be entertained as well as informed with short-form format content, and now, thanks to ACI Learning, over 7,200 hours of absolutely up-to-date content covering the latest certs, the latest exams, the latest information, the stuff you need to know. Visit go.acilearning.com/twit. If you've got a team, fill out the form. The discounts are based on the size of your team, and they go as high as 65% off. Fill out the form, find out how much you can save on an IT Pro Enterprise Solution plan when you visit go.acilearning.com/twit. And, as always, we thank ACI Learning for their support of Security Now.

Okay, this I've been waiting all day for, Steve. What the heck? Okay, are we in trouble?

1:28:15 - Steve Gibson
No, the bullet was dodged, but the story is really interesting. The vulnerability has been codenamed KeyTrap, and if ever there was a need for responsible disclosure of a problem, this was that time. Fortunately, responsible disclosure is what it received. What the German researchers who discovered this last year realized was that an aspect of the design of DNS, specifically the design of the secure capabilities of DNS, known as DNSSEC, could be used against any DNSSEC-capable DNS resolver to bring that resolver to its knees. The receipt of a single UDP DNS query packet could effectively take the DNS resolver offline for as many as sixteen hours, pinning its processor at a hundred percent and effectively denying anyone else that server's DNS services. It would have therefore been possible to spray the internet with these single-packet DNS queries to effectively shut down all DNS services across the entire globe. Servers could be rebooted once it was noticed that they had effectively hung, but if they again received another innocuous-looking DNS query packet, which is their job after all, they would have gone down again. Eventually, of course, the world would have discovered what was bringing down all of its DNS servers, but the outage could have been protracted, and the damage to the world's economies could have been horrendous. So now you know why this podcast is titled "The Internet Dodged a Bullet." We should never underestimate how utterly dependent the world has become on the functioning of the internet. I mean, it's like, what don't we do with the internet now? All of our entertainment, all of our news. You know, do I even have an FM radio? I'm not sure.

Okay. The detailed research paper describing this was just publicly released yesterday, though the problem, as I said, has been known since late last year. Here's how the paper's abstract describes what these German researchers discovered, to their horror. They wrote: Availability is a major concern in the design of DNSSEC. To ensure availability, DNSSEC follows Postel's law, which reads: be liberal in what you accept and conservative in what you send. Hence, name servers should send not just one matching key for a record set, but all the relevant cryptographic material, in other words, all the keys for all the ciphers that they support and all the corresponding signatures. This ensures that validation will succeed, and hence availability will be maintained, even if some of the DNSSEC keys are misconfigured, incorrect, or correspond to unsupported ciphers.

We show that this design of DNSSEC is flawed by exploiting vulnerable recommendations in the DNSSEC standards. We develop a new class of DNSSEC-based algorithmic complexity attacks on DNS, which we dub KeyTrap attacks. All popular DNS implementations and services are vulnerable. This is them writing: With just a single DNS packet, the KeyTrap attacks lead to a 2 million times spike in CPU instruction count in vulnerable DNS servers. Remember, that's all DNS servers, stalling some for as long as 16 hours. This devastating effect prompted major DNS vendors to refer to KeyTrap as the worst attack on DNS ever discovered. Exploiting KeyTrap, an attacker could effectively disable internet access in any system utilizing a DNSSEC-validating resolver. We disclosed KeyTrap to vendors and operators on November 2nd, 2023. We confidentially reported the vulnerabilities to a closed group of DNS experts, operators and developers from the industry. Since then, we've been working with all major vendors to mitigate KeyTrap, repeatedly discovering and assisting in closing weaknesses in proposed patches. Following our disclosure, an umbrella CVE was assigned.

Okay. So, believe it or not, all of that actually happened, as they say, behind closed doors. Google's Public DNS and Cloudflare were both vulnerable, as was the most popular and widely deployed BIND 9 DNS implementation, and it was the one (we'll see why later) that could be stalled for as long as 16 hours after receiving one packet. So what happened? As the researchers wrote, months before all this came to light publicly, all major implementations of DNS had already been quietly working on updates because, had this gotten out, it could have been used to bring the internet down. With Microsoft's release of patches, which included mitigations for this, last week, and after waiting a week for them to be deployed, the problem is mostly resolved, if you'll pardon the pun. I say mostly because this is not a bug in an implementation, and apparently even now, as we'll see, some DNS servers will still have their processors pinned, but they will at least still be able to answer other DNS queries. It sounds as though thread or process priorities have been changed to prevent the starvation of competing queries, and we'll actually look in a minute at the strategies that have been deployed.

Characterizing this as a big mess would not be an exaggeration. KeyTrap exploits a fundamental flaw in the design of DNSSEC, which makes it possible to deliberately create a set of legal but ultimately malicious DNSSEC response records which the receiving DNS server will be quite hard-pressed to untangle. Once the researchers had realized that they were onto something big, they began exploring all of the many various ways DNS servers could be stressed, so they created a number of different attack scenarios. I want to avoid getting too far into the weeds of the design and operation of DNSSEC, but at the same time I suspect that this podcast's audience will appreciate seeing a bit more of the detail so that the nature of the problem can be better appreciated. The problem is rooted in DNSSEC's provisions for the resolution of key tag collisions, the handling of multiple keys when they're present, and multiple signatures when they're offered. I'm gonna quote three paragraphs from their research paper, but just sort of let it wash over you so that you'll get some sense for what's going on without worrying about understanding it in any detail. You will not be tested on your understanding of this. Okay, so they explain: We find that the flaws in the DNSSEC specification are rooted in the interaction of a number of recommendations that, in combination, can be exploited as a powerful attack vector.

Okay, so first, key tag collisions. They write: DNSSEC allows for multiple cryptographic keys in a given DNS zone. Zone is the technical term for a domain, essentially. So when they say zone, they mean a DNS domain. For example, they say, during key rollover or for multi-algorithm support, meaning you might need multiple cryptographic keys. If you're retiring one key, you wanna bring the new key online right before the old key goes away, so for a while you've got two or more keys. Or multi-algorithm support: you might need different keys for different algorithms if you wanna be more comprehensive. So they said:

Consequently, when validating DNSSEC, DNS resolvers are required to identify a suitable cryptographic key to use for signature verification, because the zone is signed. So you wanna verify the signature of the zone to verify it's not been changed. That's what DNSSEC is all about: preventing any kind of spoofing. So they said: DNSSEC uses key tag values to differentiate between the keys, even if they are of the same zone and use the same cryptographic algorithm. So they could just be redundant keys for some reason. They said: The triple of zone, algorithm and key tag is added to each respective signature to ensure efficiency in key-signature matching. Again, don't worry about the details. When validating a signature, resolvers check the signature header and select the key with the matching triple for validation. However, the triple is not necessarily unique, and that's the problem. Multiple different DNS keys can have an identical triple, that is to say, an identical tag. This can be explained by the calculation of the values in the triple.

The algorithm identifier results directly from the cipher used to create the signature and is identical for all keys generated with a given algorithm. DNSSEC mandates all keys used for validating signatures in a zone to be identified by the zone name. Consequently, all DNSSEC keys that may be considered for validation trivially share the same name. Since collisions in algorithm ID and key name pairs are common, the key tag is calculated with a pseudo-random arithmetic function over the key bits to provide a means to distinguish same-algorithm, same-name keys. Again, just let this glaze over. Using an arithmetic function instead of a manually chosen identifier eases distributed key management for multiple parties in the same DNS zone. Instead of coordinating key tags to ensure their uniqueness, the key tag is automatically calculated.
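To picture what selecting "the key with the matching triple" means, here's a minimal sketch in Python. The data model here is invented purely for illustration; real resolvers parse DNSKEY and RRSIG records from the wire, but the matching logic is the same idea:

```python
from collections import namedtuple

# Hypothetical, simplified stand-ins for DNSKEY and RRSIG records.
Key = namedtuple("Key", "name algorithm tag material")
Sig = namedtuple("Sig", "signer_name algorithm key_tag value")

def candidate_keys(keys, sig):
    """Return every key whose (name, algorithm, key tag) triple matches
    the signature's header. The triple is NOT unique, so this can return
    many keys, all of which an RFC-compliant resolver must then try."""
    return [k for k in keys
            if (k.name, k.algorithm, k.tag)
            == (sig.signer_name, sig.algorithm, sig.key_tag)]
```

An attacker who crafts a zone where every key shares one tag makes this candidate list, and therefore the validation work, as long as possible.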

However, here it comes: the space of potential tags is limited by the 16 bits in the key tag field. Key tag collisions, while unlikely, can thus naturally occur in DNSSEC. This is explicitly stated in RFC 4034, emphasizing that key tags are not unique identifiers. As we show, colliding key tags can be exploited to cause a resolver not to be able to uniquely identify a suitable key efficiently, but to have to perform validations with all the available keys, inflicting computational effort during signature validation. Okay, now, just to interrupt this for a second: cryptographic keys are identified by tags, and those tags are automatically assigned to those keys.

Work on DNSSEC began way back in the 1990s, when the internet's designers were still counting bits and were assigning only as many bits to any given field as would conceivably be necessary. Consequently, the tags being assigned to these keys were, and still are today, only 16 bits long. Since these very economical tags only have 16 bits, a tag is one of only 64K, that is, 65,536, possible values, so tag collisions, while unlikely, are possible. And we've got the birthday paradox, which makes collisions happen more often than you'd expect. If DNSSEC were being designed today, tags would be the output of a collision-free cryptographic hash function, and there would be no provision for resolving tag collisions, because there would be none. The paragraph I just read said that the key tag is calculated with a pseudo-random arithmetic function, in other words, something simple from the '90s that scrambles and mixes the bits around but doesn't do much more.
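For the curious, the key tag calculation from RFC 4034 Appendix B really is just a simple 16-bit checksum over the DNSKEY RDATA, nothing cryptographic. A sketch in Python:

```python
def rfc4034_key_tag(rdata: bytes) -> int:
    """Key tag per RFC 4034 Appendix B: a 16-bit checksum over the
    DNSKEY RDATA (flags, protocol, algorithm, public key).
    Not a cryptographic hash, so collisions are easy to construct."""
    acc = 0
    for i, byte in enumerate(rdata):
        # Even-indexed bytes form the high octet of a 16-bit word.
        acc += byte << 8 if i % 2 == 0 else byte
    acc += (acc >> 16) & 0xFFFF  # fold the carry back in
    return acc & 0xFFFF          # tags are only 16 bits
```

Because this sum is linear in the key bytes, an attacker can trivially adjust key material so that hundreds of distinct keys all land on the same 16-bit tag.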

1:43:41 - Leo Laporte
Like a salad spinner or something. Right, exactly. It's still salad, it's just better now.

1:43:48 - Steve Gibson
Consequently, servers need to consider that key tags are not unique. And so what the attack does is deliberately make all the key tags identical, forcing the server to check them all. Oh, brilliant. Yes, yes.

1:44:08 - Leo Laporte
How many key tags can you put in there?

1:44:10 - Steve Gibson
We're getting there. We're getting there. But you make them all the same, and the server can't use the tag to select the key. It's gotta try them all. So the first attack is key tag collisions; literally, they all collide. Okay, on to the next problem: multiple keys. The DNS specification mandates that a resolver must try all colliding keys until it finds a key that successfully validates the signature, or all keys have been tried, of course.

1:44:46 - Leo Laporte
Right. What could possibly go wrong?

1:44:50 - Steve Gibson
That's right. The requirement is meant to ensure availability, meaning DNSSEC will try as hard as it can to find a successful signature. Even if colliding keys occur, such that some keys may result in failed validation, the resolver must try validating with all the keys until a key is found that results in a successful validation. This ensures that the signed record remains valid and the corresponding resource therefore remains available. However, this (what they call eager) validation can lead to heavy computational effort for the validating resolver, since the number of validations grows linearly with the number of colliding keys. So, for example, if a signature has 10 colliding keys, all with identical algorithm identifiers, the resolver must conduct 10 signature validations before concluding that the signature is invalid. While colliding keys are rare in real-world operation, we show that records created to deliberately contain multiple colliding keys (meaning all the keys are colliding) can be efficiently crafted by an adversary, imposing heavy computation upon a victim resolver. Okay, and the third and final problem is multiple signatures. The philosophy, they said, of trying all the cryptographic material available to ensure that the validation succeeds also applies to the validation of signatures. Creating multiple signatures for a given DNS record can happen, for example, during a key rollover: the DNS server adds a signature with the new key while retaining the old signature to ensure that some signature remains valid for all resolvers until the new key has been propagated. Thus, parallel to the case of colliding keys, the RFCs specify that in the case of multiple signatures on the same record, a resolver should try all the signatures it received until it finds a valid signature, or until it's used up all the signatures.

Okay, so we have the essential design features which were put into the DNSSEC specification in a sane way. I mean, all this makes sense, with the purpose of never failing to find a valid key and signature for a zone record. Their term for this, of course, is eager validation. They write: We combine these requirements for the eager validation of signatures and of keys, along with the colliding key tags, to develop powerful DNSSEC-based algorithmic complexity attacks on validating DNS resolvers. Our attacks allow a low-resource adversary to fully DoS a DNS resolver for up to 16 hours with a single DNS request.

Holy cow. Yeah, wow. One request and the server goes down for 16 hours. They wrote: Members from the 31-participant task force of major operators, vendors and developers of DNS and DNSSEC, to which we disclosed our research, dubbed our attack the most devastating vulnerability ever found in DNSSEC. Okay, now, the researchers devised a total of four different server-side resource exhaustion attacks, and I have to say, Leo, I was a little bit tempted to title today's podcast after three of them. Had I done so, today's podcast would have been titled SigJam, LockCram and HashTrap.

1:49:29 - Leo Laporte
Man, I would have done that. That's good, that's good.

1:49:32 - Steve Gibson
I know. SigJam, LockCram and HashTrap.

1:49:36 - Leo Laporte
This is all my law firm.

1:49:40 - Steve Gibson
And while I certainly acknowledge that would have been fun, I really didn't wanna pass up, I didn't wanna lose sight of, the fact that the entire global internet really did just dodge a bullet, and we don't know which foreign or domestic cyber intelligence services may today be silently saying: darn it, they found it; that was one we were keeping in our back pocket for a rainy day, while keeping all of our foreign-competitor DNS server targeting packages updated. Because this would have made one hell of a weapon. Okay, so what are the four different attacks? SigJam utilizes an attack with one key and many signatures. They write: The RFC advises that a resolver should try all signatures until a signature is found that can be validated with the DNSKEY. This could be exploited to construct an attack (we're gonna answer your question, Leo, about how many) using many signatures that all point to the same DNSKEY. Using the most impactful algorithm, meaning the most time-consuming to verify, an attacker can fit 340 signatures into a single DNS response, thereby causing 340 expensive cryptographic signature validation operations in the resolver until the resolution finally fails by returning a SERVFAIL response to the client. That shouldn't take 16 hours. No, it gets better. Oh, there's more. Because that's linear. We're gonna go quadratic in a minute. Oh good.

The SigJam attack is thus constructed by leading the resolver to validate many invalid signatures on a DNS record using one DNS key. Okay. The LockCram attack does the reverse, using many keys and a single signature. They write: Following the design of SigJam, we develop an attack vector we dub LockCram. It exploits the fact that resolvers are mandated to try all keys available for a signature until one validates or all have been tried.

The LockCram attack is thus constructed by leading a resolver to validate one signature over a DNS record having many keys.

For this, the attacker places multiple DNS keys in the zone which are all referenced by signature records having the same triple: name, algorithm, key tag. This is not trivial, as resolvers can deduplicate identical DNSKEY records, and their key tags need to be equal. A resolver that tries to authenticate a DNS record from the zone attempts to validate its signature. To achieve that, the resolver identifies all the DNSKEYs for validating the signature, which, when correctly constructed, conform to the same key tag. An RFC-compliant resolver must try all the keys referred to by the invalid signature before concluding the signature is invalid for all keys, leading to numerous expensive public key cryptography operations in the resolver. And the next attack, the KeySigTrap, combines the two previous attacks by using multiple keys and multiple signatures. They say: The KeySigTrap attack combines the many signatures of SigJam with the many colliding DNSKEYs of LockCram, creating an attack that leads to a quadratic increase in the number of validations compared to the previous two attacks.

1:54:09 - Leo Laporte
Sig lock jam cram is so much worse.

1:54:14 - Steve Gibson
That's wow.

Yeah, you don't want one of those. The attacker creates a zone with many colliding keys and many signatures matching those keys. When the attacker now triggers resolution of the DNS record with the many signatures, the resolver will first try the first key to validate all the signatures. After all the signatures have been tried, the resolver will move to the next key and again attempt validation of all the signatures. This continues until all pairs of keys and signatures have been tried. Only after attempting validation on all possible combinations does the resolver conclude that the record cannot be validated and return a SERVFAIL to the client. Okay, now, elsewhere in their report, when going into more detail about this KeySigTrap, they explain that they were able to set up a zone file containing 582 colliding DNSSEC keys and the same 340 signatures that we saw in SigJam. Since the poor DNS resolver that receives this response will be forced to test every key against every signature, that's 582 times 340, or 197,880, expensive and slow public key cryptographic signature tests. And that was caused by sending that DNS server a single DNS query packet for that domain.
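To make the linear-versus-quadratic point concrete, here's a back-of-the-envelope sketch. This is just a cost model, not real DNSSEC processing; each loop iteration stands in for one expensive public-key verify that an RFC-compliant resolver must attempt before giving up:

```python
def validations_required(num_colliding_keys: int, num_signatures: int) -> int:
    """Count the (key, signature) pairs a compliant resolver must try
    when every signature fails to validate under every colliding key."""
    attempts = 0
    for _key in range(num_colliding_keys):   # all keys share one tag
        for _sig in range(num_signatures):   # every signature fails
            attempts += 1                    # one expensive validation
    return attempts

# SigJam:     1 key  x 340 signatures ->     340 validations (linear)
# LockCram: 582 keys x   1 signature  ->     582 validations (linear)
# KeySigTrap: 582 keys x 340 signatures -> 197,880 validations (quadratic)
```

The 582 and 340 figures are the ones the researchers report fitting into a single malicious response, which is how one UDP packet buys roughly 200,000 signature checks.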

Now, interestingly, the researchers discovered that not all DNS servers were hit equally badly. For some reason, several took the request much harder. Upon investigating, they discovered why. For example, the Unbound DNS server is DoSed approximately six times longer than some of the other resolvers. The reason is the default re-query behavior of Unbound. In its default configuration, Unbound attempts to re-query the name server that gave it this malicious zone five times after failed validation of all signatures. Therefore, Unbound validates all attacker signatures six times before returning a SERVFAIL to the client. Essentially, Unbound is being penalized for being a good citizen. Disabling default re-queries brings Unbound back to parity with the other servers. But BIND, the famous, I mean, the most-used server on the internet (in fact, that's where Unbound got its name, right, it's a play on BIND), BIND was the worst. That's the one with the 16-hour DoS from a single packet, and they explained why.

Investigating the cause for this observation, we identified an inefficiency in the code triggered by a large number of colliding DNSSEC keys. The routine responsible for identifying the next DNSSEC key to try against a signature does not implement an efficient algorithm to select the next key from the remaining keys. Instead, it reparses all keys again until it finds a key that has not yet been tried. This algorithm does not lead to inefficiencies in normal operation, where there might be a small number of colliding keys. But when many keys collide, the resolver spends a large amount of time parsing and reparsing the keys to select the next key, which extends the total duration of the DoS to 16 hours.

So what all of this should make clear is that these potential problems arise due to DNSSEC's deliberate eager-to-validate design. The servers are trying as hard as they can to find a valid key and signature match. DNS servers really want to attempt all variations, which is exactly, unfortunately, what gets them into trouble. The only solution will be something heuristic. We've talked about heuristics in the past. They can be thought of as a rule of thumb, and they usually appear when exact solutions are not available, which certainly is the case here.

As we might expect, the researchers have an extensive section of their paper devoted to what to do about this mess, and they worked very closely for months with all the major DNS system maintainers to best answer that question. I'll skip most of the blow-by-blow, but here's a bit that gives you a sense and a feeling for it. Under the subhead "Limiting All Validations," they wrote: The first working patch capable of protecting against all variants of our attack was implemented by Akamai. In addition to limiting key collisions to four and limiting cryptographic failures to sixteen, the patch also limits total validations in any request to eight. In other words, Akamai patched their DNS to say: OK, it's unreasonable for anyone to expect our server to work this hard. There are not really going to be that many key collisions in a DNSSEC zone. There aren't, you know; it's just not going to happen in real life, and we're not going to constantly be failing. So let's just stop after we've got four key collisions. We're just going to say, sorry, time's up, we're not going to go any further. And let's just stop after we've had sixteen cryptographic failures, no matter what kind and what nature. That's all we're going to do, because it's unreasonable for any valid DNSSEC zone to have more. And we're also going to cap the total number of validation attempts per request at eight.

So then they wrote: Evaluating the efficiency of the patch, we find the patched resolver does not lose any benign requests, meaning DoS is avoided even under attack with greater than ten attacking requests per second. In other words, it doesn't fix the problem; it just mitigates it, gets it under control. The load on the resolver does not increase to problematic levels under any type of attack at ten malicious requests per second, and the resolver does not lose any benign traffic. It thus appears that the patch successfully protects against all variations of KeyTrap attacks. Nevertheless, they said, although these patches prevent packet loss, they still do not fully mitigate the increase in CPU instruction load during the attack. You know, it's still an attack. The reason that the mitigations do not fully prevent the effects of the KeyTrap attacks is rooted in the design philosophy of DNSSEC. Notice, however, that we are still closely working with the developers on testing the patches and their performance during attack and normal operation. Still, as in today. Still. Okay.
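A minimal sketch of what an Akamai-style budget looks like in code. The three limits (4 colliding keys, 16 cryptographic failures, 8 total validations) come from the description above; everything else here, the class, the names, the callback, is invented for illustration:

```python
class ValidationBudgetExceeded(Exception):
    """Raised when a request exhausts its validation budget (SERVFAIL)."""

class BudgetedValidator:
    """Illustrative caps per the described Akamai-style patch."""
    MAX_COLLIDING_KEYS = 4      # consider at most 4 colliding keys
    MAX_CRYPTO_FAILURES = 16    # tolerate at most 16 failed verifies
    MAX_TOTAL_VALIDATIONS = 8   # at most 8 verifies per request

    def __init__(self):
        self.failures = 0
        self.validations = 0

    def validate(self, keys, signatures, check):
        # check(key, sig) -> bool stands in for an RSA/ECDSA verify.
        for key in keys[:self.MAX_COLLIDING_KEYS]:
            for sig in signatures:
                if self.validations >= self.MAX_TOTAL_VALIDATIONS:
                    raise ValidationBudgetExceeded("budget spent")
                self.validations += 1
                if check(key, sig):
                    return True
                self.failures += 1
                if self.failures >= self.MAX_CRYPTO_FAILURES:
                    raise ValidationBudgetExceeded("too many failures")
        return False
```

Fed a KeySigTrap-style response with hundreds of colliding keys and signatures, this bails out after at most eight expensive operations instead of roughly 200,000, at the cost of returning SERVFAIL for any (practically nonexistent) legitimate zone that would need more.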

So the disclosure timeline, you know, the responsible disclosure timeline for this, is extra interesting, since it provides a good sense for the participants and the nature of their efforts and interactions over time. So, in the following, they wrote, we describe the timeline of disclosure to indicate how the vulnerability was reported and how we worked with the experts from industry to find solutions for the problems we discovered. Okay, so November 2nd of 2023: the initial disclosure to key figures in the DNS community. They said: Both confirm that KeyTrap is a severe vulnerability requiring a group of experts from industry to handle.

Five days go by. Now, at November 7th: confidential disclosure to representatives of the largest DNS deployments and resolver implementations, including Quad9, Google Public DNS, BIND 9, Unbound, PowerDNS, Knot and Akamai. The group of experts agrees that this is a severe vulnerability that has the potential to cut off internet access to large parts of the internet in case of malicious exploitation. A confidential chat group is established with stakeholders from the DNS community, including developers, deployment specialists and the authors, that is, you know, the researchers here. The group is continuously expanded with additional experts from the industry to ensure every relevant party is included in the disclosure. Potential mitigations are discussed within the group. Two days later, November 9th: we share KeyTrap zone files to enable developers to reproduce the attacks locally, facilitating the development of mitigations.

November 13th, after four days: Akamai presents the first potential mitigation of KeyTrap by limiting the total number of validation failures to 32. That doesn't work. November 23rd, 10 days later: Unbound presents its first patch, limiting cryptographic failures to a maximum of 16 without limiting collisions. Also a non-starter. The next day, BIND 9 presents the first iteration of a patch that forbids any validation failures. Okay, now we jump to December 8th, so a couple of weeks go by: the CVE is assigned to the KeyTrap attacks. Although nothing is disclosed, it's just an umbrella CVE to encompass them all.

Now we move to January 2nd of this year, 2024, the beginning of the year: After discussions with developers, we find some have problems recreating the attack in a local setup. We thus provide them an updated environment with a DNS server to ease local setup and further facilitate testing of patches. In other words, the researchers actually put this live on the internet so that the DNS developers could DoS their own servers. March 1st; a month goes by, or two months go by; no, I'm sorry, the next day. The dates are in an odd, well, you know, day-month-year format. So, January 3rd: BIND 9 presents the second iteration of a patch limiting validation failures. Same day, they said: Our newly implemented DS-hashing attack proves successful against all mitigations which do not limit key collisions, including BIND 9 and Unbound, and is disclosed to the group. So whoops; patch again, everybody. January 16th: our ANY-type attack circumvents the protection from limiting colliding keys and limiting cryptographic failures. And on January 24th: the first working patch is presented by Akamai. Other resolvers are implementing derivatives of the countermeasures to protect against the attacks. And so now we get to the report's final conclusions from its German discoverers. They write:

Our work revealed a fundamental design problem with DNS and DNSSEC.

Strictly applying Postel's law to the design of DNSSEC introduced a major and devastating vulnerability in virtually all DNS implementations. With just one maliciously crafted DNS packet, an attacker could stall almost any resolver, for example the most popular one, BIND 9, for as long as 16 hours. The impact of KeyTrap is far-reaching. DNS evolved into a fundamental system in the internet that underlies a wide range of applications and facilitates new and emerging technologies. Measurements by APNIC show that in December of 2023, 31.47% of web clients worldwide were using DNSSEC-validating resolvers. Therefore, our KeyTrap attacks have effects not only on DNS itself, but also on any applications using it. An unavailability of DNS may not only prevent access to content, but risks also disabling other security mechanisms, like anti-spam defenses, public key infrastructure, or even inter-domain routing security like RPKI or ROV. Since the initial disclosure of the vulnerabilities, we've been working with all major vendors on mitigating the problems in their implementations, but it seems that completely preventing the attacks will require fundamentally reconsidering the underlying design philosophy of DNSSEC.

In other words, to revise the DNS standards. So, as we can see, and as I titled this podcast, the internet really did dodge a bullet. And I've got to say, it's also terrific to see that at the technical and operational level of all of this, you know, at that level, we have the ability to quickly gather and work together to resolve any serious trouble that may arise. Fortunately, that doesn't happen very often, but it just did, and it's like, whew. Exactly. Everybody is breathing a deep sigh.

2:10:19 - Leo Laporte
It was never exploited by anybody, do we know?

2:10:22 - Steve Gibson
No, no, not as far as we know. I mean, had it been, it would have been discovered.

2:10:26 - Leo Laporte
You would know, yeah, yeah.

2:10:30 - Steve Gibson
Basically, they reverse-engineered this from the spec, kind of going, well, what if we were to do this, and what if we were to do that? And when the servers went offline they thought, whoops.

2:10:45 - Leo Laporte
Yeah, yeah. What if we did that? Oh, hello, yep. And how can we make it worse?

2:10:52 - Steve Gibson
And how can we make it worse? And how can we make it worse?

2:10:55 - Leo Laporte
Well, good research, and let's hope everybody fixes theirs. It's ironic, because there was a huge push to get everybody to move to DNSSEC for a while. We talked about it a lot. Yep. Oh well. Yep. Steve, another fabulous show in the can, as they say. Steve Gibson is at GRC.com. You didn't hear anything about SpinRite on today's episode.

2:11:20 - Steve Gibson
No, we actually think we found a bug in FreeDOS, a little obscure bug. I did get going on the documentation. I have the source, and I've already customized the FreeDOS kernel to run with drives of any type. So this evening I will finally be able to get back to it. This just happened on Sunday evening, before I had to start working on the podcast, so I get back to it tonight. Good. Everything's going great. I've got a good start on the documentation.

2:11:47 - Leo Laporte
Here's the chance you have to get 6.1 the minute it comes out: by buying 6.0 now at GRC.com. SpinRite's the world's best mass storage maintenance and recovery utility. If you have mass storage, you need SpinRite, and you can actually get 6.1 right now, if you want to be a beta tester; it's available as well. All that's at GRC.com, along with a copy of this show. Steve has the usual 64-kilobit audio, kind of the standard version, but there is also, on his site alone, 16-kilobit audio if you don't have a lot of bandwidth, and really well-done transcripts by Elaine Farris. All of that's at GRC.com, along with SpinRite, ShieldsUP!, ValiDrive, the DCOMbobulator.

2:12:35 - Steve Gibson
The DNS Benchmark.

2:12:36 - Leo Laporte
The DNS Benchmark. Still a big download, right? A lot of people download that. Best, number-one utility we've ever produced.

2:12:43 - Steve Gibson
That's amazing. I think it's 8 million downloads. Wow.

2:12:47 - Leo Laporte
I have it, that's for sure. I use it all the time. GRC.com. Now, if you want a video of the show, we have that, along with the 64-kilobit audio, at our website, twit.tv. There's also a YouTube channel devoted to Security Now, which makes it easy to share little clips, and, of course, the best thing to do is get a podcast client and subscribe. That way, you'll get it automatically the minute it's available. Steve and I get together to do this show right after MacBreak Weekly. It's Tuesdays, supposedly 1:30 Pacific; usually it's sometime between 1:30 and 2:00 PM Pacific. That's 4:30 to 5:00 PM Eastern, 21:30 UTC. If you want to tune in and watch us as we do it live, we stream live on youtube.com/twit and, of course, in our Club TWiT Discord. After the fact, though, you can get the show here on the website, or Steve's website, or subscribe and listen at your leisure, which is probably the best time. Steve, thank you so much. Have a wonderful week, and we'll see you next time. Thank you, my friend.

2:13:54 - Steve Gibson
Yeah, we've got one more week of... where did February go? Wow. Yeah.

2:14:00 - Leo Laporte
Well, we'll be back, not leap day, but we'll be back in a week. Thank you, Steve. Bye, buddy, take care. Bye.
