Security Now 1016 transcript
Please be advised this transcript is AI-generated and may not be word for word. Time codes refer to the approximate times in the ad-supported version of the show
0:00:00 - Leo Laporte
It's time for Security Now. Steve Gibson is here. He has a remarkably good solution to the age verification conundrum, a fantastic story about a fake employee coming from North Korea, and then we'll talk about the Bluetooth backdoor. It got a lot of press, but is it really a problem? All of that and a lot more coming up next on Security Now. Podcasts you love.
0:00:30 - Steve Gibson
From people you trust.
0:00:32 - Leo Laporte
This is TWiT. This is Security Now with Steve Gibson, episode 1016, recorded Tuesday, March 11th, 2025: The Bluetooth Backdoor. It's time for Security Now. I know you've been waiting all week. Here we are Tuesday, and the latest security news is here with Mr. Steve Gibson, the king of the hill when it comes to this stuff. Hi, Steve.
0:01:03 - Steve Gibson
Hey, Leo, it's great to be with you again. March 11th and episode 1016. And I was a little jealous hearing you talk about the upcoming 20th anniversary for TWiT.
0:01:15 - Leo Laporte
April 13th will be our 20th anniversary of TWiT.
0:01:19 - Steve Gibson
Yep. And so you did that for a few months before you said, hey, Gibson, your 20th is coming up, I think we're ready to add a second podcast to our network. Actually, I guess that would create a network, right? It really wouldn't be a network
0:01:33 - Leo Laporte
until then; it was just a podcast. Yeah, that's right. So your 20th, that should be in the fall, I guess, yeah.
0:01:41 - Steve Gibson
Yeah, soon it is.
0:01:43 - Leo Laporte
Yeah, well, we'll do... we could do something special for that. Think about what you want.
0:01:47 - Steve Gibson
Ignore it. No, we're gonna let my birthday go by.
0:01:49 - Leo Laporte
We're gonna let the 20th go by? I'm the same exact way. But I did decide... you know, on the thousandth episode we had all of the original hosts from episode one back, right? And I said, well, I can't do that again. But I thought, really, what's the most important part of all of the things we do? It's our community. It's the people who listen, the people who email and chat with us, all the people who are part of the family. So I said, let's celebrate them on April 13th, and I'm asking people to send us videos of when they first started watching, how they watch, you know, just memories, that kind of thing. So that'll be a lot of fun.
That show will be jam-packed. We'll have the regular show as well, but every few minutes we'll drop in a video from a listener or viewer. So if you want to be part of that, just post it on your favorite social with @twit in the posting, so we'll see it. Or you can email leo at leo.fm and send it to me that way, and that'll work too. I don't have your fancy mail system. I should just say, everybody, mail it to Steve. No, Steve has a very clever system, which I should steal, of validating emails before people can email him on a regular basis at GRC.com.
0:03:00 - Steve Gibson
Well, but you like dipping in on all those social media places?
0:03:06 - Leo Laporte
I do. I mean, we're streaming on 27 of them right now.
0:03:08 - Steve Gibson
It's unbelievable how many there are.
0:03:10 - Leo Laporte
Yeah, they're growing like topsy.
0:03:12 - Steve Gibson
I think that makes more sense for you. For me it's like oh my God, I just did it. I forgot to post on Twitter again.
0:03:20 - Leo Laporte
Shoot. Don't post on Twitter. Skeet. You've got to skeet, man. Be a skeeter.
0:03:28 - Steve Gibson
What's that? That's Bluesky? Like I said, old school for me. Yes. All right, well, you can post on Twitter when I do the first ad, which is coming up, but first I'd like to know what we're going to be talking about today.
We're going to talk about, well, okay, I just gave this the title of the week, which was the most emailed thing that I saw, which is all of this huffing and puffing about a big, bad Bluetooth backdoor that was discovered and revealed by a pair of Spanish security researchers last week at the big annual global Spanish security conference. It's an interesting story and we're going to cover it. But we're also going to talk about Utah passing the first age verification requirement for app stores, and I'm going to spend a little time talking about age verification again. We have before, but boy is it a hot topic among our listeners. I get so much feedback from people who are mostly upset at the idea that they need to verify their age on the internet. My take is that this is as significant as cryptography and privacy, inasmuch as it's one of those problems created by the fact that cyberspace is different from physical space. So we're going to spend a little time on that.
Also, we've got a really interesting piece, the inside story on fake North Korean employees, written with the details provided by an individual who keeps having these North Koreans trying to get hired by his firm. Oh wow. And he says, you know, they really don't sound like they're from Texas. Anyway, we've got an update on the ongoing Bybit crypto heist saga. Several more pieces. I mean, for something that is this big, right, there's a lot of tendrils sort of oozing from it. Like, where did the crypto go? What has happened? The industry looks like it's going to actually respond in some interesting ways, in a larger, bigger way. Also, how did this happen? We know more now about the Safe Wallet guys and exactly what the exploit was that caught them, that then caused them to get infiltrated and allowed them to pass the attack forward. Also, Apple is pushing back against the order that never was in the UK, so we have a little bit of news about that. Also, did somebody crack passkeys?
0:06:36 - Leo Laporte
Something happened yeah.
0:06:40 - Steve Gibson
But we'll look at that. Also, the UK has launched a legal salvo at an innocent security researcher, just because they can. In addition, we have the old data breach, which we all witnessed, which just keeps on giving. Oh no. Many people will be glad they're no longer using that particular password manager. We also have some additional Bybit forensic news, a lesson to learn from a clever and effective ransomware attack, and then, finally, that Bluetooth backdoor discovery that everyone's talking about. So, I think, a lot of interesting stuff for this week's podcast, and a picture of the week that is difficult to believe. But it's not one of those that was blindly posted to the Internet and that people have sent to me. This was from a listener in the state of Minnesota who said he took this screenshot himself. He said, I took this screenshot and thought of you. Oh, I can't wait.
0:07:57 - Leo Laporte
I haven't looked at it yet. We'll do as we always do: I will scroll up, absorb it, and then you'll all have a chance to see our picture of the week. All that coming up on Security Now. It's going to be a great show. I know which password manager you're talking about, and in some ways I feel like we should apologize, because for so many years we told everybody to use it. You used it, I used it. We loved it. You had interviewed the guy who created it. But, as often happens, private equity got involved, yep, and the bottom line became more important than actual security.
0:08:33 - Steve Gibson
I vetted the technology and Joe had done everything right. The design was immaculate. So sad.
0:08:42 - Leo Laporte
So sad. Well, our sponsor for this segment is another company that is doing everything right: 1Password. Very happy to have 1Password on the show. I know you know them as a password manager, but they also have another product that extends what the password manager does to a lot more of what's going on in your business. It's called 1Password Extended Access Management.
Now, the question that I always ask is: do your end users always work on company-provided devices, right, devices that you've got locked down, and only use the apps that your IT department has vetted, has verified, has kept up to date? Of course they don't. We live in a BYOD, bring-your-own-device, universe. People bring in their phones, their laptops, and, by the way, they're running all sorts of weird software, much of it not patched: old browsers, old operating systems. So how do you keep your company data safe when it's sitting on all those unmanaged apps and devices? This is the answer 1Password has come up with, and it's great. It's called Extended Access Management. 1Password Extended Access Management helps you secure every sign-in for every app on every device. I mean, 1Password is the king of that, right? It solves the problems traditional IAM and MDM can't touch. Let me say that again: not just every sign-in, but every app, every device.
Imagine your company's security like the quad of a college campus. Let's put this in a metaphorical way. You can probably picture it even without closing your eyes (and don't close them if you're driving): the ivy-covered brick buildings and, of course, a beautiful quad with a perfect lawn and little brick paths winding their way through it. It all looks so nice and pretty. That's the company-owned devices, the IT-approved apps, the managed employee identities on your network. But no college quad stays that way for very long, because the students are going to wear paths, shortcuts through the grass, the shortest distances from, you know, Econ 101 to the English department. And those muddy paths, those are the unmanaged devices, the shadow IT apps, the non-employee identities like contractors on your network.
The problem is that most security tools just assume, oh yes, it's all those perfect brick paths, that's all we have to worry about. And all the security problems? They happen, or often do anyway, on the muddy little shortcuts that are inevitably going to be everywhere in your network. So 1Password Extended Access Management is the first security solution that takes those unmanaged devices, those shadow IT apps, those identities on your network, and puts them under your control. It ensures that every user credential is strong and protected, every device is known and healthy, and every app is visible. 1Password is ISO 27001 certified, with regular third-party audits, so it exceeds the standards set by various authorities. It's a leader in security, and Extended Access Management is security for the way we really work today, not some imaginary universe; it's for the real world. It's now generally available to companies with Okta and Microsoft Entra, and it's in beta for Google Workspace customers.
I think you should find out more about this. Secure every app, every device, every identity, even and especially the unmanaged ones, at 1password.com/securitynow. That's all lowercase: 1-P-A-S-S-W-O-R-D dot com slash securitynow. Security for the way we really work today: 1Password Extended Access Management, 1password.com/securitynow. We thank them so much for supporting the vital work Steve is doing here. Especially, and this is the most vital of all, the picture of the week. I like it that you start with the comedy. You always end with the big one.
0:13:01 - Steve Gibson
Sometimes the somber. Yes, we end on a somber note, like, well, good luck.
0:13:07 - Leo Laporte
What could possibly go wrong?
0:13:10 - Steve Gibson
Actually, sometimes these pictures... What's good? So tell me about this picture. Okay, this was actually what a listener of ours found when he went to the State of Minnesota's site. Oh my God. He found it today. This is appalling. It's unbelievable. The caption I gave it was, what year is this? And I said it seems we still have a ways to go. So this is the login page for the State of Minnesota's unemployment insurance agency.
There, he's tried to put in what looks like a reasonable-length password. If you count, well, we know that the dots it shows when you're blanking a password don't always correspond to the length of the password; that's for additional security, right. But we're seeing one, two, three, four, five, six, seven, eight, nine, ten, maybe about 16 to 20 dots. It shows an X on the right side of that attempt, and then the page is updated saying validation errors, and we then have an enumeration of what's wrong with this password. Password must not be more than six characters. And, as if that wasn't bad enough, password must not contain any special characters. What?
0:14:49 - Leo Laporte
So it's six alphabetic characters. Yeah, numeric presumably. Oh, maybe alphanumeric.
0:14:52 - Steve Gibson
Yeah, so you have an alphabet of what, 26 letters, and then 10 more.
0:14:57 - Leo Laporte
Yes, yeah, 62. Or no... oh yeah, lower and uppercase, although I bet they don't care about case if they're doing this.
0:15:05 - Steve Gibson
Oh my God. And then it's a little confusing, because the standard guidance here, underneath the validation error screen, is: password must be at least six characters.
0:15:21 - Leo Laporte
But it cannot be more than six characters.
0:15:23 - Steve Gibson
Password must not be more than six characters.
0:15:26 - Leo Laporte
So it's exactly six characters.
0:15:27 - Steve Gibson
Must be exactly six characters. I mean, Leo, if this didn't come from a listener who said, Steve, I had to share this with you, and took a screenshot for me, I wouldn't believe it. And that's today, in 2025.
0:15:45 - Leo Laporte
Well, it also tells you that they aren't hashing the passwords, right? Because the length wouldn't matter if they were hashing them.
0:15:51 - Steve Gibson
One would hope. I mean, again, they're telling you, first of all, that it must be at least six characters and must not be more than six characters. It actually says that on two successive lines.
0:16:05 - Leo Laporte
Yeah.
0:16:06 - Steve Gibson
You could simplify that by saying, obviously, password must be exactly six characters, but that would seem a little too extreme. So they're going to make it a little more mysterious, apparently, by saying must be at least six characters, must not be more than six characters. Do the math.
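To put numbers on that "do the math" remark, and on Leo's earlier point that a hard length cap suggests the passwords aren't being hashed, here's a quick back-of-the-envelope sketch in Python. The allowed character classes are an assumption (the Minnesota page doesn't spell them out); the point is only the relative sizes, and the fact that a hash digest has a fixed length no matter how long the password is.

```python
import hashlib
import string

# Keyspace for "exactly six characters, no special characters", assuming
# (as discussed) that letters and digits are what's allowed.
mixed_case = len(string.ascii_letters) + len(string.digits)    # 62 symbols
case_blind = len(string.ascii_lowercase) + len(string.digits)  # 36 symbols

print(f"62^6  = {mixed_case ** 6:,} possible passwords")        # ~5.7e10
print(f"36^6  = {case_blind ** 6:,} if case is ignored")        # ~2.2e9
print(f"95^16 = {95 ** 16:.2e} for a 16-char printable-ASCII password")

# Leo's hashing point: a digest is the same fixed size no matter how long
# the input is, so a length cap hints the password itself is being stored
# or processed, not a hash of it. (Real systems should use a slow, salted
# KDF such as bcrypt or Argon2, not bare SHA-256; this is just the idea.)
for pw in ("abc123", "correct horse battery staple and then some"):
    digest = hashlib.sha256(pw.encode()).hexdigest()
    print(f"{len(pw):2d}-char password -> {len(digest)}-char SHA-256 hex digest")
```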
0:16:24 - Leo Laporte
Have you... did you ever play that guy's password game? Do you know what I'm talking about? You mean Mastermind? No, no, no, there's a fun little game making fun of this whole rules thing, where it tells you the rules and you have to adjust the password as you go. So let's say monkey123. And then it tells you, nope, you've got to have an uppercase character. Okay, so let's put in an uppercase character. And now it says it has to include a special character, so let me add a special character. The digits must add up to 25. One, two, three, that's six, so that means I need to put in a nine, and then another nine and a one. Oh, there we go. Your password must include a month of the year. Oh well, let's fix that. There we go. It must include one of our sponsors. Okay, and it goes from there. There are actually 36 rules. It gets harder and harder. It's hysterical. This is at neal.fun. It's a great kind of take on what we just saw here, which is absurd password rules.
0:17:33 - Steve Gibson
Yeah. So you know, we've wondered how it is, Leo, that states keep getting themselves infected with malware and being hit with ransomware. But when you see a page like this, which says, you know, you're limited to six-character passwords, and we don't know about special characters, it's like, really, what explains this? And this is the unemployment insurance site which, you know, it'd be nice to have some security there. Anyway, wow. Okay. Listeners of this podcast know how I feel about age verification.
In the same way that we need to make peace with the thorny issues surrounding the abuse of the absolute privacy offered by modern encryption, I believe we must also squarely address the problem of verifying someone's biological age in cyberspace, even if that means deciding not to.
That is, I'm not saying we have to know; I'm saying this is an issue that we need to stop punting on, because so far we have. Unfortunately, having given this issue a great deal of thought, this feels to me like another of those thorny and intractable problems. But okay, let's explore this a bit, and I want to do that because, boy, is this of interest to our listeners. So I'm old enough, and I think you are too, Leo, to be able to collect Social Security. I am, yes. So I have the legal right, as do you, to sit at my desktop PC and do anything and go anywhere someone my age can legally do and go, which is pretty much anywhere and anything. Yeah. But I also want my privacy preserved while I'm wandering around. Now, I understand, Leo, you've pretty much given up that battle.
0:19:34 - Leo Laporte
Yeah, I don't care anymore. A lot of our listeners.
0:19:36 - Steve Gibson
You know, our listeners are like, no, no, no.
0:19:41 - Leo Laporte
I don't even recommend it. Everybody should care about privacy. I just don't get to, because I spend so many hours a day on the air, and I have no filter, and everybody knows everything about me.
0:19:50 - Steve Gibson
So I voluntarily surrender. And, Leo, your email address? Come on.
0:19:54 - Leo Laporte
And I just gave out my email address. So that just tells you right there. I gave up a long time ago, but I don't recommend it. That's just you know.
0:20:01 - Steve Gibson
Okay, okay, good. In the interest of preserving as much privacy as possible, and only disclosing the bare minimum necessary, and only when necessary, I would argue there is never any need to share an exact date of birth. After all, none of the proposed legislation anywhere says we need to know your birthday. They just want to know how many years you've been around. Rounded to an integer, the number of years completed, that should be sufficient. Okay, but I also don't like the idea of having my age sprayed indiscriminately everywhere I go. It should be on an as-needed basis. I'm just sort of talking about a theoretical framework here, like, if we were going to try to solve this problem, what would that solution look like? If I go to a website that has a reasonable need to verify my age, and if I agree with its need and elect to provide that, then I should have the option of in some way releasing my integer age to that site one time, and that one time only.
Now, another consideration is that age restrictions vary by region, right? In the United States we do not yet have any uniformity across our individual and independent state legislations; they all just kind of make crap up as they go. And internationally, restrictions often vary by country. So it would likely be necessary to be able to assert our country and state of residence as part of this voluntary age and jurisdiction disclosure. Right, because it matters where we are. This state says you have to be this old; that state says, oh no, you can drink when you're 12, whatever. For someone who doesn't care, I'm Gen Z, I don't care, this theoretical age verifier could be left unlocked, with any querying website being informed of such a user's age and jurisdiction on the fly. I, though, would both lock and password- or PIN-protect this feature, so that something I know would need to be provided any time I wish to assert my age in cyberspace. So if something like this were to happen, this would be another internet specification for the W3C, the World Wide Web Consortium, to design and standardize, and it would be implemented in and dispensed by our web browsers, the way they do all this other stuff for us already. Once this was standardized, any website that was legally obligated to verify its visitors' ages, or actually any website that wanted to know, because after all they could ask, we don't have to tell them, rather than presenting that ridiculous yes-I'm-at-least-16-years-or-older, or 18 years or whatever it is, button, that site would return an HTTP reply header when displaying the site's initial homepage.
In the Gen Z-er case, where their browser was set to permanently disclose, permanently unlocked, their browser would return a query making the proper assertion, and the site's content would automatically be available if they qualified. But in the typical case, where a web user wants to exercise some control over the disclosure of this information, the receipt of this reply header would cause the user's browser to display its own uniform pop-up prompt saying that the site being visited requires the user to verify their age and location, or maybe it is requesting that information as opposed to requiring it. That pop-up would contain a button labeled something like please verify my age and send my location to this website. If the user agreed, the browser would generate a query containing this information and the website would open its doors. Now, in that regard, the model would be very much like the cookie pop-ups that we're all now plagued with, but it would be implemented by the browser, not by the website. So that's where the uniformity in its display would come from, and it would be displayed in the center of the screen, and only when sites required verification.
Now, of course, by this time everyone is thinking, yeah, okay, fine, but how can any user's web browser possibly know their date of birth and location in any way that cannot be spoofed at will? And, of course, everyone's thinking is 100% correct. That's the big problem, and it's not a problem that can be sidestepped, since it's the essential problem. But I wanted to first lay out the rest of this required framework to show that, if that essential problem could be solved, it could be the basis for a workable solution.
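Nothing like this exists as a standard today, so purely as a thought experiment, here is a minimal Python sketch of the exchange Steve describes. Every header name and field in it is hypothetical, invented for illustration; this is not a W3C specification or any browser's shipping behavior.

```python
from datetime import date

# --- Hypothetical names, invented purely for this sketch -------------------
REQUEST_HEADER = "Age-Verification"   # site -> browser: "required" or "requested"
ASSERT_HEADER  = "Age-Assertion"      # browser -> site, only after the user consents

# The browser holds the date of birth and jurisdiction locally; it never
# discloses the birth date itself, only the completed years.
USER_PROFILE = {"dob": date(1960, 5, 1), "country": "US", "region": "CA"}

def completed_years(dob, today=None):
    """Integer age: whole years completed, never the birth date itself."""
    today = today or date.today()
    return today.year - dob.year - ((today.month, today.day) < (dob.month, dob.day))

def browser_handles_reply(reply_headers, user_consents):
    """If the site asks for verification and the user clicks the browser's
    own 'verify my age' prompt, emit a minimal assertion; otherwise nothing."""
    if REQUEST_HEADER not in reply_headers or not user_consents:
        return {}
    return {ASSERT_HEADER: (f"age={completed_years(USER_PROFILE['dob'])};"
                            f"country={USER_PROFILE['country']};"
                            f"region={USER_PROFILE['region']}")}

def site_gate(request_headers, minimum_age_by_region):
    """Site side: parse the assertion and compare against the local rule."""
    raw = request_headers.get(ASSERT_HEADER)
    if raw is None:
        return False                    # no assertion, no age-gated content
    fields = dict(item.split("=") for item in raw.split(";"))
    threshold = minimum_age_by_region.get((fields["country"], fields["region"]), 18)
    return int(fields["age"]) >= threshold

# Simulated round trip: the site requires verification and the user consents.
reply = {REQUEST_HEADER: "required"}
assertion = browser_handles_reply(reply, user_consents=True)
print(site_gate(assertion, {("US", "CA"): 18, ("US", "UT"): 18}))   # True
```

And it illustrates the catch Steve just named: the gate only works as long as the locally stored date of birth is honest, which is exactly the part no header format can guarantee.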
Okay, so now let's switch to the news from last week which triggered this re-exploration; it received wide coverage. The Verge's headline was "Utah becomes the first state to pass an App Store age verification bill," and they followed that with a note that Meta, Snap and X are applauding this. So the Verge wrote: Utah became the first state in the country to pass legislation requiring app store operators, and you know that's Apple and Google, right, to verify users' ages and require parental consent for minors to download apps. The App Store Accountability Act, as it's named, is the latest kids' online safety bill to head to the governor's desk, as states across the country and the federal legislature have tried to impose a variety of design regulations and age-gating requirements to protect minors from online harms. Much of the legislation that has advanced through the states has been blocked in the courts, and the leading bill in Congress failed to pass last year amid concerns that it could limit free expression on the internet. Right, our First Amendment is what always gets marched out in order to say, no, no, no, you can't do any filtering.
Other social media sites have been pushing in recent months as legislatures consider a variety of bills that could impose more liability for kids' safety across the tech industry. Apple reportedly lobbied against a Louisiana bill that would have required it to help enforce age restrictions. You know, Apple doesn't want any involvement in this if they can possibly avoid it, but recently it voluntarily opted to let parents share their kids' age ranges with apps, and we talked about the first phase of that; we're about to talk about the update to that. Meta spokesperson Jamie Radice called that, quote, a positive first step at the time, but noted that developers can only apply these age-appropriate protections with a teen's approval. After Utah passed its age verification bill, Meta, Snap and X applauded the move in a joint statement and urged Congress to follow suit, meaning, let's make this go national, saying, quote: parents want a one-stop shop to verify their child's age and grant permission for them to download apps in a privacy-preserving way. They said the app store is the best place for it, and I disagree with that, but okay, we'll get there.
0:29:06 - Leo Laporte
Parents are going to be very surprised when they ask for the parents' age as well, by the way. But okay, go ahead, continue.
0:29:12 - Steve Gibson
Precisely, because, right, you have to verify everybody. The app store is the best place for it, and more than a quarter of states have introduced bills recognizing the central role app stores play. Apple spokesperson Peter Ajemian pointed to a white paper the company released last month which emphasizes the importance of minimizing the amount of sensitive data collected on users. Google, which runs the Play Store on Android, did not immediately provide comment on the bill. But others, including the Chamber of Progress, which counts Meta's European arm as well as Apple and Google among its corporate backers, warn that the bill could put all users' privacy and rights at risk.
Again, this is why I think this is like a big deal. This is one of those sticky wickets that cyberspace brings with it that, up to now, we've just kind of not wanted to deal with.
0:30:28 - Leo Laporte
But you know what you're seeing: this is all something that Meta wanted. The social media companies didn't want to do the age verification themselves. So they lobbied hard and, of course, probably brought big black bags of cash to members of Congress and the state assemblies, saying, well, really, the app stores should be responsible, they're the central authority. Well, there's not a...
0:30:50 - Steve Gibson
Yeah, and I actually do agree with the notion. I think it should go deeper than that. I think it should be on the platform, because then everybody gets it.
0:30:59 - Leo Laporte
And you could keep it locally. If it were just the phone right, the phone could just say yes or no.
0:31:04 - Steve Gibson
Yes, the phone. Well, in fact, we will be talking about how Apple has finally capitulated with an API, which is what I've been talking about us needing for quite a while.
Verification requirements, like those in SB 142, chill access to protected speech for everyone and are therefore inconsistent with the First Amendment. And again, yes, this is a problem, right? I mean, this is not a small thing. This doesn't have an easy answer. It's clear that legislation to restrict access to the Internet runs up against this notion of unrestricted free speech, because we're talking about restricting, based on age, some people's access. But we actually do that now, right? Just not in cyberspace. So this Chamber of Progress, they have a legal advocacy counsel, Carrie Maeve Sheehan, who wrote in a blog post:
SCOTUS is set to weigh in on age verification this year, but in a case that deals specifically with its application to accessing porn sites. Okay, better that than nothing, I'd say; this is the way, maybe we're going to have to chip away at this in order to get where we need to go. Quote: as privacy experts have explained, strict age verification, confirming a user's age without requiring additional personally identifiable information, is not technically feasible in a manner that respects users' rights, privacy and security. And that, of course, gets back to the point I was making earlier. That is, yes, we can invent a framework and a system for doing this, but that last piece is the problem: how do we do it in a way that cannot be easily bypassed and spoofed? So once again we have political legislators imagining that they're able to dictate the way reality should operate, much as they've been wanting to with encryption. We want everything encrypted, except we need to be able to see things. What? What?
But Apple, apparently cognizant of the direction things are going, last month, in February, published a short, eight-page document titled Helping Protect Kids Online. I have a link to it in the show notes for anyone who wants to see it, but I'm going to cover it here. It appears that Apple is grudgingly moving in the direction they need to go, which is to allow their platform to be used as an age verifier, much as they would clearly rather not. Their Helping Protect Kids Online document addressed this under the topic Making it Easier to Set Up and Manage Accounts for Kids. Apple wrote:
For years, Apple has supported specialized Apple accounts for kids, called child accounts, that enable parents to manage the many parental controls we offer and help provide an age-appropriate experience for children under the age of 13. These accounts are the bedrock of all the child safety tools we offer today. To help more parents take advantage of child accounts and parental controls, we're making two important changes. First, we're introducing a new setup process that will streamline the steps parents need to take to set up a child account for a kid in their family. And they keep using the word kid. I guess that's okay, but it just strikes me as odd.
0:35:08 - Leo Laporte
Kids. My folks always said, you say children, not kids. Yeah, exactly, kids are baby goats. It seems too informal to me, but okay, it's marketing material, that's why. It's not really security material.
0:35:19 - Steve Gibson
And if parents prefer to wait until later to finish setting up a child account, and this was very interesting to me, child-appropriate default settings will still be enabled on the device. So even if a parent just sort of flips a switch to say, yeah, we want a child account, it defaults to safe. So they said, this way a child can immediately begin to use their iPhone or iPad safely, and parents can be assured that child safety features will be active in the meantime. That's because they know parents will do the least possible. Exactly.
0:35:58 - Leo Laporte
Okay, here you go, get out of my hair. Yeah, so it fails into a safe state, which it should.
0:36:03 - Steve Gibson
That's right, it absolutely should. This means even more kids, they wrote, will end up using devices configured to maximize child safety, with parental controls. Second, starting later this year, and this is annoying to me, this thing is full of coming soon and later this year. It's like, what's the problem here? Just do it. How hard could it be? Yeah, I know you've had endless hand-wringing meetings in your ivory tower up there in your golden donut, so get it done. Anyway, starting later this year, parents will be able to easily correct the age that is associated with their kid's account if they previously did not set it up correctly. What? Okay, now, this is one of my hobby horses. Why set the age? Just set the date of birth. Do it once and, Leo... Oh, that's a good point, it automatically updates.
0:37:06 - Leo Laporte
It's a good point. It automatically updates. It's a miracle. It's amazing.
0:37:10 - Steve Gibson
It's as if you had a computer that was able to do division. Well, never mind. I don't understand.
And then Apple, and this is the big revelation: once they do, parents of kids under 13 will be prompted to connect their kid's account to their family group. If they're not already connected, the account will be converted to a child account, and parents will be able to utilize Apple's parental control options, with Apple's default age-appropriate settings applied as a backstop. Okay, so, for example, it could default to underage, right, and then what the parent does is insert their child's date of birth instead of their child's age, because that's not, as they would say, going to age well. But date of birth? It's automatic. It's a miracle.
Anyway, under the topic, then, of a new privacy-protective way for parents to share their kids' age range. Age range, Apple said. Again, because, you know, Leo, this is good, this is going to have to be thoroughly vetted. We have to make sure that the slide switches are the right size. Later this year...
0:38:41 - Leo Laporte
But it's just later this year.
0:38:43 - Steve Gibson
Wait for it, it's coming. Yes, Apple will be giving parents a new way to provide developers with information about the age range of their kids. Age range. We're not giving them... we're not going to... really, you're just going to have to pry this information from us.
0:39:00 - Leo Laporte
It could be under 13, over 13. That's sufficient, right?
0:39:03 - Steve Gibson
Yeah. Enabling parents to help developers deliver an age-appropriate experience in their apps while protecting kids' privacy, they said. Through this new feature, coming soon, parents can allow their kids, it says kids, to share the age range associated with their child accounts with app developers. It's a miracle. If they do, developers will be able to utilize a declared age range API to request this information from the platform, which can serve as an additional resource to provide age-appropriate content for their users.
How long do you think it took them to come up with this, Leo? As with everything we do, the feature will be designed around privacy, and users will be in control of their data. The age range will be shared with developers if, and only if, parents decide to allow this information to be shared, and they can also disable sharing if they change their mind. That's got to be another slide switch. Probably it's bigger.
I changed my mind. That's right, red. And it won't provide kids' actual birth dates. Wow, what a concept. As I've noted before, a declared age range API is exactly the right solution. Kids use specific iPhones and iPads, and Apple will even have a default in the direction of enforcing safe content. So it makes sense for the device's platform to know the age of its user, and for that platform to be able to disclose that information with proper controls. I still think it makes the most sense, as I've said, for parents to set their child's date of birth internally, and, as I've noted, they would be free to fudge it either way, depending upon their individual child's emotional maturity and the level of protection they feel most comfortable enforcing. Exactly right.
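Apple hadn't published developer documentation for this at the time, so what follows is a speculative Python model of the flow the white paper describes, not Apple's actual API. The class names, the buckets, and the consent switch are all invented for illustration; the real thing may look nothing like this.

```python
from typing import Optional

# Hypothetical model of the declared age range idea: the platform knows the
# child account's range, the parent controls whether it's shared, and the app
# only ever sees a coarse bucket, never a birth date. All names and buckets
# here are placeholders, not Apple's actual ranges.
AGE_RANGES = ["under_13", "13_15", "16_17", "18_plus"]

class Platform:
    """Stands in for the OS: holds the declared range and the parental switch."""
    def __init__(self, declared_range: str, parent_allows_sharing: bool):
        assert declared_range in AGE_RANGES
        self.declared_range = declared_range
        self.parent_allows_sharing = parent_allows_sharing

    def request_declared_age_range(self) -> Optional[str]:
        # The app gets the range only if the parent has opted in;
        # otherwise it learns nothing at all.
        return self.declared_range if self.parent_allows_sharing else None

class App:
    """An app that gates a feature to users 16 and older."""
    ALLOWED = {"16_17", "18_plus"}

    def show_restricted_feature(self, platform: Platform) -> bool:
        age_range = platform.request_declared_age_range()
        if age_range is None:
            return False     # sharing declined: fall back to the safest experience
        return age_range in self.ALLOWED

app = App()
print(app.show_restricted_feature(Platform("13_15", True)))     # False: too young
print(app.show_restricted_feature(Platform("16_17", True)))     # True
print(app.show_restricted_feature(Platform("18_plus", False)))  # False: parent opted out
```

The design point this captures is that when the parent declines to share, the app's fallback is the most restrictive experience rather than an error or a demand for documents.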
0:41:26 - Leo Laporte
This is such a good solution. You know what? This is what should happen, and I think it's only happening now because it's either this or the App Store and they don't want to do that Right.
0:41:37 - Steve Gibson
And, of course, the App Store can get this from the platform. Right, so it's a win-win. So then, if Apple insists upon calculating the user's age within large privacy-protecting ranges, while that seems unnecessarily restrictive to me, fine. Apple has already needed to amend those dumb ranges because, well, they're dumb. Okay, they had to add another one. But okay, if that's what they want to do, then they could do that, and it does serve to give some additional impression of increased privacy. So in this document, Apple explained their thoughts about all this under the heading Age Assurance: Striking the Right Balance Between Platforms and Developers to Best Serve the Needs of Our Users. They said: at Apple, we believe in data minimization (we know that, God), collecting and using only the minimum amount of data required to deliver what you need.
This is especially important for the issue of age assurance, which covers a variety of methods that establish a user's age with some level of confidence. Some apps may find it appropriate, or even legally required, to use age verification, which confirms a user's age with a high level of certainty, often through collecting a user's sensitive personal information, like a government-issued ID, to keep kids away from inappropriate content. But most apps don't. That's why the right place to address the dangers of age-restricted content online is the limited set of websites and apps that host that kind of content. After all, we ask merchants who sell alcohol in a mall to verify a buyer's age by checking IDs. We don't ask everyone to turn their date of birth over to the mall if they just want to go to the food court. There you go. Good analogy. Yeah.
Requiring age verification at the app marketplace level (here's their point) is not data minimization. While only a fraction of apps on the App Store may require age verification, all users would have to hand over their sensitive, personally identifying information to us, regardless of whether they actually want to use one of this limited set of apps. That means giving us data like a driver's license, passport or national identification number, such as a Social Security number, even if we don't need it. And because many kids in the US don't have government-issued IDs, parents in the US will have to provide even more sensitive documentation just to allow their child to access apps meant for children. That's not in the interest of user safety or privacy. Requiring users to overshare their sensitive personal data would also undermine the vibrant online ecosystem that benefits developers and users. Many users might resort to less safe alternatives, like the unrestricted web, or simply opt out of the ecosystem entirely, because they can't or won't provide app marketplaces like the App Store with sensitive information just to access apps that are appropriate for all ages.
The Declared Age Range API is a narrowly tailored, data-minimizing, privacy-protecting tool to assist app developers, who can benefit from it, allowing everyone to play their appropriate part in this ecosystem. It gives kids the ability to share their confirmed age range with developers, but only with the approval of their parents. This protects privacy by keeping parents in control of their kids' sensitive personal information, while minimizing the amount of information that's shared with third parties, and the limited subset of developers who actually need to collect a government-issued ID or other additionally sensitive personal information from users in order to meet their age verification obligations can still do so too. All in all, it gives developers a helpful addition to the set of resources that they can choose from, including other third-party tools, to fulfill their responsibility to deliver age-appropriate experiences in their apps. With this new feature, parents will even more firmly be in the driver's seat, and developers will have a way to help identify and keep kids safe in their apps.
So, anyway, Apple is going to need to get, you know, over themselves to some degree, I think, and accept that they're going to have to have some way of filtering the content that children are able to see. I think this does that. You know, they do still have this notion of dividing ages up into segments. I think they have five of them now. I have it here somewhere in my notes.
0:47:29 - Leo Laporte
I'm not seeing it right now and maybe that's why they don't do birth dates, because they don't want to even know that much.
0:47:34 - Steve Gibson
They just... I think you're right. I think you're right. In the same way that they don't want to have the decryption keys for their Advanced Data Protection, they don't want their phone to know that about the person. I think you're exactly right, that explains it. It was a mystery to me. It's like, it seems so obvious. But you're right, they don't want anyone...
0:47:54 - Leo Laporte
They don't even want that. They want the vaguest thing they can get away with, and that would be age range. That makes sense. Yep, it really does. Yeah, and this is a great solution. Frankly, I don't know why they didn't do this right away. This is perfect.
0:48:10 - Steve Gibson
Yeah, and they can simply not show, in the App Store, the apps which require an age that the viewer doesn't qualify for. They're just not there for those viewers. They shouldn't see them, they can't get them anyway, so just don't show them. It just doesn't show up on the phone.
0:48:30 - Leo Laporte
Yeah. And the other thing I like about it: it gives parents the ultimate authority, because only the parent knows what a kid is mature enough to do or not. And if the kid is a 12-year-old but has the maturity of a 16-year-old, the parents can say that he's 16, and Apple doesn't get involved. Nobody gets involved. The parent is the right person to decide and give permission. And if there's a parent who, you know, doesn't care, let's hope they care enough to set a button that says it's a kid's phone, and that would do the closest thing to the right thing. Right? I like this. You know what, I suspect this sounds like something Apple came up with but hadn't implemented at all, and they know it's going to take some time.
0:49:14 - Steve Gibson
They just wanted to push back as long and as hard as they could. Right. And now it's like, okay, fine, if we're going to start having legislation, then guess what, here's the solution we propose.
0:49:25 - Leo Laporte
It's probably an iOS 19 feature, and that's what they're saying in the company: it's like, September, we'll have it for you. I hope so. Anyway, because this would completely short-circuit the whole thing. It would make it doable.
0:49:39 - Steve Gibson
Now, what this does, though, is solve the problem here. It does not solve the problem for Pornhub where, if Congress weighs in, or if the Supreme Court weighs in, and says, you know, you must absolutely protect minors from having access to this content, then the only way to do that is for people who do want access to lose their anonymity. Well, you know, the State of Texas did in fact create a law, which a federal judge put on hold, and the Supreme Court heard arguments on it last month and will decide upon it, and one hopes that the Supreme Court decides in favor of the First Amendment.
0:50:21 - Leo Laporte
That's what that federal judge, the district judge in Texas, said: that this violates the First Amendment. So there are, I think, at least 20 states that have these porn laws, and what happens? What Pornhub has done is just withdraw from the state, right? But that just means there are plenty of other porn sites, or use a VPN; I mean, there's all sorts of ways around it. Your phone solution is actually much more bulletproof, and you could say that the phone has to say you're over 21. I mean, you could say that. Yeah. I mean, an 18-year-old probably has their own phone, maybe even a 16-year-old, without parental supervision. But at that point they should be able to do whatever; if the parents aren't going to get involved, then they could be able to do what they want. Right. Yeah, I mean, it's a great solution.
0:51:12 - Steve Gibson
We're looking at this because we're in cyberspace and this is something that we've been just sort of like not wanting to deal with so far, and I think that we're finally facing the fact that we've got to answer some of these hard questions. Yeah, yeah.
0:51:34 - Leo Laporte
I agree. We have an easy question, Leo, which is: who's the next sponsor? That's the one I was thinking of. I know you so well. All right, let's take a little break. We have lots more to talk about.
0:51:41 - Steve Gibson
Steve's coming back in just a second. Oh, we've got a North Korean job interview. What are you talking...
0:51:47 - Leo Laporte
about? North Korea? I'm from Lubbock. We'll talk about that in just a little bit. Steve Gibson, you're watching Security Now. So glad you're here.
Our sponsor for this segment of Security Now is a little company that I've gotten to know pretty well over the last few months: US Cloud. When I first heard about them, I said, let's get on the horn with these guys, because what do they do? Are they a cloud company? I didn't know from the name. Well, I got schooled. They are the number one Microsoft Unified Support replacement, and we've been talking about them ever since. US Cloud is the global leader in third-party Microsoft support for enterprises. They support 50 of the Fortune 500, and people switch to US Cloud because they get better support, they get faster support, and they pay 30 to 50 percent less than they would pay Microsoft for Unified or Premier support. Let me say that again: switching to US Cloud saves your business 30 to 50 percent, and you're not settling for less, you're getting more. It's faster, twice as fast in average time to resolution as Microsoft, and they do things Microsoft is never going to do.
For instance, how's your Azure spend? Do you know? I mean, you would know what the bill is, but do you know what you're getting for that? Do you have Azure that you bought maybe months or years ago, and you don't really know what it's doing? Well, now you can find out.
US Cloud has a great new offering, something I know Microsoft will never do: an Azure cost optimization service. If you've been using Azure for a little while, you undoubtedly have what we call Azure sprawl, you know, spend creep, right? It's easy and it's tempting, but you can save money, and that's the great thing. US Cloud isn't there to make money on your Azure spend; they're there to save you money, and it's easier than you think.
US Cloud offers an eight-week Azure engagement, powered by VBox, that identifies key opportunities to reduce costs across your entire Azure environment. And, by the way, during this you're going to get access to expert guidance from US Cloud's senior engineers. These guys have an average of over 16 years with Microsoft products, doing break-fix, doing the stuff that you really need them to do. They know Microsoft inside and out, often better than Microsoft does. Now, at the end of this eight-week Azure engagement, you're going to get an interactive dashboard that will identify rebuild and downscale opportunities and unused resources. You can reallocate those precious IT dollars (there's never enough, right?) toward needed resources. Or you could do what a lot of US Cloud's other customers have done: increase the savings by getting off Microsoft support and moving to US Cloud. Ultimately, you can eliminate your Unified spend and save even more.
Here's a review we got from Sam, who is operations manager at Bead Gaming. He gave US Cloud five stars. He said, and I quote: we found some things that have been running for three years which no one was checking. These VMs were, I don't know, 10 grand a month. Not a massive chunk in the grand scheme of how much we spent on Azure, but once you get to $40,000 or $50,000 a month, it really starts to add up. Yeah, it does, doesn't it? This is stuff Microsoft's not going to tell you, right? They don't want you to spend less, but US Cloud's on your side. It's simple: stop overpaying for Azure. Identify and eliminate Azure creep and boost your performance, all in eight weeks, with US Cloud.
Visit uscloud.com. There are so many reasons why this is the right choice. Book a call today. Find out how much your team can save. uscloud.com: better, faster Microsoft support for less. I mean, this is great. Visit uscloud.com to find out more. Make sure, if they ask you, you say, oh yeah, I heard about it on Security Now, that Steve Gibson fella, he's a good guy, just because that helps us. uscloud.com. Book a call today. I think you'll be impressed. All right, Steve, your rest is over. Back to work.
0:56:19 - Steve Gibson
Okay. So thanks to a listener of ours, I was made aware of one employer's experience with North Koreans faking their identities for the purpose of obtaining employment in the US. As we'll see at one point toward the end of his description, Roger Grimes, whose security industry work we've covered before, says, I've now spoken with many dozens of other employers who have either almost hired a North Korean fake employee or hired them. It is not rare. So here's what Roger himself experienced. He said: you would think, with all the global press we've received because of our public announcement of how we mistakenly hired a North Korean fake employee in July of 2024, followed by our multiple public presentations and a white paper on the subject, that the North Korean fake employees would avoid applying for jobs at KnowBe4. You would be wrong. It is apparently not in their workflow to look up the company they're trying to fool (oh, how funny) along with the words North Korea fake employees before they apply for jobs. We get North Korean fake employees applying for our remote programmer/developer jobs all the time. Wow. Sometimes they're the bulk of the applications we receive. This is not unusual these days. This is the same with many companies and recruiter agencies I talk with. If you are hiring remote-only programmers, pay attention a little bit more than you usually would.
North Korea has thousands of North Korean employees deployed in a nation-state-level industrial scheme to get North Koreans hired in foreign countries to collect paychecks until they're discovered and fired. Note that, due to UN sanctions, it is illegal to knowingly hire a North Korean employee throughout much of the world. To accomplish this scheme, North Korean citizens apply for remote-only programming jobs offered by companies around the world. The North Koreans apply using all the normal job-seeking sites and tools that a regular applicant would avail themselves of, such as the company's own job-hiring website and dedicated job sites like Indeed.com. The North Koreans work as part of larger teams, often consisting of dozens to over a hundred fake applicants. They're usually located in countries outside of North Korea that are friendly to North Koreans, such as China, Russia and Malaysia. This is because North Korea does not have good enough infrastructure, in other words Internet and electricity, to best sustain the program, and it is easy for adversarial countries to detect and block North Korean Internet traffic.
The North Korean fake employees work in teams with a controlling manager. They often live in dormitory-style housing, eat together and work in very controlled conditions. They do not have much individual freedom. Their families back home are used as hostages to keep the North Korean applicants in line and working. Basically they're slaves. That's so awful. They get jobs and earn paychecks, but the bulk of the earnings is sent back to North Korea's government, often to fund sanctioned weapons of mass destruction work.
The scheme is much like an assembly line workflow. The North Korean fake employee and their helpers apply for the job interview, supply identity documents, get the job, get the related company equipment and collect a paycheck. The North Korean applicant may do all the steps in this process or farm it off to other participants, depending upon the language skills of the applicant and the requirements of the job application process. They will often use made-up synthetic identities, use stolen identity credentials of real people in the targeted country or actually pay real people of Asian ancestry who live in the target country to participate. It turns out there's a burgeoning sub-industry of college-aged males of Asian ancestry who cannot wait to get paid for participating in these schemes. There are Discord channels all around the world just for this. They make a few hundred to a few thousand dollars for allowing their identity to be misused or participating in the scheme. That way they can interview in person or take drug tests if the job requires that. Wow, so they're like subcontractors of this North Korean scheme. Sometimes the North Korean instigator does all the steps of the application process, sometimes they just get the job interview and hand it off to others with better language skills for the interview. And sometimes they hand off the job to someone who can actually do the job and collect a kickback percentage.
How the North Korean fake employee accomplishes the hiring and job process runs the spectrum of possibilities. We have seen it all. If they actually win the job, they will have another participant in the targeted country pick up the computing equipment sent by the employer and set it up. They're known as laptop farmers. These laptop farmers have rooms full of computing equipment sitting on tables, marked with an identifier of what computer belongs to what company, to keep them straight. They power on the laptops and give the fake North Korean employees remote access to the laptops.
Using this scheme, North Korea has illegally earned (he has earned in air quotes) hundreds of millions of dollars over the last few years to fund its illegal weapons programs. There have been North Korean fake employee part-time contractors for over a decade, but the fake full-time remote employees took off when COVID-19 created a ton more fully remote work-from-home jobs. There is far more money to be made. If your company offers high-paying remote-only programmer/developer jobs, you are likely receiving fake job applications from North Koreans. It is rampant. Hundreds to thousands of companies around the world likely have North Korean fake employees working for them right now. It is common. We regularly get applications from North Korean fake employees. We routinely reject most of them. Occasionally we accept a few and interview the fake employees to learn more about them. Wow. Like, deliberately, right?
1:04:15 - Leo Laporte
That's wild.
1:04:16 - Steve Gibson
And to keep up on any possible developing trends. Luckily, so far, North Korea does not seem to be changing their tactics that much from our original postings. The signs and symptoms of a North Korean fake employee we described last year still apply today. They're apparently still having great success using them. If you and your hiring team are educated about these schemes, it's fairly easy to recognize and mitigate them. You just have to know and look for the signs and symptoms. We recently interviewed Mario, and he has that in quotes, Mario, supposedly from Dallas, Texas. Here's part of his resume. I have it in the show notes on page nine. So it shows Dallas, Texas, and then a 754 phone number, and they blocked out the rest.
Mario-something at gmail.com, blacked out. I'm-a Mario! That's right. At the very top line: GCP, Python, C#, Rust, microservices, cloud, and he has in parens AWS and Azure. Then all of the counseling about how to prepare, like, a one-page resume: senior software engineer with eight-plus years of experience in Python, C#, Rust, microservices, REST/GraphQL API development, cloud infrastructure (AWS and Azure) and containerized application deployment. Specialized in cloud-native architectures, high-availability systems and secure coding practices. Passionate about building scalable, reliable and high-performance applications for cybersecurity and enterprise solutions. I'd hire him. This guy looks good. It's like, where do you sign? Yeah. And under experience, from 07-22 through 12-20-24:
A senior software engineer, cloud-native microservices and security, Amazon Web Services (AWS), remote. During this time he, Mario, designed and developed cloud-native microservices in Python, C# and Rust, ensuring high availability and fault tolerance. That's what you want. Built secure and scalable REST and GraphQL APIs, enabling seamless interoperability between cloud services and enterprise applications. Check that box; it's a buzzword festival. Led cloud infrastructure development on AWS, using Lambda, EC2, S3, RDS and DynamoDB, and on Azure, using AKS, Cosmos DB, Key Vault and Event Grid. Woo. Implemented zero-trust security models incorporating OAuth 2.0, JWT authentication and end-to-end encryption. Developed containerized applications... and this goes on and on and on. But, you know, one full page of this is what I can do for you.
1:07:41 - Leo Laporte
Patrick said he wouldn't hire him because of the use of what looks like Comic Sans in the header. That right there, he's out.
1:07:51 - Steve Gibson
He's out. That is a bad choice.
1:07:52 - Leo Laporte
I think it's Tekton or one of those architectural fonts, but yeah, probably not very professional.
1:07:57 - Steve Gibson
So Roger wrote: we have hidden Mario's last name and contact information because it is the name of a real American (oh, interesting) who is likely unaware that his identity has been hijacked and used. So like when people go and check him out and Google him and look him up: oh, look, there he is. He's a real guy.
1:08:19 - Leo Laporte
Just like the Jackal did: you go to the cemetery, you get a child that died young, and then go get the birth certificate, and then you get the passport in their name.
1:08:35 - Steve Gibson
Oh no, that's a TV show. But anyway, same idea. And he said, so, who is likely unaware that his identity has been hijacked and used in this scheme, and we don't want hiring companies to accidentally be given the rogue contact information and think they have a real employee candidate. He said Mario, in quotes, claimed that he was an American citizen who was born and raised in Dallas. Despite this, he had a fairly strong Asian accent (yeehaw), likely North Korean. The Mario who showed up for our Zoom interview had the same voice as the Mario we interviewed over the phone during the first stage of the application process.
1:09:19 - Leo Laporte
Now we should say they're in on this, right? I mean, they know this. They're just playing with this guy because they want to learn about this.
1:09:28 - Steve Gibson
Yes, yes. So they're ready. As he said, occasionally they go ahead and do an interview, even though they are highly suspicious from the get-go, because they want to, like, stay up to date on what North Korea is doing. So he said, and I love this, the Mario who showed up for our Zoom interview had the same voice as the Mario we interviewed over the phone during the first stage of the application process, but sometimes they're different.
1:10:00 - Leo Laporte
I have a cold today. Wow. Sometimes it's the American they're using as a patsy who's doing the interview, probably, right? Yeah, right.
1:10:12 - Steve Gibson
So he said, we had three KnowBe4 people on the Zoom call, including myself, which, as we'll see, comes in here in a minute. He said, over the next 45 minutes we asked all sorts of questions that would be asked of any real developer candidate. Whenever we asked a question, Mario would hesitate, spend 5 to 15 seconds repeating our question and then come back with the perfect answer. Most of the time it was clear that Mario, or someone participating with him, was typing the question subject into a Google search or AI engine and repeating the results. Mario started off by saying how he had a special interest in social engineering, and Roger here writes, no kidding, because of course this whole thing is social engineering, yeah.
And security culture. He mentioned security culture over and over. He said, I soon realized that if you go to our main website, we say security culture all over the place. He was repeating phrases he found on our website, but he was very friendly and smiling, and his English was heavily accented but not super hard to understand most of the time, although born and bred in Dallas. He said, I would say that, based solely on this first part of the interview, if we were unaware of what was going on, we would all have liked what he said and how he responded. He was friendly and smiley and we liked him.
Mario claimed on his resume and in person to have programmed for Amazon, Salesforce and IBM. He supposedly has the exact advanced programming skills we had advertised. I wish all job applicants knew as well how to best match what we advertised in a job ad with what they responded. Of course it was all fake, but still. During his initial statements he said he had a personal interest in cryptography and security. When it came time for me to ask technical questions, I used his mentioned interests as the basis for my questions. I started off by asking if he'd ever done post-quantum cryptography and if he had implemented it in his past projects, including mentioning NIST. You know, NIST, which is probably the top search result you get when researching post-quantum cryptography, along with a list of the various post-quantum cryptography standards. Maybe he listens to Security Now.
You know, maybe he does. Yeah. I asked him if his previous projects were all using post-quantum cryptography. He said yes, which is absolutely untrue, right. Almost no American company is currently implementing post-quantum cryptography. Strike one. I asked what post-quantum encryption standard he liked the most. He said CRYSTALS-Dilithium. It is a digital signature algorithm, not encryption. He frequently mixed up encryption algorithms like AES with hashes like SHA-2 and digital signatures like Diffie-Hellman. Strike two for someone who is really into cryptography and regularly does post-quantum crypto.
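For readers who want the distinction Mario kept fumbling made concrete, here is a minimal Python sketch, assuming the third-party cryptography package (pip install cryptography) and an invented message, of the three different jobs: AES encrypts and decrypts, SHA-256 hashes, and Ed25519 signs and verifies. Diffie-Hellman, by contrast, is a key-agreement scheme, a fourth distinct role.

```python
# A minimal sketch, assuming the third-party "cryptography" package, of three
# distinct primitives that a crypto-savvy candidate should never confuse.
import hashlib, os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

message = b"post-quantum readiness report"   # placeholder data

# 1. Encryption (AES-GCM): reversible, given the key.
key = AESGCM.generate_key(bit_length=256)
nonce = os.urandom(12)
ciphertext = AESGCM(key).encrypt(nonce, message, None)
assert AESGCM(key).decrypt(nonce, ciphertext, None) == message

# 2. Hashing (SHA-256): a one-way fingerprint, no key, not reversible.
digest = hashlib.sha256(message).hexdigest()

# 3. Digital signature (Ed25519): proves who produced the message.
signing_key = Ed25519PrivateKey.generate()
signature = signing_key.sign(message)
signing_key.public_key().verify(signature, message)   # raises if forged

print(digest, len(ciphertext), len(signature))
```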
1:14:10 - Leo Laporte
He should have listened to the show better. He obviously was drifting off at some point.
1:14:14 - Steve Gibson
I asked what size, yeah, he was.
1:14:15 - Leo Laporte
He wasn't paying close attention. He was fascinated by the sponsors.
1:14:21 - Steve Gibson
I asked what size an AES cipher key would need to be to be considered post-quantum strength. This seemed to throw him for a loop and he wasted more time than usual. Finally, he replied 128 bits. That's wrong. AES keys have to be 256 bits or longer to be considered resilient against quantum attacks. Strike three on the technical questions. He wrongly answered every technical question I asked.
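For anyone wondering where the 256-bit answer comes from, the usual back-of-the-envelope reasoning, a rule of thumb rather than a formal proof, is that Grover's quantum search roughly halves a symmetric key's effective strength. A quick sketch of that arithmetic:

```python
# Rough rule of thumb, not a formal security argument: Grover's algorithm can
# search a keyspace of size 2^n in roughly 2^(n/2) operations, so a symmetric
# key's effective strength against a quantum attacker is approximately halved.
def effective_post_quantum_bits(key_bits: int) -> int:
    return key_bits // 2

for key_bits in (128, 192, 256):
    print(f"AES-{key_bits}: ~{effective_post_quantum_bits(key_bits)} bits of post-quantum strength")
# AES-128 drops to ~64 bits (inadequate); AES-256 retains ~128 bits, which is
# why 256-bit keys are the commonly cited post-quantum recommendation.
```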
At this point I decided to throw out a random bad fact that any normal US candidate should be able to spot and correct. I said, Bill Gates, CEO of Microsoft, says that all future programming will be done by AI agents. What do you think? Okay, now, Bill Gates has not been the CEO of Microsoft since 2000, but most people outside the industry would likely think Bill Gates was still the CEO, because that's how the media often references him. He's still a cultural icon associated with Microsoft. This is the type of mistake that a North Korean employee who does not have great access to the Internet would make. Aha, gotcha. And sure enough, Mario repeated the fact that Bill Gates was the CEO of Microsoft instead of the current CEO, Satya Nadella.
Mario did give a great answer on agentic AI and programming using AI agents. If he were a real employee, I would give his answer top points. Well, except for not noticing my CEO switcheroo. Finally, with the technical part of the interview over, we switched to the personal questions. If you're concerned that you may have a North Korean fake employee candidate on your hands, it cannot hurt to ask about cultural references that anyone in your country or region should readily know, but that would be harder for a foreigner with limited knowledge of the culture to understand. One of my co-interviewers asked him what he did in his free time. This seemed to surprise him. My co-worker asked if he likes any sports.
He said he loved badminton. Okay, okay. Which he probably did not realize that, although super popular in Asian cultures, is not among the top sports if you grew up in Dallas, Texas, or nearly anywhere in America. Sure, there are plenty of people who play badminton, especially Americans of Asian ancestry, but it is an unlikely response out of all the possible responses you could offer. I asked how excited he was that the Cowboys won the AFC. I figured he would not know that the Dallas Cowboys got creamed and did not win the AFC. For one, they're in the NFC, not the AFC conference. See, I would have missed that one, so I don't know. He again hesitated, but then seemed to get that I was mentioning the Dallas Cowboys and that they had been eliminated from contention. I was surprised this one did not trip him up as much as I thought it would.
1:17:53 - Leo Laporte
The right answer is, I don't follow sportsball, right? If you really were a geek.
1:17:59 - Steve Gibson
Ask me a question about badminton, and I got you.
1:18:01 - Leo Laporte
Yeah, I don't think badminton is so disqualifying, to be honest.
1:18:07 - Steve Gibson
My coworker said he was going to visit Dallas soon, and did the candidate have any favorite food spots? Mario said his mother's cooking. Oh, good answer. He said, I thought that was a great response. So he did not have to look up any restaurants in Dallas. My coworker persisted, asking the candidate if they had any restaurants to recommend. Mario did not. I offered up the Book Depository, one of the most famous tourist sites in Dallas, where people are dying to eat their Nashville hot chicken.
1:18:41 - Leo Laporte
No.
1:18:42 - Steve Gibson
Mario wholeheartedly agreed with my recommendation. Whoopsies. My coworker asked the candidate if there was anywhere in the world he would want to travel. In our hidden Slack channel, my coworker said that when he asked this question of North Korean candidates, their eyes always lit up and they got excited. Yeah. Sure enough, Mario began to excitedly describe his dreams of visiting Paris and South Africa. That's sad. And Roger said,
I think it was at this point that we all began to have some empathy. Yes, we were dealing with a fake job candidate who was trying to steal our money or worse, but in reality this was a young man, likely forced to do what he was doing, yep, destined never to receive any big salary or visit those dreamed of vacation destinations. It's strange, but I think we started to feel a little ashamed at conducting a fake interview. So we stopped and asked if he had any questions. The normal job candidate would likely ask more about the job, the tools used, the benefits and things like that. Mario had no questions other than how many other people we were interviewing and how he was doing in the job interview.
We ended the job interview. We had not picked up any new tactics or information, other than noticing that a lot of the North Korean fake employee candidates lately had been claiming to have been born and raised in Dallas, Texas, and all with very heavy accents. However, the last fake employee interview switched from a heavy Asian accent on the initial phone interview to a savvy Pakistani person whom we interviewed on Zoom, and they said he must have been hired to handle the interview. I've now spoken with many dozens of other employers who have either almost hired a North Korean fake employee or actually hired them. It is not rare, and sometimes the fake employees, when discovered, switch to a ransomware encryption scheme or steal your company's confidential data and ask for a ransom. So it is not always just about getting a paycheck. Employers, beware.
1:21:20 - Leo Laporte
I think, though, it's really interesting that he felt some sympathy for the guy, because I feel the same way. You know, when you kind of punk people who are trying to scam you on the phone, often they're as much the victim as you would be, right? They're in some big farm, and some robodialer is connecting them to you, and unfortunately they're being rated on their success percentage. Right.
Wow, what a story. That's just fascinating.
1:21:51 - Steve Gibson
So I wanted to be sure that the employers and interviewers among our listeners were fully aware of and appreciated the degree to which these fake North Korean employee farm scams are real. I have a link on page 12 of the show notes to Roger's far more detailed 21-page report on this, which is also heavily linked to other resources. It's knowbe4.com, and then the URL has the title North Korean fake employees are everywhere. So, anyway, I just wanted to put this on our listeners' radar, because it's really not something you want to have happen to you. And, of course, it is the case that the moment they start to feel that they might be found out, that the jig might be up for them, there is a serious danger of them switching their use of your network to ransomware and exfiltration and extortion. So you know, it also needs to be taken seriously.
1:23:06 - Leo Laporte
And next time say Billy Bob's Texas, if they ask you what restaurant. And the other thing that needs to be taken seriously, Leo? Yes, our fine sponsors. That's right. You are getting really good at this, Steve. It's scaring me a little bit. Your job is secure, my friend, don't worry.
No, no, I like doing these ads. This one, actually, I have a little personal connection to. We're going to talk about our sponsor for this segment of Security Now, DeleteMe. And we use DeleteMe, and we use it for a very good reason.
DeleteMe has accounts for individuals, for families and for businesses. We use the business account, and I realize it's really important to do this because, well, let me put it this way: have you ever searched for your name online? I don't recommend it, but if you don't believe me, do, and you will not like how much of your personal information is available. Worse, almost all the sites will say, and for a few dollars more we can give you Leo's prison record, we can tell you anything you want to know about him, because these data brokers online have been collecting this information. There's hundreds of them. They make money selling it to the highest bidder, whether it's a marketing company, which is the least of your worries, or a government. They sell it on. They don't care. And the reason it's personal for us is because, and I mentioned this before, but briefly, you know, when Lisa got spear-phished, all our employees got a text purporting to be from her. It wasn't. We immediately signed her up for DeleteMe. And you remember this, Steve, when the National Public Data broker breach happened, there was a website at the time you could search to see if you were in it. You were in it, I was in it, our Social Security numbers were in it. And then I said, well, let's search for Lisa. She wasn't. And that's because we use DeleteMe. And I think every business should absolutely use DeleteMe for their management, because spear phishers are looking for management so they can target your employees with the right name, the right phone number, the right information.
Maintaining privacy is not just an individual thing; it's a business thing, it's a family thing. DeleteMe has plans for everybody. In fact, yes, they have family plans so you can ensure everyone in the family feels safe online. DeleteMe reduces risk from, yes, cybersecurity threats and spear phishing, but also identity theft. That's why you want it for your family. Everybody needs to get this stuff off the Internet. And harassment. I mean, that is a plague right now in the Internet era. And I gotta tell you, it works. We know it works. We did that. I didn't even know what would happen when I did it live on the show a few months ago, and at first I thought, why isn't Lisa in there? Then I remembered: we'd hired DeleteMe.
Their experts will find and remove your information. They removed her information from hundreds of data brokers. If you're doing it as a family, you can actually assign a unique data sheet to each member, tailored to them. You have easy-to-use controls so the account owner can manage privacy settings for the whole family. DeleteMe then, and this is really important, will make that first initial deletion, but then they continue to scan and remove your information regularly. That's important because there are new data brokers all the time, and that's their job. That's what they focus on 100% of the time. They're always looking: who's the new guy? And it's such a profitable business for data brokers, there's more springing up every single day.
It's also true that these data brokers, yes, they're required to have a delete page, but there's nothing to stop them from collecting more information about you after the fact. Oh, the middle name's different, or oh, it's a different address, and they just start all over again. That's why DeleteMe will continue to scan and remove everything they can find: addresses, photos, emails, relatives, phone numbers, social media, property value and a whole lot more, including Social Security numbers. It was a shock to me to learn that there is no law, you'd think this would be a federal law, against selling somebody's Social Security number. It's completely legal. That's what these guys do. Look, if the law is not going to stop them, we've got to do it. Protect yourself, reclaim your privacy. Visit joindeleteme.com/twit.
Use the offer code TWIT. It's up to us, it's in our hands, and it's really important for your business and for your family, as well as for you. joindeleteme.com/twit. By the way, when you get there, use the offer code TWIT. That does two things. One, it lets them know you saw it here. That's really important to us. Two, it gets you 20% off. I think that's going to be important to you. joindeleteme.com/twit, the offer code is TWIT for 20% off, and we thank them so much for sponsoring the good works Steve is doing here. Back to you, Steve.
1:28:16 - Steve Gibson
Okay. So before I share the latest news on the movement of 1.5 billion US dollars' worth of stolen Ethereum tokens, I should note that the 10% bounty on that $1.5 billion is not $150,000, as I apparently mistakenly said last week. It's a little more than that. Yeah, several of our listeners politely wrote to say, Steve, that would be $150 million in bounty, not $150,000.
So, indeed, I am happy to share that correction, and thank you, listeners who are paying attention. Okay, so what do we know today? Crypto News reports, under the headline Nearly 20% of Bybit's $1.46 billion in stolen funds gone dark, says Bybit's CEO. Its CEO, Ben Zhou, now says nearly 20% of the funds are untraceable, less than two weeks after the exchange lost over $1.4 billion in a highly sophisticated attack by North Korea-backed hackers. In a March 4 post on X, Zhou shared an update on the ongoing investigation into the cyber attack, revealing that around 77% of the stolen funds remain traceable, but that nearly 20% has gone dark.
The hackers used mixing services, which came under scrutiny for unwillingness to prevent DPRK hackers from laundering the funds, to convert stolen Ethereum into Bitcoin. Approximately 83% of the funds, or around $1 billion, were swapped into Bitcoin across nearly 7,000, actually 6,954, individual wallets. So, as I said, this was that dispersion I talked about, where they scattered it to the four corners in order to make it much more difficult to track, and to chop this huge amount into smaller, less suspicious-sized chunks. As Crypto News reported earlier, they wrote, while other protocols took steps to prevent the movement of stolen funds,
THORChain validators failed to take meaningful action. Pluto, a core contributor, resigned in protest after nodes rejected a governance proposal to halt ETH transactions. Of the stolen funds, 72%, around $900 million, passed through THORChain, which remains traceable, says Zhou. However, around 16% of the funds, totaling just shy of 80,000 Ethereum valued at around $160 million, have now gone dark through eXch, a centralized crypto mixing service. Zhou mentioned that the exchange is still waiting for an update on these transactions. Another portion of the funds, around $65 million, also remains untraceable; as Zhou says, more information is needed from OKX's Web3 wallet. In addition, the Bybit CEO revealed that 11 parties, including Mantle, ParaSwap and blockchain sleuth ZachXBT, have helped freeze some of the funds, resulting in over $2.1 million in bounty payouts so far.
1:32:13 - Leo Laporte
So that's $21 million in saved money, right?
1:32:18 - Steve Gibson
Yeah.
1:32:18 - Leo Laporte
That's pretty good. That's a good start. So.
1:32:21 - Steve Gibson
Bybit is recovering some of their stolen money in return for those 10% bounty payouts, which allows them to keep those monies legally, which is certainly the way to do it.
1:32:34 - Leo Laporte
I would check with Mario in Dallas if I were them. I just, I don't know. I think that's one possible place to look.
1:32:44 - Steve Gibson
Well, maybe one of his cousins in the Lazarus Group. Wow. And just listen, as I'm sharing what Crypto News wrote, it's like, this is clearly just a world unto itself.
1:33:01 - Leo Laporte
Yes, that's right.
1:33:02 - Steve Gibson
When you talk about all this stuff moving back and forth and sloshing around and it's just amazing, it's the Wild West, absolutely.
1:33:08 - Leo Laporte
Yeah, it really is. And while there were for a while some attempts to regulate it with the SEC, I think that horse has left the barn. There doesn't seem to be much interest at the moment. Not anymore.
1:33:21 - Steve Gibson
No, not that I'm seeing. So, yep, okay. Also, meanwhile, what of the SafeWallet service, whose malicious infiltration was the proximate cause of this very expensive breach in the first place? The reporting was that Safe has responded to the Bybit hack with major security improvements, which is what you call, you know, closing the door after the horses have all left the barn. They wrote, Ethereum-based crypto wallet protocol Safe implemented, quote, immediate security improvements, unquote, to its multi-sig solution.
Following a cyber attack on Dubai-based exchange Bybit on February 21st, North Korea's Lazarus stole, as we know, over $1.4 billion in Ether from Bybit's Ethereum wallet. By exploiting vulnerabilities in SafeWallet's UI, the infamous hacking group injected hostile JavaScript code specifically targeting Bybit, siphoning more than 400,000 ETH. To prevent further attacks, again, whoops, Safe placed its wallet in lockdown mode before announcing a phased rollout and a reconfigured infrastructure. Right. Martin Koppelman, co-founder of Safe, said in a March 3rd X.com post that their team had developed and shipped 10 changes to the UI. The protocol's GitHub repository showed updates to, quote, show full raw transaction data now on the UI, unquote, and, quote, remove specific direct hardware wallet support that raised security concerns, unquote, among other upgrades.
Bybit's CEO Ben Zhou discussed the incident on the When Shift Happens podcast with host Kevin Faulnier, explaining that the attack occurred shortly after he signed a transaction to transfer 13,000 ETH. Zhou mentioned using a Ledger hardware wallet, but noted that he couldn't fully verify the transaction details. The issue is known as blind signing, a common vulnerability in multi-sig crypto transactions. Safe's latest updates aim to provide signers with more detailed transaction data, according to Koppelman. In response to a post from Kyber Network CEO Victor Tran regarding industry-wide security efforts, Koppelman emphasized the importance of collaboration, but noted that immediate damage control remains the priority.
Writing, quote, we're still in the putting-out-fires mode, but once we have that behind us, we need to come together and improve overall front-end and transaction verification security, unquote, Koppelman stated. We'll see what good comes from all this; it certainly was an expensive lesson. There is so much liquidity sloshing around in this crypto world. It still boggles my mind, you know. I mean, we're just like, oh yeah, we lost 1.2 billion, maybe one and a half billion dollars, but we got that covered.
1:37:21 - Leo Laporte
It's almost as if they built a technology designed to easily and anonymously transfer funds from one party to another.
1:37:28 - Steve Gibson
It's almost as if it was designed to do that. Wow, and that there's a lot of interest in having that done. You know like oh hey, I got some application for anonymous big dollar transactions.
1:37:42 - Leo Laporte
Used to be, you had a big suitcase to hold all the cash.
1:37:47 - Steve Gibson
Now it's this little tiny wallet and it can hold billions. And you had to have Mario, who is a big guy, able to, you know, carry it, because those luggages are heavy.
1:38:02 - Leo Laporte
I was just watching an old heist show called Heat, a classic movie, yeah, Al Pacino, Robert De Niro, back in the days when you had to rob, you know, armored trucks to get cash, or rob banks, and they brought these big bags in to carry the cash out. And it's like, you know what? No one with any brains robs banks or armored trucks anymore. That's not the way to get it. You just need a little thumb drive and a computer, and hire some geeks.
1:38:36 - Steve Gibson
And a few geeks named Mario. That's right. Okay. So meanwhile, back on the encryption front. Last week the BBC reported under the headline, Apple takes legal action in UK data privacy row. This, of course, would be in response to a legal demand whose very existence Apple is prohibited from divulging, but it seems that particular cat is well out of the bag. So the BBC wrote, Apple is taking legal action to try to overturn a demand made by the UK government to view its customers' private data if required. The BBC understands that the US technology giant has appealed to the Investigatory Powers Tribunal, an independent court with the power to investigate claims against the security service. It is the latest development in an unprecedented row between one of the world's biggest technology companies and the UK government over its demand to be able to view data belonging to Apple users around the world, shared with UK law enforcement in the event of a potential national security threat. Data protected by Apple's standard level of encryption is still accessible by the company if a warrant is issued, but the firm cannot view or share data encrypted using its toughest privacy tool, Advanced Data Protection. Last week, Apple chose to remove ADP from the UK market rather than comply with the notice, which would involve creating a backdoor in the tool to create access. Apple said at the time it would never compromise its security features and said it was disappointed at having to take the action in the UK.
The UK's order also angered the US administration, with President Donald Trump describing it to the Spectator as, quote, something that you hear about with China, unquote.
Tulsi Gabbard, the US director of national intelligence, said she had not been informed in advance about the UK's demand. She wrote in a letter that it was an egregious violation of US citizens' rights to privacy, and added that she intended to determine whether it breached the terms of a legal data agreement between the US and the UK. The Financial Times, which first revealed Apple's legal action, reports that the tribunal case could be heard in the next few weeks, but may not be made public. The Home Office refused to confirm or deny that the notice issued in January even exists. Legally, this order cannot be made public, but a spokesperson said, quote, more broadly, the UK has a long-standing position of protecting our citizens from the very worst crimes, such as child sex abuse and terrorism, at the same time as protecting people's privacy. The UK has robust safeguards and independent oversight to protect privacy, and privacy is only impacted on an exceptional basis, in relation to the most serious crimes, and only when it is necessary and proportionate to do so. Unquote.
1:42:08 - Leo Laporte
I believe that, for now, the intent is good. Yes, I don't deny that.
1:42:16 - Steve Gibson
Now, myself being a glass-half-full sort, yeah, I'm still holding out hope that Apple's initial move will have shaken up the UK's legislators sufficiently for them to allow Apple's appeal to succeed, and that Apple's very public shot-across-the-bow threat to pull their strongest encryption entirely from the UK will be sufficient to put this troublesome issue back to bed for a while. We'll see. The unresolved question is, given that we now have the technology to create and enforce absolute privacy of communications and data storage, in a modern democracy, which is designed to be by the people and for the people, with elected representation and government, do the benefits of this absolute privacy obtained by the overwhelmingly law-abiding majority outweigh the costs and risks to society created by its abuse by a small criminal minority? Don't know. The trouble is that individual governments may decide these issues differently.
Yet the Internet is global and has always promised to be unifying. When we stand back to look at these issues surrounding privacy through encryption and the challenges presented by the biological ages of Internet users and the perceived need to filter their access to this global network, these fundamental issues and concerns created by cyberspace having very different rules from physical space have largely been ignored until now. It feels as if this has all happened so quickly that society has been busy catching its breath, waiting for the dust to settle, waiting for services to be developed and to mature, waiting for those who govern us to catch up. It appears that our societies are finally gearing up to deal with these issues. We've had a really interesting first 50 years of this, Leo. What are the next 50 going to look like?
1:44:45 - Leo Laporte
Yeah. Well, that's a question we're all asking in a variety of ways. You know, listening to this makes me think that Apple is probably the party that leaked.
You know, they're not supposed to reveal that they've received this request, but now that I think about it, they probably leaked this off the record to a couple of news agencies who took it and ran, and that gave Apple the cover then to continue to do what they did, which is pull Advanced Data Protection and appeal. The appeal is kind of like our FISA court; the appeal is to a secret court.
1:45:24 - Steve Gibson
Right. A tribunal in this case, and you may never know.
1:45:28 - Leo Laporte
You'll never hear the arguments pro or con, and you may not even know the result. The only way we'll know is the canary that Apple has put out now, which is pulling ADP from England. Yep, it's very interesting.
1:45:41 - Steve Gibson
Very, very interesting.
1:45:42 - Leo Laporte
You know what? We're really on the cusp.
1:45:43 - Steve Gibson
We could go either way in all of this right now. Yes, it feels to me like, you know, the pressure has been mounting, yeah, and it's like, as they say, it's gonna blow. It's gonna blow. Let's just hope it blows in the right direction. Well, and you know, whichever way it goes, I mean, it may be that we had a decade or so of privacy. Remember those ridiculous days when you couldn't export a key greater than 128 bits?
1:46:20 - Leo Laporte
56. It was 56 bits. It was 40 bits. 40, that's right, it was really low. The limit was so they could crack it, basically.
1:46:28 - Steve Gibson
Basically, yes. Because it was like, oh, and cryptography was classified as a munition. Legally it was a munition, because you were unable to export munitions to foreign hostile countries. So I mean, maybe it's going to be that crypto is outlawed, yeah, or maybe some compromise will be made. Maybe it will be necessary for anyone who wants to offer it to offer it selectively and for there to be a master key, or maybe governments will just say, okay, it's more important to have it than not.
1:47:10 - Leo Laporte
You know, more benefit is derived from it than harm is created by it. Well, ultimately, I think, if you care, you probably should act now to secure strong encryption. The good news is it's fairly easy to implement locally.
You can do it, that's exactly it. And that is ultimately the argument: if it is outlawed, only the bad guys will use it, yeah, and the people who care about their privacy. And I think this is, you know, why everybody should just learn a little bit of crypto. Well, and of course we've been advocating TNO, Trust No One, encryption, right, or PIE, Pre-Internet Encryption.
1:47:53 - Steve Gibson
The idea is, if you encrypt it yourself, then it doesn't matter what happens after it leaves your control. Yeah, that's the key.
1:48:00 - Leo Laporte
Don't put it on iCloud; encrypt it and then put it on iCloud, and yes, you're fine, right? They don't have the key to it now. Then, of course, people come to your house, but that's a trouble for another day. All right, sorry, I didn't... okay.
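For listeners who want to see what Pre-Internet Encryption looks like in practice, here is a minimal sketch, assuming the third-party cryptography package (pip install cryptography); the data and key handling are placeholders, not a recommendation of any particular product. The point is simply that only ciphertext ever reaches the cloud provider.

```python
# A minimal sketch of Pre-Internet Encryption / Trust No One, assuming the
# third-party "cryptography" package. Only ciphertext is uploaded; the key
# never leaves your control.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def encrypt_for_upload(plaintext: bytes, key: bytes) -> bytes:
    nonce = os.urandom(12)                      # unique nonce per encryption
    return nonce + AESGCM(key).encrypt(nonce, plaintext, None)

def decrypt_after_download(blob: bytes, key: bytes) -> bytes:
    nonce, ciphertext = blob[:12], blob[12:]
    return AESGCM(key).decrypt(nonce, ciphertext, None)

key = AESGCM.generate_key(bit_length=256)       # store locally, e.g. in a password manager
blob = encrypt_for_upload(b"my tax records", key)
assert decrypt_after_download(blob, key) == b"my tax records"
# "blob" is what you would hand to iCloud, Dropbox, etc.; without the key it is just noise.
```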
I think we should take a break. Oh, okay. We can do more small things to talk about, and then our big topic. So, oh yeah, well, this would be a good time then. All right, yep. Yeah, I'm glad you're here. We're watching Security Now. We're listening to the master. I feel like I should be sitting on the floor with my legs crossed, just listening to the master as we learn about all of this stuff. And it's great, isn't it? We're learning so much. Thank you, Steve. I don't say thank you enough, but thank you for what you do. It's really, really valuable for all of us. We appreciate it. And for Mario in Dallas, who learned everything he knows about AES from this show.
1:48:54 - Steve Gibson
You got it, Mario. Listen to those post-quantum crypto episodes again. You're missing out on a few of those questions. You can really get that down right.
1:49:01 - Leo Laporte
Yeah, practice. Our sponsor for this segment of Security Now is Zscaler, the leader in cloud security. It's a way to protect yourself in a way that, unfortunately, current security tools have not. Enterprises have spent, over the last years, billions of dollars on perimeter defenses, right, firewalls and VPNs. Has that worked? Has it? Everything's fine now, right? No, no, it's not.
Breaches continue to rise. There was an 18% year-over-year increase in ransomware attacks last year. Get ready, this is going to be worse, much worse, in 2025. Last year there was a record $75 million payout. I'm sure that's just the tip of the iceberg, and it's going to get worse.
So what do you do to protect yourself? Traditional security tools are kind of almost the opposite of protecting you. They give you public-facing IPs, and the bad guys, now that's something they can hang their hat on. By the way, they're black hat, so I'll put on a black hat for that. This is now the bad guys. They've got your IP addresses, they can use AI, they can attack you better than ever before with these tools, faster than you can protect yourself. And then what happens? Mr. Black Hat gets into your network. He can wander at will, because these tools assume, well, we have such good perimeter protection, if anybody's in the network they must work for us, right? So what do they do? They go around, they read your emails, they find your customer information, they exfiltrate it using encrypted traffic, which your firewalls struggle to understand. They've got complete control of your system.
Hackers are exploiting our traditional security infrastructure, using AI to outpace your defenses. It's time to rethink your security. Don't let the black hats win. They're innovating and exploiting your defenses. You need, no, not Mario in Dallas, you need Zscaler Zero Trust plus AI. This is such a good solution. It hides your attack surface, so your apps and your IP addresses are invisible. Right there, that's a huge gain. It also, if somebody does get into the network, eliminates lateral movement, because users can only connect to the apps they're approved to use, the specific apps, not the entire network. And Zscaler continuously verifies every request based on identity and context. It simplifies security management with AI-powered automation, so your life is easier. And they can detect threats using AI. They analyze over half a trillion daily transactions, almost all of which are fine, to find those few that are really a threat to you and protect you from them.
It's simple: hackers can't attack what they can't see. Protect your organization with Zscaler Zero Trust plus AI. So, a hacker getting into your network, wandering at random, Mario in Dallas getting his way with you, or zscaler.com/security. Check it out. zscaler.com/security, seriously. Zero trust is the answer. Zscaler is the hero you're looking for. zscaler.com/security. We thank them so much for supporting Security Now and putting up with my hijinks, and we thank you for going to that address, because then they know you saw it here. You could say, that guy with the hats: zscaler.com/security.
1:53:01 - Steve Gibson
Thank you, Steve. I'm getting a little silly here. I have a feeling that Zscaler knows exactly what they're getting. I know, I hope so, when they put their advertising dollars here. I'm gonna keep the black hat, though.
1:53:12 - Leo Laporte
This is good. I'm ready. All right, on we go with the show, Steve.
1:53:17 - Steve Gibson
So I wanted to let our listeners know that if they encounter reports claiming that there's a flaw that's been found in passkeys, the truth is somewhat more nuanced. Oh, I hope so, because this is scary. It wasn't a flaw in passkeys, but there was a problem found. It was a very specific and difficult-to-perpetrate account takeover flaw that was only possible due to URL link navigation mistakes which had been made in mobile Chrome and Edge. They fixed it back in October of last year, mobile Safari fixed it in January of this year, and Firefox patched the problem last month, in February.
At one point in the passkeys FIDO flow, mobile browsers are given a link with the scheme FIDO colon slash slash. Unfortunately, they were all allowed to navigate to that URL, and that's where this really subtle, very difficult to implement, but still possible sort of end-around was created. But once the three browsers all started blocking this FIDO colon slash slash scheme from being navigable, that small loophole which a researcher had discovered, a very clever guy, was closed, and passkeys return to being what we want: the extremely robust network authentication solution that the world needs it to be. Okay. So, I don't know what's going on in the UK. First, of course, as we know,
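The general shape of the fix, to illustrate the idea only, not the browsers' actual patches, is to stop treating a custom scheme like fido:// as a navigable URL and follow only an explicit allow-list. A minimal sketch:

```python
# Illustration only, not Chrome's, Edge's, Safari's or Firefox's real code: the
# loophole closes once a custom scheme such as fido:// is no longer navigable.
from urllib.parse import urlparse

NAVIGABLE_SCHEMES = {"http", "https"}           # assumption: a deliberately tiny allow-list

def may_navigate(url: str) -> bool:
    return urlparse(url).scheme.lower() in NAVIGABLE_SCHEMES

print(may_navigate("https://accounts.example.com/login"))  # True: normal web navigation
print(may_navigate("fido://register?credential=abc"))      # False: blocked from navigation
```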
1:55:18 - Leo Laporte
I think they don't know either.
1:55:19 - Steve Gibson
They order Apple to accomplish the impossible by decrypting data for which the UK knows Apple does not hold the keys. Then I read that a court in the UK had demanded that a US-based security researcher remove their reporting of an embarrassing cyber attack and data breach which occurred at HCRG, which was formerly known as Virgin Care, one of the largest independent health care providers in the UK. Seeing that made me curious, so I first found a nice summary of the situation which TechCrunch reported. They wrote that a US-based independent cybersecurity journalist has declined to comply with a UK court-ordered injunction that was sought following their reporting, that is, this cybersecurity journalist's reporting on a recent cyber attack at UK private health care giant HCRG. The law firm is Pinsent Masons, which is the UK firm.
So the UK law firm Pinsent Masons, which served the February 28th court order on behalf of HCRG, demanded that DataBreaches.net take down two articles that referenced the ransomware attack on HCRG. The law firm's notice to DataBreaches.net, which TechCrunch has seen, stated that the accompanying injunction was, quote, obtained by HCRG at the High Court of Justice in London, unquote, to, quote, prevent the publication or disclosure of confidential data stolen during a recent ransomware attack, unquote. What? You know, they wanted to prevent the reporting of the attack, which is not at all the same as preventing the disclosure of confidential data. They apparently felt, well, the fact that we were attacked should be confidential.
1:57:35 - Leo Laporte
No one should know about that. That's a secret? No, certainly not.
1:57:38 - Steve Gibson
That would embarrass us. Yes, what would our shareholders think? And all of those people? You won't even believe how much data was stolen. Anyway, the firm's letter states that if DataBreaches.net disobeys the injunction, the site may be found in contempt of court, which may result in imprisonment, a criminal fine or having their assets seized, unquote.
DataBreaches.net, TechCrunch writes, is run by a journalist who operates under the pseudonym Dissent Doe, who declined to remove the posts and also published the details of the order, on grounds that DataBreaches.net is not subject to the jurisdiction of the UK injunction, no kidding, and that the reporting is lawful under the First Amendment in the United States, where DataBreaches.net is based. Dissent also noted that the text of the court order does not specifically name DataBreaches.net, nor reference the specific articles in question. It just says you're bad. So TechCrunch says, legal threats and demands are not uncommon in cybersecurity journalism, since the reporting often involves uncovering information that companies do not want to be made public, but injunctions and legal demands are seldom published, over risks or fears of legal repercussions. The details of the injunction offer a rare insight into how UK law can be used to issue legal demands to remove published stories that are critical of or embarrassing to companies. The law firm's letter also confirms that HCRG was hit by a ransomware cyber attack, so now they've even admitted that as a consequence of this. Okay, so that made me interested enough to go to the source, where I discovered some additional head-shaking detail, which picks up where TechCrunch left off. Remember that the site is being represented by Covington and Burling, and in the UK we have the firm Pinsent Masons. So on his site, the subject of this injunction, Dissent Doe, wrote: when Jason Chris of Covington and Burling sent an email to Pinsent Masons informing them that DataBreaches.net is a US entity with no connection to the UK, and that neither the UK nor the High Court of Justice has any jurisdiction over this site, that should have been the end of the matter, right? But it wasn't, and that's partly why DataBreaches is reporting on this.
Yesterday morning, DataBreaches.net received an email from its domain registrar saying that it had been served with the injunction by Pinsent Masons, and that if DataBreaches did not remove the two posts in question within 24 hours, this website would be suspended. The two posts were not even particularly exciting. They mainly summarized some of SuspectFile's great reporting and linked to those posts. For those who would like to see what HCRG or the court demanded I remove, the posts can be seen at... and in his posting he provided two links, which I've duplicated here. One is, UK: more details emerge about ransomware attack on HCRG by Medusa, and the second link is, Medusa unveils, get this, another 50, that's 5-0, 50 terabytes of stolen data from HCRG care group, giving greater insight into the scope of the breach.
He said, DataBreaches informed the registrar, that is, their domain registrar, that the injunction was not valid and that DataBreaches.net is not under the jurisdiction of the High Court of Justice or of the United Kingdom. Jason Chris of Covington and Burling also notified the registrar that not only was DataBreaches.net a US entity but, as the site's domain registrar for many years, they could see for themselves that the site was registered to a US person at a US postal address with a US telephone number. Later yesterday the registrar responded: since your lawyer has already sent notice to the complainant, Pinsent Masons, we confirm that we will not be taking any action on your domain, DataBreaches.net. Good. Additionally, we will be informing Pinsent Masons to contact your lawyer directly should they have any further issues. This ticket is now closed.
Pinsent Masons did not respond to Monday's email notification by Jason Chris that this site was not under UK or High Court jurisdiction, and at no time yesterday did Pinsent Masons contact the domain registrar to say that it was withdrawing the demand for the removal of the posts. That too was surprising. Is it over, or will there be more? DataBreaches hopes it is over.
2:03:42 - Leo Laporte
There's a little TWiT connection with this. It was Iain Thomson at the Register, a regular on our shows, who revealed this in the Register, and he even has the screenshot of the site and the ransomware notification on it. So it was pretty hard to deny it at this point, and it's out there, you know. Thank you. Wow. Yeah, good work as always.
2:04:08 - Steve Gibson
So you know, a major firm like Pinsent Masons must be fully aware of the First Amendment free speech protections. There's no...
2:04:18 - Leo Laporte
First Amendment in the UK but you know we're here, right?
2:04:23 - Steve Gibson
Yeah, and they certainly knew that DataBreaches.net was a US-based website registered in the US, so it had to be pure baseless intimidation. Yeah. You know, of course, somewhere some stuffed shirt at the UK health care provider was annoyed by the fact that this embarrassingly massive 50-terabyte data breach of their systems was being reported on, and decided to aim their law firm at the reporter, you know, just sort of, maybe we can make it go away. Wow. Okay, get a load of this one. Everyone's going to hear a very familiar name pop out of this little piece of news, which reads: the FBI has recovered $23 million worth of crypto stolen from Chris Larsen, the co-founder and executive chairman of Ripple, whose cryptocurrency trades under, or is named, XRP. The recovered funds are just a small part of the tokens stolen from Larsen in January of last year. The funds were estimated at over $110 million last year, but are now worth over $700 million. And here it comes. Hackers stole the Larsen funds by first stealing password stores from password manager LastPass in 2022. Oh. Since the attack, the hackers have been slowly cracking passwords and emptying crypto wallets. As of May 2024, over $250 million worth of crypto assets had been stolen using the data obtained from LastPass. Okay, now remember, at the time we talked about this. Bad guys largely don't care, could not care less, about random people's laundry. They want one thing, which is money. So they're known to be targeting any crypto passwords suspected of being stored in LastPass vaults. With LastPass's failure to increase the repetition counts of their PBKDF2 system, accounts which had been created in the early days of LastPass were left with very low, or even, in some cases, zero, iteration counts of their hashing algorithm. This made cracking the passwords protecting those early adopters extra easy. Our advice at the time for anyone who had stored crypto access passwords in LastPass was to immediately create a new wallet and transfer the assets from the now unsafe wallet into the newly created wallet, and we can see why that advice, when taken, could help to protect people from exactly this problem. And this was the great problem: that massive blob of data was everybody's vaults, which were encrypted, but in some cases not strongly enough encrypted, and so, over time, you can do offline decryption in order to obtain people's data in the clear. Wow.
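To see why those low iteration counts mattered so much, here is a minimal sketch using Python's standard-library PBKDF2; the iteration counts and the test password are illustrative, not LastPass's exact settings. The point is that an offline attacker must repeat the same derivation for every password guess, so the per-guess cost, and therefore the total cracking cost, scales directly with the iteration count.

```python
# A minimal sketch of why PBKDF2 iteration counts matter for a stolen vault.
# Each password guess requires re-running the derivation, so a vault saved with
# a tiny iteration count is orders of magnitude cheaper to brute-force offline.
import hashlib, os, time

salt = os.urandom(16)
for iterations in (1, 5_000, 600_000):          # a very old vault vs. a modern setting
    start = time.perf_counter()
    hashlib.pbkdf2_hmac("sha256", b"correct horse battery staple", salt, iterations)
    elapsed = time.perf_counter() - start
    print(f"{iterations:>7} iterations: {elapsed:.4f} s per password guess")
```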
Also, in more post-mortem news, we're still learning more about the early genesis of that attack which ultimately affected Bybit. The North Korean hackers compromised, we know, the multi-signature wallet provider SafeWallet. It turns out this was conducted through a social engineering attack which targeted one of its developers. And remember, social engineering is now the way these things are happening, more and more, pretty much. You know, a lot of the other infrastructure has been shored up and tightened up. Social engineering, the human factor, has now become the weakest link.
According to a new post-mortem report, the point of entry appears to have been a malicious Docker file that was executed on one of the employees' computers. The Docker file deployed malware that then stole his local credentials. The attackers then used the developer's AWS account to add malicious code to the SafeWallet infrastructure, which targeted a specific multi-sig wallet which was used by the Bybit cryptocurrency exchange. And so that's the chain of events: a social engineering attack, the guy downloaded and installed a malware-containing Docker file and ran it on his machine, it deployed malware on his computer, that malware grabbed his AWS credentials and sent them back to the bad guys, and they used that to get into SafeWallet's infrastructure, make the changes, and then infect the Bybit transaction.
The change that Safe has made is to now prominently display the transaction details, which they hadn't been fully bothering to display until now. So they're just trying to make the transaction event more transparent in the hope that that will help people catch any further problem. It's a little bit like, you know how, right now, everyone kind of glazes over when they look at a Bitcoin wallet ID? It's just like gibberish, and so you just copy and paste it. Well, if you can make it somehow more obvious that, wait, what you pasted is not what you copied, then that would help you catch clipboard attacks.
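One simple way to make "what you pasted is not what you copied" jump out, purely illustrative and not what Safe actually shipped, is to render a long address as its two ends plus a short hash-derived check phrase, so a swapped address produces a visibly different fingerprint. The addresses and word list below are invented for the example:

```python
# Illustration only, not Safe's or any wallet's real UI: condense a long address
# into something a human can eyeball, so a clipboard-swapped address stands out.
import hashlib

CHECK_WORDS = ["apple", "brick", "cobra", "delta", "ember", "flint", "grove", "harbor"]

def human_fingerprint(address: str) -> str:
    digest = hashlib.sha256(address.encode()).digest()
    words = "-".join(CHECK_WORDS[b % len(CHECK_WORDS)] for b in digest[:3])
    return f"{address[:6]}...{address[-6:]}  [{words}]"

# A hypothetical intended address versus a hypothetical attacker-substituted one:
print(human_fingerprint("0x1a2b3c4d5e6f7a8b9c0d1e2f3a4b5c6d7e8f9a0b"))
print(human_fingerprint("0x1a2b3c9d8e7f6a5b4c3d2e1f0a9b8c7d6e5f4a3b"))
```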
So, I wanted to share a terrific look at how a Windows-centric network, that is, an enterprise using secured Windows systems, nevertheless was hit by ransomware, even though they had strong and effective malware protections in place. A security research group has been tracking the Akira ransomware group that we've referred to a few times. What they found as they dug into a forensic reverse engineering of a distressingly successful attack was interesting, and it was surprising even to them. Here's what they shared. They wrote, until the compromise, this incident had followed Akira's typical modus operandi. After compromising the victim's network via an externally facing remote access solution...
2:12:10 - Leo Laporte
Oh, I was just talking about that, yeah.
2:12:23 - Steve Gibson
The group deployed AnyDesk, a remote management and monitoring tool, to retain access to the network before exfiltrating data. During the latter stages of the attack, the attacker moved to a server on the victim's network via remote desktop protocol, you know, RDP. Once again, Akira commonly uses RDP (Akira being the bad guys, right, the ransomware group), as it enables them to interact with endpoints and blend in with system administrators who use RDP legitimately. The threat actor initially attempted to deploy the ransomware on one of the Windows servers as a password-protected zip file, win.zip, that contained the ransomware binary, win.exe. However, the victim's endpoint detection and response, EDR, tool immediately identified and quarantined the compressed file before it was unzipped and deployed.
2:13:30 - Leo Laporte
See, you're fine, you're safe, everything's good. It works, right? Yeah.
2:13:34 - Steve Gibson
At this point, the threat actor likely realized they had alerted the EDR tool and would not be able to evade its defenses. They therefore pivoted their approach. Prior to the ransomware deployment attempt on this Windows server, the attacker had conducted an internal network scan to identify ports, services and devices. First thing you do. This network scan identified several Internet of Things, IoT, devices on the victim's network, including webcams and a fingerprint scanner. These devices presented an opportunity to the threat actor to evade the EDR tool and deploy the ransomware successfully. The threat actor likely identified a webcam as a suitable target device for deploying ransomware for three reasons. First, the webcam had several known critical vulnerabilities, including remote shell capabilities and unauthorized remote viewing of the camera. Second, it was running a lightweight Linux operating system that supported command execution as if it were a standard Linux device. The camera was...
Well, that's, oh my God. I mean, Linux is free. What could possibly go wrong? That's right, got Linux in your camera, making the device a perfect candidate for Akira's Linux ransomware variant. Third, the device did not have any EDR tools installed on it. Why would it? That left it unprotected. In fact, due to the limited storage capacity, it's doubtful that any EDR could be installed on it, but the ransomware could. After identifying the webcam as a suitable target, the threat actor began deploying their Linux-based ransomware with little delay. As the device was not being monitored, the victim organization's security team were unaware of the increase in malicious server message block, SMB, traffic from the webcam to the impacted server. Oh my God. And from the webcam, the ransomware successfully encrypted files on the servers on the victim's network. Oh my God. Akira was thus able to encrypt files across the victim's network. Boy.
2:16:14 - Leo Laporte
I mean, this answers the question. When you say, you know, protect your IoT devices: oh, so they could get into my camera, what's the big deal? Or my light bulbs? Well, they can actually launch ransomware from these devices.
2:16:31 - Steve Gibson
Yes. Oh my God, yes. Wow. I thought this was a super interesting case here. As you said, the vulnerable IoT device was not the initial entry point. That honor belonged to some unspecified remote access solution running on a Windows machine, as it often does. But even though the IoT device wasn't in their... no, it wasn't their way in.
2:16:58 - Leo Laporte
That's not how they got in. It wasn't their way in.
2:17:01 - Steve Gibson
Exactly, it was not their way in. The attackers needed an unprotected host for their malware. They were unable to run their ransomware on any of the Windows systems or servers on the network, because all of those systems were being protected by effective real-time EDR, endpoint detection and response, security. But their network scan had discovered some Linux-based webcams, and that's all they needed. And the security of those cams was quite lacking, which made their job even easier. So they loaded their malware into the cam's RAM, and it reached out over the network using Windows file and printer sharing, the SMB, server message block, protocol, to read and write back the encrypted files.
Under the prevention and remediation section of their report, the security firm wrote, preventing and remediating novel attacks like this one can be challenging. At a minimum, organizations should monitor network traffic from their IoT devices and detect anomalies. They should also consider adopting the following security practices. And what do you think their number one, first recommendation was? They wrote, network restriction or segmentation, zero trust: place IoT devices on a segmented network that cannot be accessed from servers or user workstations, or restrict the device's communication to specific ports and IP addresses. And all it needs is three routers.
2:18:45 - Leo Laporte
Actually, a VLAN would do it, right? To segment it, yeah.
2:18:50 - Steve Gibson
A VLAN would do it, yep. You know, it takes more work, and it can limit functionality, and it means you cannot just randomly plug anything in anywhere you'd like. So some ongoing network management discipline will be needed too, always. But this company learned that lesson the hard way.
2:19:09 - Leo Laporte
Put your IoT devices on a separate VLAN, yeah, and don't give them access to the secure VLAN. Wow, that's a great story, isn't it really?
I can't believe that there's enough RAM and memory in a webcam to run Linux. I mean, obviously memory's cheap now, right? So you're going to run Linux on this. I wonder how many IoT devices are running some little Linux kernel in the background. Why not? It's a free operating system. Wow, great story. You know what else would have helped them? A Thinkst Canary, right, a little honeypot. You want me to do a little break here, and then we'll talk about the Bluetooth backdoor that wasn't? Yeah, we're ready for that. This is brought to you today by Thinkst Canary, our sponsor for this segment of Security Now, another great security solution.
These guys get in, they're in the network, they're wandering around. So what do you do to protect yourself? For one thing, just like this company, often, once the bad guy gets into your network, you don't know you've been breached. In fact, on average it takes 91 days for a company that has been breached to find out. Three months. A hacker can wander your network, install stuff, look for security flaws like that webcam. You don't want them in your network at all. So what's the best way to find out if somebody's in your network, or maybe even a malicious insider going where they shouldn't?
The Thinkst Canary. It's a honeypot that can be deployed in minutes, and it can impersonate anything: a SCADA device, a server, a Linux box, an IIS server. I mean, really, there are dozens and dozens of personalities. And these things, by the way... the folks who make the Thinkst Canary are very accomplished white hat hackers. I mean, they teach governments and businesses how to breach networks. They know about this stuff, and they've created something that is very secure, rock solid, but can easily impersonate anything else. And when I say impersonate, it's a perfect impersonation. I have a Thinkst Canary that is impersonating a Synology NAS, right down to the MAC address. Bad guys aren't going to look at it and go, oh yeah, that's fake. It looks in every respect like an unprotected NAS, including the DSM 7 login and everything. Oh, the other thing the Thinkst Canary can do, I should mention, is really cool. Not only are they hardware devices that can assume any personality easily, they can also create files that are like tripwires you can put out throughout your network. They look like spreadsheets or PDFs or DOCXs or whatever you want. I have spreadsheets called things like "employee information" on my network, XLSX files. And that's another thing: a bad guy goes, oh, I've been looking for that. But the minute they touch it, the minute they attack the Synology and try to log in, the minute they try to brute force my fake SSH server, that's a Thinkst Canary, and it's going to immediately tell you you have a problem. No false alerts, just alerts that tell you there's something going on.
We've had a Thinkst Canary for many years now. They've been with us for eight years, and only once has it gone off, although I'm really glad to have one, even on my home network here. That was when Megan got, I won't name the company, an external USB drive, and for some reason it decided, I'm going to go out and look at all the IP addresses and see what's on the other side. They were spying on us, basically, and I got the alert. You can get it as a text message, an email, Slack, syslog. It supports webhooks. They have an API. I mean, any way you want it. Immediately I got the message. I said, that's a 10-dot, it's inside the network, and I went and I found it. I ripped it from the wall and that was that.
The other thing is, it's really fun to choose the profile for your Thinkst Canary device, because it can be anything, and you can change it. It's so easy to change; you can change it every day if you want. You pick the profile, register it with the hosted console for monitoring and notifications, and then you just go, okay, I'm done. Then you wait.
Attackers who breach your network, malicious insiders, other adversaries: they cannot resist. You know, they may say, I'm going to find a webcam that has Linux running on it. Maybe they're going to do that. But before they do that, they're going to go, first let's open this Excel spreadsheet with all the employees' Social Security numbers in it. I think I want to download that sucker, right? Even in this attack you talked about, they didn't go for the webcam first. So the minute they hit your Thinkst Canary, you're going to know. And that's the key: to know they're in the network.
Now, we just have one here. A small operation might have a handful; a bank might have hundreds. It really depends on your operation. But as an example, go to canary.tools/twit. 7,500 bucks a year will get you five of them. That's enough for a pretty good sized business. Spread them around. You want them in every segment, right, on every VLAN. You want them in the places the bad guys are going to go. For that money you get the Thinkst Canaries, but you also get your own hosted console. You get upgrades, you get support, you get maintenance. This is such a good security solution.
Of course, it's not the whole thing. Security is a layered thing, but you've got to have that layer that tells you there's somebody in the network. By the way, if you use the offer code TWIT, T-W-I-T, in the How Did You Hear About Us box, you get 10% off the Thinkst Canary, not just for the first year but forever, for as long as you own it. Also, if you're at all like, well, I don't know, or the boss says, well, I don't know, here's the thing to tell the boss: there's a two-month money-back guarantee, a 60-day money-back guarantee for a full refund. I have to tell you, in all the years, eight years now, that TWiT has been doing these ads, partnered with Thinkst Canary, we've mentioned the refund, and no one has ever claimed it. Because once you get it, once you see it, first of all, you fall in love with it. It's so cool. All you have to do is go to canary.tools/love and you'll see what I mean. It's such a great idea. But also because it works. It does exactly what I just told you, exactly what I said. It's exactly what you need. Visit canary.tools/twit. Don't forget to use TWIT as the offer code. Put it in the How Did You Hear About Us box. Just say TWIT. Bing, bingo, banga, bongo, 10% off. I think this is the greatest solution. I love this.
I remember when we talked to, I think it was Steve Bellovin, right, in Boston, Steve, at our last event. Steve wrote the first commonly known honeypot. You know, he wrote that book about, In Search of the Wild Hacker, or the Wily Hacker, along with Bill Cheswick. Oh, maybe it was Cheswick who wrote it, I think it was Cheswick actually, yeah. And he said it was really hard to write a good honeypot. Now you don't have to; you just plug it in. canary.tools/twit. We thank them very much for their support of Security Now. All right. I am proud of myself, Steve, because I saw this Bluetooth backdoor story, I read it, and I decided not to do it on TWiT on Sunday. There was just something fishy about it.
2:26:41 - Steve Gibson
Tell us what happened, tell us all about it. You got it exactly right, my friend. Okay. So I deliberately titled today's podcast The Bluetooth Backdoor because that's what nearly all of the tech press has been calling it, though in this instance it would only feel like the appropriate use of that loaded term if it were right. Okay. Last Saturday, Bleeping Computer's headline was "Undocumented backdoor found in Bluetooth chip used by a billion devices," and in fact, probably the reason it's made so much news is that there are so many of these things. It is the most popular chip used by radio-connected, Bluetooth and Wi-Fi connected IoT devices. It's from a Chinese firm, Espressif; it's the ESP32, which is the go-to chip. It costs nothing, two euros for one of these things. They're just amazing little 32-bit processors. So there are more than a billion of them. Actually, it was a billion as of two years ago, 2023. The Chinese site was saying, yeah, we've made more than a billion of these things, so it's a lot more than that now. Anyway.
So last Saturday, Bleeping Computer said "Undocumented backdoor found in Bluetooth chip used by a billion devices." Then the next day, on Sunday, they softened that headline, saying "Undocumented commands found in Bluetooth chip used by a billion devices," and to explain the change they wrote: "After receiving concerns about the use of the term backdoor to refer to these undocumented commands, we've updated our title and story. Our original story can be found here," and in that "here" was a link. And I got a kick out of the fact that they actually linked to the Internet Archive for a copy of their own previous page. Okay, so this podcast has spent some time batting around this issue of when is a backdoor not a backdoor, right? You know, would forcing Apple to deliberately and publicly redesign their Advanced Data Protection iCloud synchronization and backup to incorporate a master key be adding a backdoor? In this instance I would say no, because this feature which would then be added to ADP, the master key, would be neither secret nor malicious, whereas the classic definition and use of the term backdoor is both. You know, it definitely needs to be secret, and if it's not secret, it cannot be a backdoor. So that leaves us with the question of malice. In Apple's case, there's clearly no malice anywhere. Thus, what Apple has apparently been asked for by the UK fails to qualify for the term backdoor on both of those counts.
So what about today's news of what nearly everyone is calling a backdoor? We know for sure that what a pair of Spanish security researchers discovered lurking in an astonishingly widely used Chinese microcontroller chip was at least undocumented, and also maybe powerful and prone to abuse if it were to become known by a malicious party. But that part is not even clear. All the reporting said, oh my God, but I'll explain to you what I did and what happened. The intent of why these instructions, these commands, 29 of them, were left undocumented will never be known. My guess is it's just because they're not that important, not because they were meant to be super secret and, you know, allow something to be done. Okay, so here's what we know, and this is from Bleeping Computer's updated coverage, after they toned down the language and backed away from the use of the term backdoor, which was the right thing to do. They said:
The ubiquitous ESP32 microchip, made by Chinese manufacturer Espressif and used by over a billion units as of 2023, contains undocumented commands that could be leveraged for attacks. The undocumented commands allow spoofing of trusted devices (okay, and I'll get back to that later), unauthorized data access, pivoting to other devices on the network (that's a variation of the first case) and potentially establishing long-term presence (okay, because it has flash memory). This was discovered by two Spanish researchers with the security firm Tarlogic, who presented their findings at RootedCon in Madrid last week. A Tarlogic announcement, shared with Bleeping Computer, reads, quote: "Tarlogic Security has detected a backdoor in the ESP32, a microcontroller that enables Wi-Fi and Bluetooth connection and is present in millions of mass market IoT devices." Okay, so that's where everyone got the idea that there was a backdoor, right? The firm themselves, the discoverers of this, clearly labeled it a backdoor in their presentation. They said exploitation of this backdoor would allow hostile actors to conduct impersonation attacks and permanently infect sensitive devices such as mobile phones, computers, smart locks or medical equipment by bypassing code audit controls. Okay, again, we'll come back to that. But the researchers warned, wrote Bleeping Computer, that the ESP32 is one of the world's most widely used chips for Wi-Fi and Bluetooth connectivity in Internet of Things (IoT) devices, so the risk is significant.
In their RootedCon presentation, the Tarlogic researchers explained that interest in Bluetooth security research has waned, but not because the protocol or its implementation has become more secure. Instead, most attacks presented last year did not have working tools, did not work with generic hardware, and used outdated or unmaintained tools largely incompatible with modern systems. Now, I should explain that they're taking this position because that's the thing that they created. What they actually did was create a new set of tools which are modern, which are multi-platform, and which offer the ability to explore Bluetooth connectivity. So their main thing was that they're solving that problem for researchers. And then they used it to do some research, and that's what led them to this discovery.
They said Tarlogic first developed a new C-based USB Bluetooth driver that is hardware independent and cross-platform. This provided direct access to the hardware without relying on OS-specific APIs. Armed with this new tool, which enables raw access to Bluetooth traffic, Tarlogic discovered hidden vendor-specific commands (opcode 0x3F) in the ESP32 Bluetooth firmware that allowed low-level control over Bluetooth functions. Now, again, Bleeping Computer got it exactly right: armed with this new tool, written in C, a hardware-level driver that did not rely on OS-specific APIs, so it was direct to the hardware, they discovered opcode 0x3F in the ESP32 Bluetooth firmware that allowed low-level control over Bluetooth functions. Oh, and Ghidra was also involved, the famous NSA-developed reverse engineering tool that helps to reverse engineer firmware. So they were looking at the firmware in the ESP32 and had a tool that let them poke at the hardware.
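For context on what "opcode 0x3F" refers to: an HCI command opcode is a 16-bit value whose upper six bits are the Opcode Group Field (OGF) and whose lower ten bits are the Opcode Command Field (OCF), and OGF 0x3F is the group the Bluetooth specification reserves for vendor-specific commands. The hypothetical Python sketch below simply packs such a command the way a host hands it to its own controller over a UART (H4) transport; the OCF value and parameter bytes are made up, and this is not Espressif's or Tarlogic's code, just an illustration that these commands travel from host to controller, not over the air.

```python
# Hypothetical sketch of building a vendor-specific HCI command packet.
# HCI opcode = (OGF << 10) | OCF; OGF 0x3F is the vendor-specific group.
import struct

def hci_command_packet(ogf: int, ocf: int, params: bytes = b"") -> bytes:
    opcode = (ogf << 10) | ocf                  # 16-bit opcode
    # H4 (UART) framing: 0x01 marks an HCI command, then the opcode in
    # little-endian order, then the parameter length, then the parameters.
    return struct.pack("<BHB", 0x01, opcode, len(params)) + params

VENDOR_OGF = 0x3F                               # vendor-specific opcode group
pkt = hci_command_packet(VENDOR_OGF, 0x0001, b"\x00")   # made-up OCF and parameter

print(pkt.hex())   # '0101fc0100': opcode 0xFC01 is OGF 0x3F with OCF 0x001
```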
Bleeping Computer said in total they found 29 undocumented hardware commands, collectively characterized as a "backdoor," and now Bleeping Computer has that in quotes, oh good, that could be used for memory manipulation (to read or write RAM and flash), MAC address spoofing (for device impersonation) and packet injection. Espressif has not publicly documented these commands, so either they are not meant to be accessible or they were left in by mistake. I think there's a third option: they didn't think it was necessary. They're not all-powerful Oz commands. They just actually don't matter; you don't need them. So they didn't bother mentioning them, right?
2:36:45 - Leo Laporte
They're there for their internal use. Well, no, they're actually there.
2:36:49 - Steve Gibson
Okay, okay, so they're there.
2:36:53 - Leo Laporte
Okay, go ahead. Yeah, I'm sorry, I'll shut up.
2:36:56 - Steve Gibson
So they have, we have, a CVE issued: CVE-2025-27840. So Bleeping Computer said the risks enabled by these commands include malicious implementations on the OEM level and supply chain attacks. Depending on how Bluetooth stacks handle HCI commands on the device, remote exploitation of the commands might be possible via malicious firmware or rogue Bluetooth connections. Well, not rogue Bluetooth connections. And if you've got malicious firmware, then you're already on the device with malicious firmware, so who cares? They said this is especially the case if an attacker already has root access (again, if you already have root access, you're already on the device), planted malware, or pushed a malicious update on the device (again, already on the device) that opens up low-level access. In general, though, physical access to the device's USB or UART interface would be far riskier and a more realistic scenario. Actually, it's the only possible scenario.
The researchers explained: in a context where you can compromise an IoT device with an ESP32, you will be able to conceal an advanced persistent threat inside the ESP memory and perform Bluetooth or Wi-Fi attacks against other devices (yeah, a rogue device) while controlling the device over Wi-Fi or Bluetooth. Sure, if you're using your own firmware on your rogue device. Our findings, they said, would allow the full takeover of ESP32 chips and the gaining of persistence in the chip via commands that allow for RAM or flash modification. Okay, sure. Also, with persistence in the chip, it may be possible to spread to other devices, because the ESP32 allows for the execution of advanced Bluetooth attacks. You would need those other devices to be vulnerable, and no one says they are. Okay. So Bleeping Computer said that they had contacted Espressif for a statement on the researchers' findings but had not received any comment, and I think that's because the Chinese people said, what? Who cares? Yeah. Okay, so next we need to look at what the researchers have explained about their own technology. I've edited it down somewhat to remove the market-speak and the redundancy. They said, the ESP32... you know, I'm going to skip this because it turns out it doesn't matter. Bottom line.
The rest of their posting talks about the broader scope of their mission, which is to create a platform to support Bluetooth security audits, which is certainly a worthwhile endeavor. So what have we got here? Okay, the Bluetooth HCI. And I should have said that what they talk about in their presentation, over and over and over, is: we found undocumented commands in the Bluetooth HCI. The HCI defines the boundary between the host processor and the Bluetooth hardware controller. That's what it is. HCI is the abbreviation for Host Controller Interface, and the jargon has become standardized. Our listeners will have often heard me talking about adding AHCI support to SpinRite 6.1. AHCI is the Advanced Host Controller Interface that was created to manage SATA-connected mass storage devices. So HCI, Host Controller Interface, is a generic reference describing the hardware boundary, the register set, between a peripheral device and its processor. The processor talks to the peripheral by writing into these registers.
So the Spanish security group designed and developed a technology, created a new capability, that will allow them to audit the operation of Bluetooth registers in devices. And what did they discover? They discovered that by far the most widely used microprocessor, the one that lives at the heart of most IoT devices, contains an array of undocumented HCI register commands which they implied could be received over the chip's Bluetooth radio. But they can't. I deliberately chose to use the word undocumented because it's less freighted with intent than the word secret. I have a picture in the show notes of these commands from their slide, which they presented in Spain. The presentation was conducted in Spanish, and the slide set is all Spanish except where, as we often see, English appears in code snippets.
But staring at the portions of their 46-slide deck, those portions that were understandable to me in English, and also chunks of reverse engineered and disassembled code, I began to get the sneaking suspicion that, while these commands might indeed be undocumented HCI commands which would be executed by Bluetooth hardware, it wasn't clear to me that they were remotely accessible. The researchers appeared to be running their own code on the ESP32 hardware and also reverse engineering pieces of its firmware. Nowhere did they ever talk about remotely connecting to a generic ESP32 and executing an attack. Since in this era of helpful AI you can do translation now, I uploaded the Spanish slide deck to ChatGPT's latest 4.5 model, which was overkill, and asked for a translation into English. It did a beautiful job for me, and my suspicions were confirmed. Now I could read the entire slide deck, beautifully translated. The Tarlogic posting ended by writing:
Over the coming weeks we will publish further technical details on this matter.
It may be that they have more than they're saying, but I don't think so.
The only thing I believe they've discovered is that the ESP32's Bluetooth HCI controller, the Bluetooth hardware in this Espressif chip, contains some commands that are undocumented because documenting them was not important.
Discovering that an HCI controller contains a command, which the host CPU issues to it, that allows the controller to write to main memory could hardly be considered earth-shattering. The host which issues the command is just as able to write to main memory if it wants to, so, big deal. If an unauthorized external Bluetooth radio were able to issue such a command remotely to an ESP32-based device, while presumably providing the data to be written into the system's main memory, and if this capability existed in more than a billion of the devices we're all using, well then, that would indeed be the end of the world as we know it. But the world is still here, and I haven't seen any evidence of that capability in their presentation. I just really think they have made a big mountain out of a little tiny molehill, and in fact it now seems clear that this amounts to host-side access to an HCI controller, and that the threat this poses is more like a mouse hole than a backdoor.
2:46:12 - Leo Laporte
You have a convenient illustration of what that just might look like. Did you generate this with AI? Yes, I did. I think you did. I did, indeed. It's very good.
2:46:24 - Steve Gibson
It's cute. Bleeping Computer noted that Espressif, the creator of more than a billion of these amazing little chips, had not replied to their inquiry. That's likely because they also know that this is nothing. At one point in the presentation the security researcher mentioned cloning another device's MAC address. Whoopie-doo. Sure enough, one of the 29 undocumented commands was change MAC address. Well, that's got to be there somewhere, because you're obviously able to set the MAC address of the device when it comes off the assembly line. You know, that's certainly neither a backdoor nor big news. So anyway, I'm strongly inclined to come away from all this with a conclusion exactly as you did, Leo. It's what you sniffed from the beginning: there's really not much here.
It made some attention-grabbing headline news, but nothing I have seen has suggested that the ESP chip is not still completely secure from external attack. You know, they talk about being able to establish persistence, but if you're running code in a flash-enabled chip, persistence is not difficult to obtain. So, you know, among the undocumented commands is write to flash. But I'm sure you can write to flash from the native instruction set of the chip, so who cares if the hardware Bluetooth controller can also do it. It just seems crazy to me. Maybe something more will be revealed in the future. It seems unlikely, because they would have gone for it in their main security presentation. I think they just found, oh my God, some undocumented commands in the hardware of the chip. Who cares?
2:48:09 - Leo Laporte
It doesn't look like they're... and, most importantly, you need hardware access to the chip to get to them.
2:48:14 - Steve Gibson
Yeah, they're registers. They're registers on the Bluetooth controller. That's all they found: registers on the Bluetooth controller.
2:48:25 - Leo Laporte
There's a whole category of hair-on-fire attacks that require somebody sitting down at the device. This one is even, you know, more ridiculous, because you have to actually connect something to the device so that you can write to it, and so forth. But, right, I even think that the hardware attacks that require somebody on your machine really don't deserve the attention they often get. They should be fixed, of course, but if somebody's on your machine, all bets are off anyway by that time. It doesn't matter, right? They're in. Yeah. So I kind of got that. It's just, I mean, they should have been documented, I guess, right? I mean, who cares, they're
just development tools.
2:49:13 - Steve Gibson
Yeah, I don't even see any reason to document them. They are not necessary for programming Bluetooth. They're useful for managing the chip's deployment, like setting the MAC address so that they all have different MAC addresses. Right, and you know whoopie-doo.
2:49:30 - Leo Laporte
So we discovered how they did that.
2:49:32 - Steve Gibson
Pretty much every chip in the world will do this right.
2:49:34 - Leo Laporte
Yes, yes. If it's got flash, you're going to be able to write to it. Yeah, I said, oh gee, persistence.
2:49:43 - Steve Gibson
Well, that's what flash is for, right? Persistence. Why would it have flash otherwise, right? Right. Okay, so mostly this is a case of the press picking up a headline from a conference and saying, oh my God, you know, ESP32, more than a billion devices, and these guys say it has a backdoor, and it's in Spanish, so I'm not really sure what they said. Well, and it's a great thing, putting in the headline "used by a billion devices."
2:50:14 - Leo Laporte
That's always a good bit of link bait. It did get a CVE number, but yeah, that's not a big deal, to get a CVE number. No, and you're able to just request one.
2:50:26 - Steve Gibson
And the CVE, when you look it up, and I did, under NIST, it says undocumented functions. Right, that's what the CVE is. It's like, oh boy. We've got to scroll down, and you'll see that it shows undocumented functions. Yeah. Is it there?
2:50:47 - Leo Laporte
It's maybe a little higher here somewhere.
2:50:49 - Steve Gibson
I saw it somewhere.
2:50:50 - Leo Laporte
Yeah, so that's important too: just because something's in the National Vulnerability Database doesn't, by itself, mean much. Oh, there it is: Hidden Functionality. That's the CVE name, or the CWE name. Yeah. All right, good. Well, I'm reassured.
2:51:13 - Steve Gibson
Yeah, again, it can't be that you're able to remotely change the programming of an unsuspecting ESP32 chip, or it would be the end of civilization as we know it. That would be a bad thing, and we're still here talking, Leo.
2:51:33 - Leo Laporte
And we're still here. If you could do it via Bluetooth, that would be a bad thing. Yeah. I mean, we've talked about this; it's one of the kind of regular topics on the show, Bluetooth vulnerabilities. There are plenty, right?
2:51:44 - Steve Gibson
Yes, it is a very complex protocol. For the first half of this podcast's nearly 20 years, they were happening all the time.
2:51:53 - Leo Laporte
I remember Bluetooth snarfing.
2:51:56 - Steve Gibson
Oh yeah, it was. And you notice it's sort of slowed; it's gone now.
2:51:59 - Leo Laporte
We don't hear about it.
2:52:00 - Steve Gibson
It's like we got that stuff settled down, you know.
2:52:03 - Leo Laporte
Yeah, Steve Gibson is at GRC.com. That is a great place to go just to browse if you've got an afternoon. Just go to GRC.com. I'll give you a couple of places you want to check. Of course, the first place would be to go get SpinRite, the world's finest mass storage maintenance, recovery and performance-enhancing utility, not just for spinning hard drives but for SSDs as well. If you have storage, you need SpinRite, right? 6.1 is the current version. It's Steve's bread and butter. It's there, you can get it, and it's a must. But there's a lot of other stuff there.
Of course, if you want to email Steve, there's a comment page, but really, go to GRC.com/email. All you're doing is getting approved, getting whitelisted, so you can email Steve. But when you do give him your address, you'll see two boxes, unchecked. They're opt-ins for the regular Security Now weekly newsletter and for a very irregular, from-time-to-time update on what Steve's doing.
And that is all at GRC.com/email. He has the show there as well. Steve has two unique, three unique, four unique... everything he's got there is unique. He's got a 16 kilobit version if you have very limited bandwidth. He's got a 64 kilobit version, which used to be the standard, but it's not anymore; we've gone to 128. But he continues, and I guess as long as you're running FFmpeg, you might as well make a 64 and a 16. He also has the show notes, which are a great thing to have, for the links, to read while you're listening, that kind of thing. But even better, a couple of days after the show comes out, he'll have a complete human-crafted transcript from Elaine Farris, so you can use it for searching. I think if you're going to download the show, you should also download the transcript and the show notes so you have the complete set, the full thing. What else is there? There's ShieldsUP!. There's all sorts of free stuff; he's great about that. All sorts of wonderful utilities. There's information about vitamins and sleeping and all sorts of things. GRC.com. You can also come to our website, twit.tv/sn, for Security Now, and that's where you'll find a copy of the 128 kilobit audio and the video, so you can watch Steve's mustache at work. All of that at twit.tv/sn. There's also a link there to the video on YouTube, a great way to share clips.
I know a lot of times people listen to the show and say, I've got to tell my boss, my friend, my wife about this. You can do that with the YouTube very easily; you can do clips. The best thing, of course, is to subscribe to the podcast. That way you'll get it automatically, the minute we're done, audio or video, available in your favorite podcast client. You can even watch us do this live. I should mention that we do the show right after MacBreak Weekly, Tuesday afternoons, usually around 1:30 to 2 pm Pacific. That would be 5 pm Eastern time, 2100 UTC now that we're on summer time.
And the live streams are, if you're in the club (and of course all the best people are in the club), in the Discord.
But there's also YouTube, which is open to everyone, and Twitch, there's Kick, there's X.com, there's TikTok, there's Facebook, there's LinkedIn.
We stream on all those platforms. There are chats on all those platforms, and I see all of the chats; I have a unified chat interface. Sometimes I mention people's comments, but I'm always reading them. Sometimes I even respond. Steve does not. There was somebody saying Steve, Steve, and I said, no, Steve's not going to answer, he's busy, he's doing a show, my friends. So don't expect Steve to respond, but I will respond if I can. And do join the club; we'd love to have you in the Discord chatting along with us. The club is $7 a month. The best benefit, I'm told, is ad-free versions of all the shows. All the shows. But there's a lot of other stuff, including special stuff we don't put out in public; we just do it in the club, that kind of thing. You'll see a lot of activity in the club, not just about the shows but about everything geeks care about. twit.tv/clubtwit, if you're not yet a member. Steve, I will see you next Tuesday for another thrilling, gripping edition of Security Now.
2:56:31 - Steve Gibson
I'll be right here on March 18th, my friend. See you then.