Security Now 1062 transcript
Please be advised this transcript is AI-generated and may not be word for word. Time codes refer to the approximate times in the ad-supported version of the show.
Leo Laporte [00:00:00]:
It's time for Security Now. Steve Gibson is here. He's worried. He's worried about the future of CISA in the United States. He's worried about Ireland's new lawful interception law that makes spyware legal. He's worried about AI generated malware. Yes, it's here. All that and more coming up next on Security Now.
TWiT.tv [00:00:23]:
Podcasts you love from people you trust.
Leo Laporte [00:00:28]:
This is TWiT. This is Security Now with Steve Gibson, episode 1062, recorded Tuesday, January 27, 2026: Void Link, AI-generated malware. It's time for Security Now, the show where we cover the latest security news, the computer information that you need to know, with this guy right here, the king of the hill when it comes to security, Mr. Steve Gibson. Hi, Steve.
Steve Gibson [00:01:00]:
Leo, great to be with you again for another Tuesday, the last one of January. So say goodbye to January, everybody. I don't know where it went.
Leo Laporte [00:01:09]:
But be happy to let it go.
Steve Gibson [00:01:10]:
I'm not gonna say that. For many, many people it was a bad January, from many perspectives, and certainly cold also. I guess the weather has just gone crazy too.
Leo Laporte [00:01:21]:
Yeah, really freezing cold. Well, so it's only 52 degrees here in California. It's just terrible.
Steve Gibson [00:01:28]:
I don't know what to do.
Leo Laporte [00:01:31]:
All right, what are we covering on the show today?
Steve Gibson [00:01:33]:
Check Point Research — I think they call themselves Check Point Research; Check Point is how we know them — took a close look at what they recently discovered as the first very impressive and very concerning purely AI-generated malware. Because the developer made the mistake — so not a rocket scientist here, or a security guy; kind of an anti-security guy, actually — of leaving one of his server's directories exposed, they were able to get literally an inside look at the production of an AI-generated malware. And there are some important takeaways from that, which we're going to get to. But first we're going to look at, unfortunately, CISA's uncertain future, which I kind of thought had been resolved.
Steve Gibson [00:02:48]:
But no, our old friend Rand Paul has stuck his finger in this and is maybe going to cause some trouble. We'll see. We've got a worrisome new law which has been passed in Ireland which we need to take a look at, because for a while I actually had a working title for the podcast, Leo, which was "The State versus Encryption." And we're going to end up with a couple of stories along those lines, which is why the podcast carried that working title until I saw that we really need to talk about malware and AI. There is a digital rights group in the EU pushing back on some of what seems to be happening, so we're going to take a look at that. I never had a chance to hear what you guys and Alex Stamos on Sunday were talking about relative to Microsoft's acknowledgment that it has turned over some encryption keys to the FBI. We're going to discuss that.
Steve Gibson [00:03:58]:
And again, I didn't hear what happened on Sunday, but I have maybe a surprising takeaway for our listeners relative to Microsoft. I try to be very fair. I know I'm very hard on them most of the time. In this case, I take a different position.
Leo Laporte [00:04:16]:
Yeah, I think you and Alex may actually be on the same page. I'll let you know what he thought about it when we get to it.
Steve Gibson [00:04:22]:
Also, our old friend Alex Niehaus had some really useful and insightful feedback about enterprise AI usage and how it's fraught with some dangers. I want to share that with our listeners. Another listener, Gavin, confesses that he deliberately put a database on the Internet, and explains why. Oh, and there are some worries, Leo, about the need to rewind podcasts, and how there may be a massive backlog growing which we need to deal with. And then we're going to take a look at the emergence of — remember our DVD rewinder from last week?
Leo Laporte [00:05:11]:
No, people have been leaving this show at the end and not rewinding it. I think this is a massive problem.
Steve Gibson [00:05:18]:
It's really — you're not thinking about the next person, right? You're just saying, thank you very much, I took the last napkin, and F you, I don't care.
Leo Laporte [00:05:29]:
So I always do that. Or Lisa yells at me because I'll leave this much milk in the carton, you know, put it back in the fridge. And it's very disappointing to her.
Steve Gibson [00:05:38]:
And Leo, let's not even get into being married. And toilet rolls.
Leo Laporte [00:05:42]:
Oh, yes. Oh, yeah. Good. Oh, yeah.
Steve Gibson [00:05:45]:
And does it come off the front or off the back? But anyway, that's a whole other topic. We do have a picture of the week, which we may get to someday. Do we have four ads, or five? I didn't count.
Leo Laporte [00:05:55]:
We have, I believe, three, which is, for this show, a dearth, a paucity of ads. But I will pause. And sometimes, people might have noticed this, an ad will sneak in after the fact.
Steve Gibson [00:06:09]:
You are a cunning linguist. That's all I have to say.
Leo Laporte [00:06:12]:
So as far as I know, I will be reading three ads. Others may be inserted against your will.
Steve Gibson [00:06:18]:
Okay, so we're going to do our five pauses.
Leo Laporte [00:06:21]:
Yes, as always.
Steve Gibson [00:06:21]:
The pause that refreshes my whistle.
Leo Laporte [00:06:25]:
Always want to keep that whistle wetted. I will pause now to tell you about a sponsor that everybody should know about, because this is exactly right up your alley, Steve. In fact, it's part of your presentation when we go to Florida for Zero Trust World. The problem being, if you are a company, your vulnerability sits right there in that seat, right there at the desk.
Steve Gibson [00:06:54]:
Yep.
Leo Laporte [00:06:55]:
Our show today, brought to you by Hoxhunt. As a security leader, you know your job: you get paid to protect your company against cyber attacks, right? But that is getting harder and harder. Not only are there more cyber attacks than ever, but these phishing texts and emails and messages that are generated with AI are really good. Very deceptive. As I mentioned, I got phished a couple of weeks ago. Here's the problem. You might be using one of those legacy, one-size-fits-all awareness programs. They really don't stand a chance in today's modern world.
Leo Laporte [00:07:30]:
They send at most four generic trainings a year. Most employees ignore them. And in some ways worse, when somebody actually clicks the fake phishing email, they're forced into embarrassing training programs that feel like punishment. And we know that's no way to learn. If you are hating the training, you're.
Steve Gibson [00:07:54]:
Not going to learn anything.
Leo Laporte [00:07:55]:
That's why more and more organizations are trying Hoxhunt. This sounds strange, but Hoxhunt makes it fun. Hoxhunt goes beyond security awareness and actually changes behaviors. And they do it in the simplest of ways, by rewarding good clicks and coaching away the bad. Whenever an employee suspects an email might be a scam, Hoxhunt will tell them instantly, providing a dopamine rush. It gets your people to click, learn, and protect your company. Like gold star stickers and confetti. As an admin, Hoxhunt makes it easy to automatically deliver phishing simulations as often as you want, in every form.
Leo Laporte [00:08:35]:
You want email, Slack, Teams, whatever. And you can use AI to mimic the latest real-world attacks, to make your simulations as good as the real phishing emails. Simulations can even be personalized to an employee based on department, location, and more. And these instant micro-trainings are so much better than the kind of two-hour Flash thing that you've had to sit through. These micro-trainings are quick. They solidify understanding. They drive lasting, safe behaviors. You can trigger gamified security awareness training that rewards employees with stars and badges, boosting completion rates and ensuring compliance.
Leo Laporte [00:09:13]:
Because it's fun. You'll choose from a huge library of customizable training packages, or you can even generate your own with AI. Hoxhunt has everything you need to run effective security training on one platform, meaning it's easy to measurably reduce your human cyber risk at scale. But you don't have to take my word for it. Over 3,000 user reviews on G2 make Hoxhunt the top-rated security training platform for the enterprise, including easiest to use and best results. Also recognized as Customers' Choice by Gartner. Thousands of companies love it. Qualcomm, AES, and Nokia all use Hoxhunt to train millions of employees all over the globe. Visit hoxhunt.com/securitynow today to learn why modern secure companies are making the switch to Hoxhunt.
Leo Laporte [00:10:01]:
That's hoxhunt.com/securitynow. Hoxhunt, like fox hunt, but with an H. So it's H-O-X-H-U-N-T dot com slash securitynow. Instead of hunting the fox, you're hunting the wily hacker. Hoxhunt.com/securitynow. We thank them so much for their support of Steve and the efforts he makes every Tuesday for us. So, I've got a picture of the week, and we —
Steve Gibson [00:10:28]:
It's another popular one. It generated a lot of feedback as these have been recently. I gave this in the title. This is what was once called Yankee Ingenuity.
Leo Laporte [00:10:39]:
Okay, let me put it up on the big screen and we can look at it together here. I'm going to scroll up. Yankee Ingenuity. Okay, you want to describe that.
Steve Gibson [00:10:55]:
So we don't really know what the backstory is, what's going on here, but when we were growing up we probably encountered garden gates where you had a slider that slid into a mating capture retainer.
Leo Laporte [00:11:16]:
I'm sure there's a name if you go to the hardware store. It's some sort of slide bolt or something.
Steve Gibson [00:11:21]:
Yeah. And so you would lift the arm up, slide it over, then put it back down, and gravity would keep it rotated so that it would stay either locked open or locked closed. Anyway, it's meant for some barn, right? A barn door kind of thing. Well, apparently somebody's having some sort of problem with the gas cap cover of their car.
Leo Laporte [00:11:47]:
It's been popping open.
Steve Gibson [00:11:49]:
And what really impresses me, Leo, is they did not leave it looking like, you know, silver chrome. Oh, no, it's been body-painted to match the car perfectly. So unless you looked closely — I mean, if you were walking by it — you might miss the fact that they'd added a barn door closing lock to the outside of their gas cap cover. So, one of our insightful listeners who received it — I got the email out actually early this week. They went out Sunday evening, although Microsoft took it upon themselves to decide that GRC was not trustworthy. So of the nearly 20,000 — it's 19,800 and something — 1,547 were all blocked if they went to outlook.com or hotmail.com. When I saw that that had happened, I was able to collect them, and they went out with no trouble at all last night. So it's like, okay, Microsoft, I guess maybe Sunday freaked you out.
Steve Gibson [00:12:58]:
I don't know, because normally I do it on Monday or Tuesday. Anyway, I mean, it just, you know, had a spasm, and so.
Leo Laporte [00:13:07]:
It's never going to stop, Steve. There'll always be a little bit of this here or there, I'm sure. I'm convinced of it.
Steve Gibson [00:13:13]:
Yeah, well, the spammers are trying to look legitimate, and so there's a value judgment that has to be made. In any event, this listener said, well, you know what that is? I thought, no. He said, that's two-factor authentication. You've got the inside: you have to pull the trigger to release the cap, and then you have to come around outside and use the second factor to slide the bolt over in order to open the cap. I suspect, as most of our listeners do, that the automatic cap closer-holder broke. And so, you know, the cover was flapping in the breeze, and they said, hey, you know, we used to have a barn, but we don't have it anymore. But we do have the lock that used to keep.
Leo Laporte [00:14:01]:
They're probably patting themselves on the back, because initially they used duct tape to hold it closed, and then they decided to really upgrade it with the.
Steve Gibson [00:14:09]:
Slide bolt, which would explain the need to repaint the car in the same color. Yes, that would make sense. Okay. So I've said for years that I have been pleasantly surprised by the success and effectiveness of CISA. You know, it's been an amazing success. The Cybersecurity and Infrastructure Security Agency — awkwardly named, but boy, are they doing a great job. Since its creation 11 years ago, in 2015, CISA has been a huge win for our nation's cybersecurity.
Steve Gibson [00:14:49]:
You know, my default belief is that government has a difficult time getting out of its own way more often than not. So CISA was a welcome, and actually much-needed, exception to that. As this podcast has covered since its inception, they've been able to mandate that government agencies pay needed attention to many specific critical security problems that would otherwise have fallen through the cracks. You know, government agencies have better things to do: "Oh, well, we really don't want to do an update. We'd have to have our network down for, you know, blah blah blah, whatever. And besides, nothing's happened yet, right?" And CISA has also been empowered to set deadlines which had to be honored. Their creation of KEV, the Known Exploited Vulnerabilities catalog, was a brilliant means of focusing those always limited and readily distracted bureaucratic resources where they were needed. And yeah, it's true, 125 security things happen on the second Tuesday of the month.
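[As an aside for anyone who wants to automate around it: CISA publishes the KEV catalog as a machine-readable JSON feed. The sketch below shows how a remediation script might flag entries whose CISA-mandated due date is coming up. The two sample entries are invented for illustration; the field names such as `cveID` and `dueDate` follow the feed's published schema.]

```python
# Flagging KEV catalog entries whose remediation deadline is near.
# Sample entries below are invented; field names mirror the real
# feed's schema (cveID, vendorProject, product, dueDate, ...).
import json
from datetime import date

sample_feed = json.loads("""
{
  "vulnerabilities": [
    {"cveID": "CVE-2026-0001", "vendorProject": "ExampleCorp",
     "product": "ExampleServer", "dueDate": "2026-02-10"},
    {"cveID": "CVE-2026-0002", "vendorProject": "OtherVendor",
     "product": "OtherApp", "dueDate": "2026-06-30"}
  ]
}
""")

def due_soon(feed, today, days=30):
    """Return CVE IDs whose due date falls within the next `days` days."""
    return [
        v["cveID"]
        for v in feed["vulnerabilities"]
        if 0 <= (date.fromisoformat(v["dueDate"]) - today).days <= days
    ]

print(due_soon(sample_feed, today=date(2026, 1, 27)))  # ['CVE-2026-0001']
```

[In practice a script would fetch the live feed from cisa.gov rather than embed sample data, but the filtering logic is the same.]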
Steve Gibson [00:16:05]:
But it turns out that two of those are really critical, and CISA has been instrumental in saying, get to the other things when you can, but do these now, because these really matter. So that's all been good news. The bad news is that CISA was not created to be a permanent entity. Sadly, the Constitution of the United States is completely silent regarding the need for a permanent cybersecurity watchdog agency within the federal government. I guess our forefathers were unable to foresee that. This means that politicians created CISA, and politicians are required to keep CISA funded and authorized. Last week The Record updated us on the state of CISA's continuation, writing that congressional leaders on Tuesday —
Steve Gibson [00:17:01]:
— meaning last Tuesday — released a compromise government funding bill that would once again temporarily extend the life of two key cybersecurity laws. The bipartisan legislation would reauthorize the 2015 Cybersecurity Information Sharing Act — CISA, like the agency — and the state and local cybersecurity grant program, pushing them through the end of September. The extension, in the $1.2 trillion bill — that's the entire funding bill, $1.2 trillion — is the latest short-term solution in a months-long saga for CISA 2015, which provides crucial liability protections — we talked about this just last week — to encourage private companies to share digital threat information with the federal government. And as we've said, they're not going to do it unless they have liability protection. The C-suite executives made that very clear. The Record says both statutes received widespread support from the cybersecurity community and the Trump administration prior to their expiration last year. They received temporary reprieves in the continuing resolution that reopened the government in November.
Steve Gibson [00:18:31]:
The House did approve a bill to extend the grants effort, but there's been no action on the Senate side. Meanwhile, several proposals have been introduced to reauthorize the 2015 CISA long term. Please let that happen. Let's not make this another football that we keep kicking. This we need, and not having it has already stalled things — there's now a gap that needs to be filled, because industry went silent as soon as they lost their liability protections. Anyway, The Record wrote that the House Homeland Security Committee last year passed legislation to renew it for a decade — thank you — with minor updates, but it hasn't been scheduled for a floor vote. A bipartisan Senate duo introduced a bill that would extend the law for 10 years.
Steve Gibson [00:19:26]:
Yes, and provide retroactive protections for companies that shared cyber threat data even after the law lapsed. But, as I mentioned, The Record writes, Senator Rand Paul, chair of the Senate Homeland Security Committee, has drafted a bill that would trash the legal protection outlined in the original statute. Well, thanks. They wrote: House leaders plan to hold a vote later in the week on the spending deal, which boosts defense funding to over $839 billion — with a B. Lawmakers have 10 days to clear the package for President Donald Trump's signature before federal funding is set to lapse for the programs it covers. With the Senate in recess this week — meaning last week — the upper chamber will need to approve the legislation when they return next week — meaning this week — if Congress is going to head off another funding lapse and a partial government shutdown. And as a total aside, there's a lot of conversation now suggesting we may be going into another shutdown over DHS and the reauthorization and increase in other aspects of the budget.
Steve Gibson [00:20:49]:
So we'll see. And I don't know what's up with Senator Rand Paul. He's always something of a wild card, and pretty much a pain in everyone's butt. But after first being elected to the Senate in 2010, he's been re-elected twice since, every six years. So he seems to be what his state of Kentucky wants in a senator. Of course, he shares that position with Mitch McConnell. But in this case, it will be very bad if he gets his way. Again, as we noted last week, the executives of the nation's private infrastructure companies consider their vulnerability and breach disclosure protections to be a critical and crucial feature of this legislation.
Steve Gibson [00:21:34]:
So much so that, in the bills that are being talked about, protections are being made retroactive, because they've said, we need to have that. You asked us to keep talking to you after CISA lapsed, and some of us did. So you need to protect us for that. Anyway, no one can make these executives disclose information which is privately held if they choose not to. So if the government wants to know what's going on, as it should, then protecting those who are voluntarily disclosing is the entire point of this aspect of the reauthorization. You know, we should know in another week or two whether the politicians have screwed up what had been a surprisingly well-designed and well-working system.
Steve Gibson [00:22:31]:
As I said when we talked about this 10 years ago on the podcast, when it happened it was like, oh, great, you know, another Homeland Security agency. Well, then we got surprised, because it was just so effective and so useful. Of course, at the time it was really well managed. It was Chris Krebs who was — was he the original director? I don't know.
Leo Laporte [00:22:56]:
He was the one who was fired because he said the election was secure.
Steve Gibson [00:23:01]:
Right. They looked very closely at the 2020 election. Yeah.
Leo Laporte [00:23:05]:
Yeah.
Steve Gibson [00:23:06]:
So I would imagine he was, because if that was 2020 and it was created in 2015, then it would have only been five years old at that point.
Leo Laporte [00:23:14]:
Yeah. Okay, so Alex Stamos's partner, of course, for a long time was Chris Krebs.
Steve Gibson [00:23:18]:
Right. Okay. So one of the reasons that "The State versus Encryption" was the working title of the podcast, until I got to the news about AI and malware, is Ireland's new lawful — I just love this phrase — lawful interception law. Right.
Steve Gibson [00:23:43]:
They can make whatever laws they want. So the first half of this next piece is the news that Ireland has just passed a new lawful interception law granting the government significant new powers. The short blurb that carried that news — this is all I first saw — said: The Irish government has passed a new lawful interception law. The new legislation grants law enforcement and intelligence agencies the power to surveil any type of modern communications channel.
Steve Gibson [00:24:25]:
It also grants the agencies the right to use covert software for their operations, such as spyware. The new law will also — huh. You don't have to be ashamed or bashful or shy, or pretend you're not doing it anymore. Now it's a law. Now it's legal.
Steve Gibson [00:24:46]:
The new law, this little blurb said, will also require communications service providers to work closely with and aid any government operation. Okay, so add this to the other recent news of pending and enacted legislation. We are clearly witnessing — remember, we talked last week about Germany; basically, we can't pronounce the name of the agency because it's got 25 letters in its name, mostly consonants, but they're doing the same thing. So we're witnessing an accelerating trend of governments legislating themselves sweeping rights to intercept, monitor, and eavesdrop upon pretty much anything they wish. Okay. So this week we have legislation similar to what we talked about in Germany last week, except this has passed.
Steve Gibson [00:25:43]:
It was pending in Germany; Ireland passed this. So I scanned last Tuesday's press release from the Irish government, from which I'm going to excerpt just two notable pieces. The first point talks about the clear need for an update to their very old law. I don't think anybody would argue with that, because the original law being updated by this legislation is from 1993, and, you know, it needs an update. That's non-controversial. But point number two says that the new law includes, quote, "a clear statement of the general legal principle that lawful interception powers needed to address serious crime and security threats are applicable to all forms of communications." So, to call this sweeping.
Steve Gibson [00:26:42]:
You know, I don't know if that's even the right word for it. Right. They specifically write — and the language here is "proposes," but this law passed, just to be clear — the Minister proposes an updated legal framework which is flexible and includes comprehensive principles, policies, and definitions to allow for lawful interception powers to be applied to any digital devices or services which can send or receive a communications message; for example, the Internet of Things, and email and digital messaging devices and services. The legislation will provide for — which is to say, the legislation does provide — a clear statement of general principle that lawful interception powers apply to all forms of communications, whether encrypted or not, and can be used to obtain either content data — they say, the substance of a communication — or related metadata: data that provide information about a communication but not its content, such as a phone call or email's time, date, sender, and receiver, the geolocation of an electronic service, or source and destination IP addresses. But they're specifically also saying: and the content. And it says the legislation will also apply to parcel delivery services.
Steve Gibson [00:28:16]:
The Minister's view, they write, is that effective lawful interception powers can be accompanied by the necessary privacy, encryption, and digital security safeguards. Right. Because they're experts in this, Leo. These legislators, they know their crypto. It says: In June 2025, the EU Commission published a, quote, "roadmap for lawful and effective access to data for law enforcement," unquote, which stated that terrorism, organized crime, online fraud, drug trafficking, child sexual abuse — we knew we were going to get to the kids — online
Steve Gibson [00:29:00]:
sexual extortion, ransomware, and many other crimes all leave digital traces. Around 85% of criminal investigations now rely on electronic evidence. Requests for data addressed to service providers tripled between 2017 and 2022, and the need for these data is constantly increasing. I love the fact that they treat the word "data" as plural. I always notice that. Right. I know, it's just.
Steve Gibson [00:29:31]:
It's so nice. It's refreshing to see it. But it always surprises me, because I don't remember to do it. They said the Commission paper includes proposals to deliver a technology roadmap on encryption issues with expert input, and emphasizes the need to reconcile technology and lawful access concerns. Oh, gee, you think? Through industry standardization activities, this EU initiative complements the Minister's proposed approach to reforming the law on interception in Ireland, and it will inform the development of the General Scheme — and that's capital G, capital S.
Steve Gibson [00:30:14]:
So the General Scheme is something that they're going to inform and develop. Okay. I have, of course, much to say about this, but I want to first share the press release's fourth point regarding, quote, "the inclusion of a new legal basis for the use of covert surveillance software as an alternative means of lawful interception to gain access to electronic devices and networks for the investigation of serious crime and threats to the security of the State." They write: The Minister also proposes to provide a legal basis for the use of covert surveillance software as an alternative means for lawful interception to gain access to electronic devices and networks for the investigation of serious crime and threats to the security of the State. This is used legally in other jurisdictions for a variety of purposes when necessary, such as gaining access to some or all of the data on an electronic device or network, covert recording of communications made using a device, or disrupting the functioning of a personal or shared IT network being used for unlawful purposes. The Minister proposes to take into account a 2024 report from the European Commission for Democracy through Law — the Venice Commission — to the Council of Europe on this subject, which was titled, quote, "Report on a rule of law and human rights compliant regulation of spyware." So, in other words, conduct that has historically been denied by governments, which were doing it anyway, and which no state agency would admit to using or doing, is now being ratified into law and made explicitly legal. I believe that any objective observer who's witnessed the earlier saber rattling and, more recently, both the pending and enacted legislation that governments seem determined to pursue would have to conclude that we are currently in an environment of slowly eroding privacy protections.
Steve Gibson [00:32:46]:
Encryption happened, right? I mean, the math happened. Everyone appreciated knowing that their communications were private — it was just kind of like, okay, thanks, that's nice to have. Then bad guys started using it, and they soon discovered that it protected them from law enforcement. While encryption was not by any means created to protect criminals, the privacy it affords doesn't know or care whether you're doing good or breaking the law. Your communications are encrypted. So when bad guys began hiding behind the same encryption that everyone else was using, because it was there, law enforcement quite reasonably asked providers for the contents of the bad guys' encrypted messages. And they were told that the system had been deliberately designed to provide absolute communications content privacy for all of its users, regardless of their use, and that the providers of this technology were unable to comply with lawful court orders to turn over their users' data.
Steve Gibson [00:34:10]:
They said they did not have that data, and they had no means of obtaining it. Now, that stumped the world's governments for a few cycles, until someone had the bright idea to simply require the world to work differently. They said: we all agree that citizens have fundamental rights to privacy, except in cases where that privacy is being abused and is not in the public interest. So we've decided that we will determine when and where people should have privacy. And since we're a nation of laws, we're going to make it legal to do whatever we need to in order to obtain the privacy-violating access to our citizens' communications that we've determined we need to have. Always, of course, in support of the greater good. And besides, think of the children. That same objective observer I talked about before would see that we're currently in a period of transition. The truth of encryption caught the world's governments off guard.
Steve Gibson [00:35:21]:
They've all seen the same movies that we have. Those movies, think about it, uniformly depict both hackers and intelligence services cracking the encryption whenever they were asked to, whenever it was really necessary to do so. So everyone knows that really good encryption just takes somewhat longer to crack, right? That's what the movies all showed us. The politicians just assumed that was true. Why wouldn't they? They believed that was the way encryption really worked, right up until they encountered the truth of today's encryption. They didn't really understand that modern encryption is absolutely unbreakable.
Steve Gibson [00:36:09]:
That's what the industry created, period. So what we've seen is that it took them several rounds of stumbling and failed legislation, and trying to figure out how to ask for what they wanted, to finally figure out that what's actually needed is for them to outlaw any encryption that no one can break. They want the encryption we have in the movies, and they're going to keep writing and rewriting legislation until they get it. So the formal legislation of the use of spyware is just the next step along that path. Now they're saying, we're going to make our use of spyware legal. It will be lawful if we decide that we need to deploy it in order to obtain access to encrypted communications. Another step down that path.
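["Absolutely unbreakable" here is just arithmetic. A back-of-the-envelope sketch, assuming a hypothetical attacker who can somehow test one trillion keys per second:]

```python
# Years needed to try every possible key of a given length, at an
# assumed (and very generous) rate of one trillion guesses per second.
GUESSES_PER_SECOND = 10**12
SECONDS_PER_YEAR = 365 * 24 * 3600

def years_to_exhaust(key_bits):
    """Worst-case years to brute-force a key of the given bit length."""
    return 2**key_bits / GUESSES_PER_SECOND / SECONDS_PER_YEAR

for bits in (56, 128, 256):
    print(f"{bits:3d}-bit key: {years_to_exhaust(bits):.2e} years")
```

[At that rate a 56-bit DES key falls in under a day, which is why DES died; a 128-bit key takes on the order of 10^19 years, roughly a billion times the age of the universe. That exponent is the "truth of today's encryption" that legislation cannot change.]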
Steve Gibson [00:37:08]:
So we're not yet where we're going to end up. But again, our objective observer of the last several years would have to conclude that the world's governments, their law enforcement, and intelligence agencies will not be satisfied until it's possible to obtain access to the communications of, probably, anyone they desire.
Leo Laporte [00:37:29]:
It strikes me this is a pretty savvy move on the part of the Irish government, because I think what they're recognizing is: well, we can't demand clear text from Signal, WhatsApp, and all these companies. So the next step is — and we've talked about this before — to go pre-encryption, to go where the messages are in plain text, that is, on your device. And to do that, they need the spyware. So I think this is the next stage. This is saying, all right, I get it: Signal is going to withdraw from Ireland if we make it illegal to have strong encryption. So, oh, I got it, we'll just get on everybody's phone. This is the next step.
Leo Laporte [00:38:18]:
The step after this is what Russia and China do, which is mandate that you put a special app on the phone so that we can see everything that's going on. Now, the interesting thing is they may have to go to that point, because here's where they're technically less literate: I think they may overestimate the ability of spyware to do this. Right? These kinds of exploits are not easy to get.
Leo Laporte [00:38:46]:
No, they're generally one time use because you know Apple will patch it the minute they figure it out.
Steve Gibson [00:38:52]:
Exactly. As soon as anyone finds it, they're able to say whoops, you know, so.
Leo Laporte [00:38:57]:
This may be where they're, you know... before, they thought, oh, we can break encryption. Now maybe they're starting to realize we can't. Oh, but spyware! But maybe they don't really understand that it's not as trivial as you might think. I mean, I guess the NSA had its tools. That's what we learned from Edward Snowden.
Steve Gibson [00:39:14]:
And you're right, it may be that where we end up, where, for example, the EU ends up, is requiring an app on everyone's phone.
Leo Laporte [00:39:24]:
I think it's the only. It's the ultimate. Right?
Steve Gibson [00:39:26]:
Yep.
Leo Laporte [00:39:27]:
Everybody has to run this app, and then they can see everything. And it's all pre-encryption. So Signal? You can still use Signal.
Steve Gibson [00:39:36]:
Well, yes. PIE, you know, pre-Internet encryption, was a great thing once.
Leo Laporte [00:39:42]:
Now it's pre-Internet decryption. It's PID.
Steve Gibson [00:39:44]:
Yeah, yeah.
Leo Laporte [00:39:49]:
But the other good side of this is, as you say, the math happened. It's well known how to do encryption now.
Steve Gibson [00:40:00]:
The counter argument is when it's illegal, only the criminals will use it.
Leo Laporte [00:40:04]:
Right. Or people who are motivated, or smart enough, to figure out how to do it anyway.
Steve Gibson [00:40:11]:
Well, again, I mean, they're going to outlaw their own inability to spy.
Leo Laporte [00:40:18]:
Right.
Steve Gibson [00:40:19]:
In which case you will be a criminal even if you're just encrypting to talk to your mom. If they want to see what you said to mom and you're unwilling to give them the keys, then you're guilty of that.
Leo Laporte [00:40:31]:
You know, I said this in an interview 25, maybe 30 years ago: that ultimately hackers might be the freedom fighters of the 21st century. That the people who understand how to get around these things may actually be the people who are fighting for our freedom. Yeah.
Steve Gibson [00:40:47]:
Neo.
Leo Laporte [00:40:48]:
Neo, yeah, right. Neo. This is the point of the Matrix. All right. Do you want to refresh? Do you want to hydrate? Okay, he's nodding, folks. You can't hear it, but his head is vigorously bobbing up and down and he has a gigantic mug of joe in his hand. Our show today, actually a brand new sponsor I want to tell you about.
Leo Laporte [00:41:14]:
I'm very excited to welcome Trusted Tech to Security Now. This is something you may need. If you're using Microsoft 365, there's a pretty good chance you're paying for licenses you don't need, or it can go the other way: you might be missing ones you do. And let's not forget that coming in July, Microsoft is going to implement a significant price increase for M365, and with it there'll be a lot of nuance, a lot of subtleties. Trusted Tech helps businesses of all sizes get the most out of their Microsoft investment by ensuring that their M365 environment is well supported and aligned with how the business actually operates. So you're not spending too little, you're not spending too much.
Leo Laporte [00:41:58]:
It's just right. But that's something it's nice to have some help with. Microsoft licensing is very complicated, and the options can vary widely. But Trusted Tech, they're the experts. Their team helps organizations understand what they have, what they need, and how to get the most out of what they're paying for. If you want to make sure you're getting M365 done right, Trusted Tech can help you, and they're ready right now.
Leo Laporte [00:42:23]:
You can go there and get a free Microsoft 365 licensing consultation. It's very straightforward, won't take much time. They're really good at this. These are the pros. Visit trustedtech.team, and don't forget this part: /securitynow365. That's trustedtech.team/securitynow365, where you get a clear, data-backed view of your current licenses, optimization opportunities, and next steps. And if you're wondering, Kevin Turner, former Microsoft COO, has very good things to say about Trusted Tech. He vouches for them.
Leo Laporte [00:43:02]:
He says, Trusted Tech has an incredible customer reputation, and you have to earn that every single day. The relentless focus you guys have on taking care of customers gives them the value and differentiates you in the marketplace. That's high praise from a guy who knows. Trusted Tech also can elevate the Microsoft support experience with its certified support services. You can ask them about that. And by the way, some of the biggest and best use Trusted Tech for their certified support services: enterprises like NASA, Netflix, Neuralink, Apple, Intel, Google and Lockheed Martin. You couldn't get a better list of clientele, and they're saving 32 to 52% compared to the average Microsoft Unified support agreement.
Leo Laporte [00:43:51]:
Great support for less. Whether you're looking to fine-tune your Microsoft 365 licensing, or improve the way your organization receives proactive Microsoft support, or both, Trusted Tech offers free consultations to help you understand your options. This is a name you need to know. Go to trustedtech.team/securitynow365. You can submit a form to get in contact with Trusted Tech's Microsoft licensing engineers, the guys who know. trustedtech.team/securitynow365.
Leo Laporte [00:44:26]:
Use that address. Make sure you do, because that way they'll know you saw it here. trustedtech.team/securitynow365. And we thank them for believing in us and supporting the mission of Steve Gibson and Security Now. Thank you, Trusted Tech. Back to you, Steve.
Steve Gibson [00:44:46]:
So the day after Ireland proudly enumerated the various features of its newly passed expansive legislation, EDRi, the European Digital Rights organization, perhaps in response to Ireland's announcement, posted under the headline: EDRi launches new resource to document abuses and support a full ban on spyware in Europe. And you know, okay, good luck. It seems that the European Union has their own equivalent of the EFF, and it's EDRi. Their posted piece begins by stating the context: Europe's spyware crisis remains unresolved. And they write: spyware remains one of the most serious threats to fundamental rights, democracy and civic space in Europe. And of course, Leo, as you just pointed out, spyware is where Europe wants to go, because they want the access that they can't otherwise get. EDRi said: over the past years, repeated investigations have shown that at least 14 EU member states have deployed spyware against journalists, human rights defenders, lawyers, activists, political opponents and others. Notice, we're not saying criminals; we're saying people we don't like for one reason or another.
Steve Gibson [00:46:18]:
We're going to just spy on them. So who imagines that that won't accelerate hugely if spyware is made legal anyway? That's me, edri said. These cases have revealed the reality of an opaque, dangerous market that thrives on exploiting vulnerabilities and endangering us and the state's reluctance to provide any accountability or justice for victims. Right, so they're going to legalize. They're going to legalize what they've been doing. Despite the findings of the European Parliament's PEGA inquiry committee in 2023 and the push from human rights organizations, the European Commission has so far refused to follow binding legislation to prohibit spyware. Not only that, it has done nothing. Right now, no EU wide red lines exist against the use of Spyware.
Steve Gibson [00:47:15]:
Well, right, 14 states have done it and they want to be able to keep doing it. So they wrote: this means that victims lack effective remedies, authorities face no scrutiny, and commercial spyware vendors continue to operate with near-total impunity, enriching themselves by violating human rights and even benefiting from European public funding. Because, after all, this is taxpayer money, and this spyware is not cheap. At the same time, they said, this political inaction is increasingly being challenged. Investigative journalists, researchers and civil society organizations have continued to expose spyware's human impacts and the opaque markets behind its development and deployment. A broad coalition of civil society and journalism organizations has openly called on EU institutions to end their inaction and to adopt a full ban on commercial spyware. Adding to this push, EDRi has also adopted a comprehensive position paper calling for a full ban on spyware in the European Union as the only possible path forward from a human rights perspective. So, you know, basically the battle is escalating, and it's being made more visible and more public. We've got EU states wanting to legalize their use of spyware, and the human rights and privacy protecting organizations saying, let's make it very clear that this should not be legal.
Steve Gibson [00:48:55]:
They said: our collective refusal to accept the normalization of the use of spyware is also visible inside the European Parliament. On the 21st of January this year in Strasbourg, an informal interest group against spyware was launched, bringing together MEPs from across political groups with the aim of maintaining scrutiny and challenging the Commission's inaction. While this does not replace legislative action, it signals that political pressure is growing instead of fading. Right. Like I said, it's becoming more and more public, so we'll see what happens. This spyware document pool that this posting introduces is a really terrific piece of work. I'm only going to share a tiny piece of it, but I've dropped a link to the entire pool into the show notes.
Steve Gibson [00:49:55]:
The end of the URL is spyware-document-pool, and it's at the top of page 6 in the show notes. The piece I wanted to share from it addressed the nature and the size of the commercial spyware market. They wrote: the commercial spyware market has grown rapidly over the past decade. This market is now worth billions of euros, driven by the sale of these tools to governments, law enforcement agencies, and sometimes private actors. Its growth is fueled by an ecosystem that combines technological sophistication with near-total opacity, allowing companies to operate across borders and evade accountability. This makes spyware a highly profitable, yet extremely dangerous sector where abuses remain hidden until uncovered by researchers or investigative journalists.
Steve Gibson [00:51:00]:
The global spyware industry is estimated to be worth on the order of 12 billion euros per year.
Leo Laporte [00:51:09]:
It's illicit though, right?
Steve Gibson [00:51:11]:
I mean, absolutely. It is not legal anywhere. Right.
Leo Laporte [00:51:14]:
That's amazing.
Steve Gibson [00:51:16]:
12 billion euros customers are paying. You know, Maduro in Venezuela would be a typical customer, because he's got lots of money and he would like to spy on anybody who opposes him publicly.
Leo Laporte [00:51:34]:
Well, and now they're going to get the Irish government as a customer. So that's good, right?
Steve Gibson [00:51:39]:
More than 80 governments have contracted for commercial spyware, according to the UK's cyber security agency.
Leo Laporte [00:51:49]:
Well, that's like half. That's like Everybody.
Steve Gibson [00:51:52]:
Yeah. In 2023, there were at least 49 distinct vendors, along with dozens of subsidiaries, partners, suppliers, holding companies and hundreds of investors across the supply chain. 56 of the 74 governments identified by the Carnegie Endowment procured commercial spyware from firms either based in or connected to Israel. The Israeli firm Paragon was acquired in 2024 by an investment firm in a deal worth up to 900 million euros. And this is the market that Ireland, as we have been saying, has just taken out of the shadows and made legal for their own use, for what they're calling lawful interception.
Leo Laporte [00:52:49]:
Wow.
Steve Gibson [00:52:49]:
The other piece of data that I thought our listeners would find interesting was about the market for the vulnerabilities that enable the creation and deployment of this spyware. They write: the buying and selling of zero-day vulnerabilities is closely linked to the spyware market, as these flaws allow spyware to bypass security protections and operate undetected. The vulnerabilities market is dangerous because it magnifies risk. A single zero-day can compromise millions of devices. Once a vulnerability is found, the risk is that anyone can exploit it. And they're saying, by comparison, if a good guy finds a zero-day, they report it, probably receive a bounty, and it's removed from the ecosystem; it's removed from the device.
Steve Gibson [00:53:45]:
However, spyware using a zero-day never wants to disclose it. They want to use it as long as they can, so that zero-day remains present until it's somehow discovered, thus magnifying risk. Also, it drives innovation in spyware: spyware vendors continuously adapt their tools to exploit newly discovered vulnerabilities. Of course, as we know, it also drives Apple to keep revising their chips in this ongoing cat-and-mouse battle against what the spyware is able to do. It lacks accountability: vulnerabilities are traded secretly with minimal regulation, creating an ecosystem with no rules. That poses a risk to all of us. Concentration multiplies risk.
Steve Gibson [00:54:33]:
Many people are using only two OSes, Android and iOS, and some apps are globally used: WhatsApp, Gmail, and so forth. Once someone breaks into one of these systems, they can have access to hundreds of millions of devices. And Leo, this is the point you often make about our monoculture. You know, the fact that there's basically either Android or iOS; there are not 20 different OSes, each struggling to maintain their own security. So this makes the point that because we have such a vertical and narrow selection of platforms, you find a problem, you get access to a huge chunk of the world. They wrote about what a zero-day vulnerability costs via brokers.
Steve Gibson [00:55:28]:
Okay, so this is what the payout is for zero-day vulnerabilities today. Five to seven million dollars for exploits targeting an iPhone. That is, you find a zero-day for an iPhone today, and through a broker you can obtain between 5 and 7 million dollars. Android phones get up to 5 million. Chrome and Safari zero-days are between 3 and 3.5 million dollars. WhatsApp and iMessage pull 3 million for WhatsApp and 5 million for iMessage.
Leo Laporte [00:56:06]:
But this proves my point. They wouldn't be worth that much if they were so easy to use and you could use them in a widespread fashion. These are very targeted, very specific attacks.
Steve Gibson [00:56:17]:
Yes, yes. And the reason they're getting that much money is of course the spyware vendors then turn around and charge that much money per customer to the nation states. Yeah, yes, exactly. Which ultimately the taxpayers finance.
Leo Laporte [00:56:34]:
Yeah. Oh, that's nice.
Steve Gibson [00:56:35]:
You know, governments don't generate their own cash. In 2024, Google's Threat Analysis Group (TAG) reported that 20 out of the 25 vulnerabilities found in their products in 2023 (so that's Android and Gmail) were used by spyware vendors to perform their attacks. As of June 2025, more than 21,500 new vulnerabilities had already been published. So we're seeing a rate of 133 new vulnerabilities per day. Not all high-quality zero-days in iOS, obviously, but broad spectrum: 133 vulnerabilities of all types, everywhere, are being found per day. They finish, or I'm finishing quoting this piece of it, by writing: even though at least 14 EU countries are reported to have used commercial spyware, regulation in Europe remains entirely absent.
Steve Gibson [00:57:53]:
So Germany is saying, we want to do this. Ireland is saying, now we can do this. The European Union is, for whatever reason, making noises like, oh, this is bad, but they're not actually taking any action. So to say that the future of encryption currently exists in a state of tension and uncertainty would, I think, be no overstatement, given the reality of the overwhelming power of the world's governments and the necessity for vendors to abide by their laws. Right? I mean, as we know, all Signal can do is say, well, we're leaving. They just can't ignore the laws in the prevailing regions where they want to operate. As much as I wish it were not the case, I do not see the interests of the EFF and EDRi ultimately winning out here.
Steve Gibson [00:58:51]:
Governments are never going to be satisfied until and unless they're able to intercept and monitor the communications of specific groups of individuals under the order of their courts, at a minimum. That's clearly the path that we're on. And as for the legal use of absolute encryption, I would say enjoy it while it lasts. Eventually only criminals will be able to use unbreakable encryption. You know, its use will have been criminalized, so that those who do use it, as I said earlier, will be guilty of at least that. And I think that's where we're headed, Leo. I mean, governments are not going to let this go. They're just going to keep objecting.
Steve Gibson [00:59:42]:
And it's unfortunate too, because, you know, pre-Internet, when law enforcement had to use more analog means, you know, wiretaps and physical searches, everything wasn't binary either. Like with encryption: you have encryption or you don't. In the analog world there's, you know, hiding stuff in a mattress. I mean, it was a different world. Now it's absolute. It would almost be better if encryption actually worked the way it did in the movies, where it was very, very, very hard to break, but where, if you really, really needed something, you could get it. Unfortunately, what's going to happen is governments are going to legislate themselves the ability to flip a switch and have it all. They're going to say, you know, a phone operating within the European Union must have our software on it. And oh, it'll have some benefits.
Steve Gibson [01:00:50]:
You can use it to take the train and fly, and, you know, it'll stand in for you and be a digital ID and other things. And it'll also probably be able to capture what they want to, when they want to.
Leo Laporte [01:01:09]:
I mean, you could absolutely have a program on a phone that would see all plain text, you know, everything that was typed in or dictated in.
Steve Gibson [01:01:20]:
Yep.
Leo Laporte [01:01:21]:
Before it went into an encrypted channel.
Steve Gibson [01:01:22]:
Yep.
Leo Laporte [01:01:23]:
That wouldn't be hard to do. You'd have to violate maybe some of Apple's rules. But if you're the government, you say, oh, Apple, you don't have to approve this app. We're just going to put it on every iPhone.
Steve Gibson [01:01:34]:
Yeah.
Leo Laporte [01:01:35]:
You can see everything.
Steve Gibson [01:01:36]:
Any macro program that we're used to is able to watch what you do, capture those actions and then store them. Well, it could also be capturing the keyboard.
Leo Laporte [01:01:47]:
This argues very strongly for an open source operating system. I wish I had a phone that I could really use with an open source operating system. And now, with vibe coding, I could probably, before this show's over, seriously code one up. You know, I'd say: use NaCl or some well-known crypto library that's reliable, and I would like you to write me an encryption and decryption program. And I'm going to send my friend Steve the decryption program. I think that would work. So it's going to be very difficult to control this. This is like a print-your-own-gun thing.
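The kind of program Leo describes really is tiny once the math does the work. Here's a minimal sketch using only Python's standard library, with the one-time pad (the simplest provably unbreakable scheme) standing in for the NaCl-style authenticated encryption he mentions; a real program should use a vetted library such as PyNaCl rather than anything hand-rolled:

```python
import secrets

def otp_encrypt(plaintext: bytes) -> tuple[bytes, bytes]:
    """Encrypt with a one-time pad: a fresh random key as long as the message.

    Information-theoretically unbreakable, provided the key is truly random,
    never reused, and exchanged out of band.
    """
    key = secrets.token_bytes(len(plaintext))
    ciphertext = bytes(p ^ k for p, k in zip(plaintext, key))
    return key, ciphertext

def otp_decrypt(key: bytes, ciphertext: bytes) -> bytes:
    """XOR with the same key recovers the plaintext exactly."""
    return bytes(c ^ k for c, k in zip(ciphertext, key))

if __name__ == "__main__":
    key, ct = otp_encrypt(b"meet me at the usual place")
    assert otp_decrypt(key, ct) == b"meet me at the usual place"
```

The catch, of course, is key distribution: Leo still has to get the key to Steve over some channel the government can't see, which is exactly the problem public-key systems like those in NaCl solve in practice.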
Steve Gibson [01:02:26]:
I mean, yes, it is difficult to control. But that encryption and decryption program that you just mentioned, hypothetically, it uses OS APIs.
Leo Laporte [01:02:38]:
Right.
Steve Gibson [01:02:39]:
So I mean, the app doesn't actually have direct access to the X-Y coordinates that the user's touching on the screen. That service is provided by the OS, so it can always be tapped at that level.
Leo Laporte [01:02:56]:
Yeah. I mean you could capture scan codes from a keyboard, but there's nothing to keep the OS from seeing those as well.
Steve Gibson [01:03:02]:
Yeah. You need to send me the LeoPhone.
Leo Laporte [01:03:05]:
Right.
Steve Gibson [01:03:06]:
And the LeoPhone is open source.
Leo Laporte [01:03:09]:
We'd have open source hardware and open source software, and make sure there's no government intrusion on either.
Steve Gibson [01:03:16]:
Yeah.
Leo Laporte [01:03:16]:
Well, you know, there are people who will be strongly enough incentivized to do that. And ironically, those are the people the government wants to catch. The normal people who aren't doing that? We're sitting ducks.
Steve Gibson [01:03:33]:
Yeah. Which brings us to Microsoft and BitLocker.
Leo Laporte [01:03:40]:
Oh yeah.
Steve Gibson [01:03:41]:
After this next break.
Leo Laporte [01:03:42]:
Okay, and we'll talk about this. This is a news story this week which we talked about on TWiT, and Alex Stamos, who is a very well respected security guru, had his thoughts. I want to hear what you have to say about it, and I'll give you Alex's thoughts as well.
Steve Gibson [01:03:55]:
Perfect.
Leo Laporte [01:03:55]:
Coming up on Security Now. Our show today brought to you by Zscaler, the world's largest cloud security platform. You know, we talk about AI, and if you're a business, I think it's just generally accepted that if you're not using AI, that would be kind of the equivalent of a business saying, well, we don't really need a telephone either; we'll just take orders by carrier pigeon, I guess. So the rewards of AI cannot be ignored. But there are risks, and those should not be ignored either. And the risks aren't just bad guys. The risk is also loss of sensitive data through the use of AIs, both local and SaaS. And then of course there's the attacks against enterprise-managed AI, the prompt injection.
Leo Laporte [01:04:44]:
And bad guys are also using generative AI, and that's really increasing their capabilities. They can rapidly create phishing lures that are pitch perfect. They can write malicious code. They can automate data extraction. We're going to talk about writing malicious code in just a little bit; it's happening now. So there's all these issues. There were 1.3 million instances of Social Security numbers leaked to AI applications by businesses using AI applications. Right.
Leo Laporte [01:05:14]:
Not intentionally. ChatGPT and Microsoft Copilot together saw nearly 3.2 million data violations last year. There is a solution. It's time for a modern approach: Zscaler's Zero Trust plus AI. Zero Trust removes your attack surface. It secures your data everywhere. It safeguards your use of public and private AI. Yeah, Zscaler can do that.
Leo Laporte [01:05:39]:
They can even protect you against ransomware and AI-powered phishing attacks. Check out what Siva, the director of security and infrastructure at Zuora, says about using Zscaler. Watch this: AI provides tremendous opportunities, but it also brings tremendous security concerns when it comes to data privacy and data security. The benefit of Zscaler, with ZIA rolled out for us right now, is giving us the insights of how our employees are using various gen AI tools. So, the ability to monitor the activity, make sure that what we consider confidential and sensitive information, according to the company's data classification, does not get fed into the public LLM models, et cetera. With Zero Trust plus AI you can thrive in the AI era. You can stay ahead of the competition and you can remain resilient even as threats and risks evolve.
Leo Laporte [01:06:29]:
Learn more at zscaler.com/security. That's zscaler.com/security. We thank them so much for supporting Security Now and Steve Gibson. All right, let's talk about this BitLocker thing, okay?
Steve Gibson [01:06:45]:
So on the heels of that news and EDRi's pushback comes Microsoft's admission that they provided BitLocker keys to the FBI when asked. The headline of Thomas Brewster's piece in Forbes, which set off this firestorm of discussion and controversy, was, quote, Microsoft gave FBI keys to unlock encrypted data, exposing major privacy flaw, with the tagline: The tech giant said it receives around 20 requests for BitLocker keys a year and will provide them to governments in response to valid court orders. But companies like Apple and Meta set up their systems so such a privacy violation is not possible. Okay, so here's what we know from Forbes' reporting. Thomas wrote: Early last year, the FBI served Microsoft with a search warrant, asking it to provide recovery keys to unlock encrypted data stored on three laptops. Federal investigators in Guam believed the devices held evidence that would help prove individuals handling the island's Covid unemployment assistance program were part of a plot to steal funds. The data was protected with BitLocker, the software that's automatically enabled on many modern Windows PCs.
Steve Gibson [01:08:19]:
To safeguard all the data on the computer's hard drive, BitLocker scrambles the data so that only those with a key can decode it. It's possible for users to store those keys on a device they own, but Microsoft also recommends BitLocker users store their keys on its servers for convenience. While that means someone can access their data if they forget their password, or if repeated failed attempts to log in lock the device, it also makes them vulnerable to law enforcement subpoenas and warrants. In the Guam case, Microsoft handed over the encryption keys to investigators. Microsoft confirmed to Forbes that it does provide BitLocker recovery keys if it receives a valid legal order. Microsoft spokesperson Charles Chamberlain said, quote, while key recovery offers convenience, it also carries a risk of unwanted access, so Microsoft believes customers are in the best position to decide how to manage their keys, unquote. He said the company receives around 20 requests for BitLocker keys per year, and in many cases the user has not stored their key in the cloud, making it impossible for Microsoft to assist.
Steve Gibson [01:09:38]:
The Guam case is the first known instance where Microsoft has provided an encryption key to law enforcement. Back in 2013, a Microsoft engineer claimed he had been approached by government officials to install backdoors in BitLocker but had turned the requests down. Senator Ron Wyden said in a statement to Forbes, quote, it is simply irresponsible for tech companies to ship products in a way that allows them to secretly turn over users' encryption keys. Allowing ICE or other Trump goons to secretly obtain a user's encryption keys is giving them access to the entirety of a person's digital life, and risks the personal safety and security of users and their families, unquote. Ron Wyden is, of course, a Democrat. Law enforcement regularly asks tech giants to provide encryption keys, implement backdoor access or weaken their security in other ways, but other companies have refused. Apple in particular has repeatedly been asked for access to encrypted data in its cloud or on its devices. In a highly publicized showdown with the government in 2016, Apple fought an FBI order to help open phones belonging to terrorists who shot and killed 14 in San Bernardino, California. Ultimately, the FBI found a contractor to hack into the iPhone.
Steve Gibson [01:11:16]:
Privacy and encryption experts told Forbes the onus should be on Microsoft to provide stronger protection for consumers' personal devices and data. Apple, with its comparable FileVault and Passwords system, and Meta's WhatsApp messaging app also allow users to back up data on their apps and store a key in the cloud. However, both also allow the user to put the key in an encrypted file in the cloud, making law enforcement requests for it useless. Neither Apple nor Meta are reported to have turned over encryption keys of any kind in the past. Matthew Green, cryptography expert and associate professor at the Johns Hopkins University Information Security Institute, said, quote, this is private data on a private computer, and they made the architectural choice to hold and retain access to that data. They absolutely should be treating it like something that belongs to the user. If Apple can do it, if Google can do it, then Microsoft can do it. Microsoft is the only company that's not doing this.
Steve Gibson [01:12:30]:
He added, it's a little weird. The lesson here is that if you, meaning Microsoft, have access to your users' keys, eventually law enforcement is going to come for them. Jennifer Granick, the ACLU's surveillance and cybersecurity counsel, raised concerns about the breadth of information the FBI could obtain if agents were to gain access to data protected by BitLocker. And that's really a good point too. It's like, you know, they're not getting selective access to just what they want; they've got your whole drive. She said, quote, the keys give the government access to information well beyond the time frame of most crimes, everything on the hard drive. Then we have to trust that the agents only look for information relevant to the authorized investigation and do not take advantage of the windfall to rummage around. In the Guam case, the court dockets show the warrant was successfully executed.
Steve Gibson [01:13:35]:
The lawyer for defendant Chesa Tanoro, who pleaded not guilty, said the information provided to her by the case's prosecutors included information from her client's computer, and that it included references to BitLocker keys that Microsoft had provided the FBI. The case is ongoing. Both Matthew Green and Jennifer Granick said Microsoft could have users install a key on a piece of hardware, like a thumb drive, which would act as a backup or recovery key. Microsoft does allow for that option, but it's not the default setting for BitLocker on Windows PCs. Without the encryption keys from Microsoft, the FBI would have struggled to get any useful data from the computers. BitLocker's encryption algorithms have proven impenetrable to prior law enforcement attempts to break in, according to a Forbes review of historical cases. In early 2025, a forensic expert with ICE's Homeland Security Investigations unit wrote in a court document that his agency did, quote, not possess the forensic tools required to break into devices encrypted with Microsoft BitLocker or any other style of encryption, unquote. In one previous case, federal investigators obtained keys by discovering that a subject had stored them on unencrypted drives.
Steve Gibson [01:15:06]:
Now that the FBI and other agencies know Microsoft will comply with warrants similar to the Guam case, they'll likely make more demands for encryption keys, Green said. My experience is, once the US government gets used to having a capability, it's very hard to get rid of it. Okay, so the first takeaway from this is obvious, and it doesn't involve any sort of moral or ethical judgment either way. It's just the facts. Because encryption is absolute and unforgiving, it can be super useful to have a backup plan of some kind, right? You know, someone who will never forget, someone to hold on to one's emergency encryption backup keys. There's no doubt about that. Only if you are willing to take sober and full responsibility for never forgetting how to log in would it make sense to have no backup whatsoever anywhere. That said, one option is to allow Microsoft to be the entity to hold on to your keys in the event of an emergency.
Steve Gibson [01:16:32]:
They're certainly the default, easy choice. The only downside to that, and again, without any judgment here, is that they will also turn your keys over to law enforcement after a judge approves their request. And you know, that may not be a bad thing if you're certain that this would never become an issue for you. But if that's a concern, it's a good thing that you're now aware that Microsoft cannot be a trusted guardian of your privacy. They will capitulate. And now all global law enforcement and intelligence services know that. So it might be better to entrust those secrets to a close friend whom law enforcement would never think to ask.
Steve Gibson [01:17:25]:
But as I said, that's the first takeaway. There's another, and it's much more subtle, but I very much want to point it out to our listeners. This Forbes article reminded me of that previous instance 13 years ago, back in 2013, when a Microsoft engineer claimed he'd been approached by government officials to install backdoors in BitLocker. My recollection was that it was more than a claim, and that it happened more than once. For one thing, there were multiple people involved, so it wasn't just hearsay from one guy. So the FBI asked. I don't have a problem with them asking. As the saying goes, well, you can ask. Okay, so to set this up for our listeners:
Steve Gibson [01:18:20]:
I want to share the first portion of Mashable's coverage of this incident from 2013. Mashable's coverage was introduced with the leading-question headline, Did the FBI lean on Microsoft for access to its encryption software? They wrote, the NSA is not the only government agency asking tech companies for help in cracking technology to access user data. Sources say the FBI has a history of requesting digital backdoors, which are generally understood as a hidden vulnerability in a program that would, in theory, let the agency peek into suspects' computers and communications. In 2005, when Microsoft was about to launch BitLocker, its Windows software to encrypt and lock hard drives, the company approached the NSA, its British counterpart GCHQ, and the FBI, among other government and law enforcement agencies. That is to say, Microsoft approached them, saying, we're about to add encryption to Windows. They wrote, Microsoft's goal was twofold: get feedback from the agencies and sell BitLocker to them. However, the FBI, writes Mashable, concerned about its ability to fight crime, specifically child pornography, apparently repeatedly asked Microsoft to put a backdoor into the software. And then they tell their less technical audience, a backdoor or trapdoor is a secret vulnerability that can be exploited to break or circumvent supposedly secure systems.
Steve Gibson [01:20:07]:
For its part, the FBI categorically denies asking for such access, telling Mashable that the Bureau does not ask for backdoors and that it only serves companies lawful court orders when it needs to access users' data. And legally it would still need a warrant even if a backdoor did exist. Peter Biddle, the head of the engineering team working on BitLocker at the time, revealed to Mashable the exchanges he had with various government agencies. Biddle told Mashable, quote, I was asked multiple times, confirming that a government agency had inquired about backdoors, though he couldn't remember which one. He said, quote, and at least once the question was more like, if we were to officially ask you, what would you say? According to two former Microsoft engineers, FBI officials complained that BitLocker would make their jobs harder. An FBI agent reportedly said, quote, it's going to be really, really hard for us to do our jobs if every single person could have this technology. How do we break it? The story of how the FBI reportedly asked Microsoft to backdoor BitLocker to avoid, quote, going dark,
Steve Gibson [01:21:37]:
the FBI's term for a potential scenario where encryption makes it impossible to intercept criminals' communications or break into a suspect's computer, provides a snapshot into how US government agencies try to persuade tech companies to weaken their security products, or even poke a hidden hole to make them wiretap-friendly. Last week, and this was written back in 2013, so 13 years ago, the New York Times, ProPublica and the Guardian, Mashable writes, revealed that one of the ways the NSA circumvents Internet cryptography is to ask companies to put backdoors into their products. The FBI is reportedly doing the same in the name of fighting crime, and its persuasion techniques appear to be very similar. According to reports, both the NSA and the FBI are subtle in their requests, which are never formal, never written, but are usually uttered during casual conversations, almost jokingly.
Leo Laporte [01:22:48]:
It's called plausible deniability. Right.
Steve Gibson [01:22:50]:
Exactly. Nico Sell, the founder of the privacy-enhancing app Wickr, was approached by an FBI agent after speaking at the RSA security conference at the end of February, again 13 years ago, as first reported by CNET. According to Nico, the agent asked, so are you going to give us a backdoor? She declined, and when pressed, asked the agent whether he had a written request and who his boss was. The agent backed down.
Leo Laporte [01:23:28]:
Yeah.
Steve Gibson [01:23:29]:
Cryptography and security expert Bruce Schneier said he's heard of these same types of tactics from others the government has approached seeking technological backdoors. Bruce told Mashable, it's never an explicit ask. It's an informal, oblique mention in a joking conversation where you're felt out as to whether you might be amenable to it. If you're amenable, then the conversation continues. If you're not, well, it's like it never happened. Despite the requests being informal, Schneier and other surveillance experts are concerned.
Steve Gibson [01:24:11]:
A request is a request, even if it's informal, Bruce said. In the case of Microsoft, according to the engineers, the requests came in the course of multiple meetings with the FBI. These kinds of meetings were standard at Microsoft, according to both Biddle and another former Microsoft engineer who worked on the BitLocker team, who wanted to remain anonymous due to the sensitivity of the matter. Biddle said, quote, I had more meetings with more agencies than I can remember or count. He said the meetings were so frequent and with so many different agencies, he doesn't specifically remember if it was the FBI that asked for a backdoor. But the anonymous Microsoft engineer we, meaning Mashable, spoke with confirmed that it was in fact the FBI. During a meeting, according to Biddle and the Microsoft engineer, who were both present at the meeting, an agent complained about BitLocker and expressed his frustration, saying, quote, you guys are giving us the shaft, unquote.
Steve Gibson [01:25:22]:
Though Biddle insisted he didn't remember which agency he spoke with, he said he did recall this particular exchange, and Biddle wasn't intimidated. He replied, no, we're not giving you the shaft. We're merely commoditizing the shaft. Biddle, a believer in what he refers to as neutral technology, never agreed to put a backdoor in BitLocker, and other Microsoft engineers, when rumors spread that there was one, later denied that was ever a possibility. Niels Ferguson, a Microsoft cryptographer and principal software development engineer, wrote, quote, the suggestion is that we are working with governments to create a backdoor so that they can always access BitLocker-encrypted data. That will happen over my dead body, unquote. I mean, these guys were serious.
Steve Gibson [01:26:21]:
And if you take a look, Biddle has a Wikipedia entry; you get a sense for him. You know, those were the good old days of Microsoft. Mashable writes, for Biddle, this was proof of a fundamental paradox facing government agencies and security software: how do you get secure software you can rely on while also retaining the ability to break into it if people use it to commit or cover up their crimes? Biddle said, quote, I realized that we were in this really interesting spot, sort of stuck in the middle between wanting to do a much better job at protecting our users' information and at the same time realizing that this was starting to make government employees unhappy. Despite Microsoft's refusals to backdoor its product, the engineers kept working with the FBI to teach them about BitLocker and how it was possible to retrieve data in case an agent needed to get into an encrypted hard drive. At one point, the BitLocker team suggested the agency target the backup keys that the software creates. In some instances, BitLocker prompts users to print out a piece of paper with the key needed to unlock the hard drive, to prevent loss of data if the user forgets his or her key.
Steve Gibson [01:27:51]:
The anonymous Microsoft engineer said, quote, as soon as we said that, the mood in the room changed dramatically. They got really excited, unquote. In that instance, law enforcement agents wouldn't need a backdoor at all, as the engineer suggested. All they would need was a warrant to access a suspect's documents and retrieve the document that would unlock his or her hard drive. Okay, and this finally brings me to the point I wanted to make. Mashable quotes Christopher Soghoian, a privacy and security expert at the ACLU: whether or not BitLocker has a backdoor isn't even that relevant, again, 13 years ago, since it's a feature that very few Windows users employ or even have access to. It's not included in most Windows versions and it's not a default setting, something that Soghoian said is not an accident. He told Mashable, quote, the impact is minimal because so few people use BitLocker, but it does speak to a friendly relationship between the companies and the government.
Steve Gibson [01:29:17]:
He said, if you want to keep your data out of the US government's hands, Microsoft is not your friend. Microsoft is unwilling to really make the government go dark. They're never really willing to protect their customers from the government. They're willing to take some steps, but they don't want to go too far, unquote.
Leo Laporte [01:29:39]:
This is from an era when we were still using TrueCrypt. Yes, that was the choice for people who really cared about privacy.
Steve Gibson [01:29:46]:
Right. So what I wanted to share about that last bit was, I think it's wrong. Okay, so first of all, this is a reminder about the way the world has changed during the intervening 13 years. At the time Christopher Soghoian was quoted, he correctly noted that BitLocker was a non-issue since it was so infrequently used. Right. The FBI probably wouldn't actually encounter it in the field. I doubt he would feel the same way about BitLocker today. It will not be enabled on machines that have been upgraded to Windows 11 if earlier Windows was not using it.
Steve Gibson [01:30:30]:
But most modern PCs that ship with Windows 11 preinstalled, even the Home edition with its simpler device encryption, which is BitLocker without as much UI and options, will have their hard drives encrypted out of the box if they're using a Microsoft account. To me, this seems entirely pro-consumer. Microsoft doesn't have to push BitLocker encryption. It certainly causes some pain and annoyance for both them and their Windows users, but it's extremely good for the privacy of their users that someone cannot remove their machine's drive and mount it on another machine to dump its entire contents. Those days are over. Might this mean that the FBI needs to obtain a court order to compel Microsoft to disclose the encryption keys of someone whom they have convinced a judge may have evidence crucial to a crime which they're working to solve? Yes, that might be necessary. But all of that is only necessary because Microsoft is pushing everyone to encrypt their drives in the first place. And there's no way any law enforcement agency anywhere is happy about that.
Steve Gibson [01:32:03]:
If Microsoft were not defaulting to using BitLocker, things would still be the way they were 13 years ago, when Christopher Soghoian said, who cares, no one uses BitLocker anyway. Today, most new systems do, and most Windows users obtain the huge benefit of having their drive's data much more securely protected from non-casual, non-government attack than if it were not encrypted. I don't disagree that Microsoft might be able to do more, and that they may be shortchanging their users' privacy when push comes to shove. If they can provide unlocking keys to their users in an emergency, and that's the point, right, of their escrowing, then they can also provide them to law enforcement when under order to do so by a court. On balance, I would venture that many, many, many more Windows users' data have been saved by this policy than have been compromised by a law enforcement subpoena. I'm sure that Microsoft would always require a court order before disclosing. That's a given.
Steve Gibson [01:33:29]:
It's not as if anyone can just ask for someone else's decryption keys and get them. So the data does have the same protection as does our other personal property and possessions. The protection is not absolute, no. And yes, if users were capable of taking full responsibility for the decryption of their data, that is, for backing up their keys, then it could be absolute. But at least in the United States, the protection is in line with what US citizens enjoy in the other areas of our lives. So anyone who objects to turning their keys over to Microsoft has now been forewarned that their keys may be disclosed to law enforcement upon legal demand. If that's a concern, the Windows Pro, Enterprise, and Education editions allow users to disable Microsoft's default key escrowing by setting two policies.
Steve Gibson [01:34:41]:
There are two policies in the registry: "Do not allow BitLocker recovery information to be stored in Azure AD" and "Do not allow BitLocker recovery information to be stored in Microsoft account." After doing that, their BitLocker recovery keys can be rotated so that Microsoft never obtains the updated keys. BitLocker was designed correctly, the way all modern crypto is, so that there's a full-volume master encryption key which never leaves the device. Then there's another key that encrypts that key, and it's that secondary key which Microsoft gets a copy of. It's also written into the TPM, protected by a PIN, and it's the thing that you're able to back up to your USB drive. It's the master key-encryption key.
Steve Gibson [01:35:46]:
So it's possible, with two simple commands in Windows, to rotate that key, basically to discard and replace, while BitLocker is unlocked, so that the key that encrypts the full-volume master key is replaced after you've disabled sharing it with Microsoft. And then, if Microsoft has a key, it's no longer of any value. Remember, though, there's no one to come crying to if you're unable to log into your computer or there's any sort of problem. So I would absolutely print this on paper. You know, a USB thumb drive is not high-fidelity storage. I wouldn't trust anything that I care about to a USB drive. Print it on paper, because it's too important not to. But my feeling is, again, Microsoft doesn't have to be encrypting everyone's drive, right? And if everyone's drive was not encrypted by default, most would not be, and the FBI would have no problem. The fact that they have chosen to switch default encryption on, to me, that's 100% in the service of their customers.
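The design Steve describes, a master key that never leaves the device, wrapped by a second key that can be escrowed and rotated, is standard envelope encryption. Here's a minimal toy sketch of the idea in Python. XOR stands in for BitLocker's real AES, and all the names are illustrative, so this shows only the key relationships, not BitLocker's actual implementation.

```python
import secrets

def xor(data: bytes, key: bytes) -> bytes:
    # Toy cipher: XOR with an equal-length key (stand-in for real AES).
    return bytes(a ^ b for a, b in zip(data, key))

# The full-volume master key never leaves the device.
master_key = secrets.token_bytes(32)

# A key-encryption key wraps the master key. It's this wrapping key,
# not the master key itself, that gets escrowed (e.g., with Microsoft).
kek = secrets.token_bytes(32)
wrapped = xor(master_key, kek)

# Rotation: while the volume is unlocked (master_key in hand), generate
# a new KEK and re-wrap. Only the tiny wrapped blob changes; the volume's
# data never has to be re-encrypted.
new_kek = secrets.token_bytes(32)
new_wrapped = xor(master_key, new_kek)

# The new KEK recovers the master key; the old escrowed KEK no longer does.
assert xor(new_wrapped, new_kek) == master_key
assert xor(new_wrapped, kek) != master_key
```

This is why rotating the recovery key after disabling escrow is cheap and effective: whatever copy was escrowed before the rotation unwraps nothing.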
Steve Gibson [01:37:12]:
That's a good thing, and for everybody's sanity, because encryption is absolute. Using an online account, which we all know they don't make easy not to use these days, if you can even find a way around it, they're backing up those keys. And again, they're not doing it to be a friend to the FBI. They're doing it because people are going to, and have, forgotten how to log onto their computer. And if you can't do that, you can't access your hard drive, ever. So again, I think it's exactly the right set of trade-offs, Leo.
Steve Gibson [01:37:48]:
Everybody's drive is encrypted. There is a get out of jail free card stored up in Redmond's servers and you're able to turn that off if you're knowledgeable about how to do so and wish to do so. So what did Alex think?
Leo Laporte [01:38:04]:
I think he essentially agreed with you that it wasn't hair-on-fire. It's very good for people to understand that this can happen so that you can make a choice depending on your tolerance of risk and your threat model.
Steve Gibson [01:38:22]:
Yeah, and some people may just not want Microsoft to have that ability. I get it. Actually, I wouldn't because, I mean, I don't want to have an online account. I don't want Microsoft in my business.
Leo Laporte [01:38:33]:
There's a lot of reasons not to use an MSA account to log into Windows. It's a shame Microsoft makes that harder and harder and harder. I think that's where maybe Microsoft is a little bit at fault. But you're right. I don't think they're doing it to make it easy for law enforcement to get the data. They're doing it because people lose their keys, and they just want to protect them. Yeah. He pointed out, though, that Apple has found a way to do all of this without having access to the keys in a way that they could give to law enforcement.
Leo Laporte [01:39:04]:
Apple by default encrypts the hard drives on your Macintosh with FileVault. They do have a way of saying, I forgot, but they're using some sort of key escrow system so that they don't actually have access to the key. And as you pointed out, Google also does this. In fact, all phones do this without a backdoor. Remember, the FBI tried to get Apple to give up the keys in the San Bernardino case, and Apple said, yeah, no. And that probably hurt their reputation with some people considerably, and definitely buffed it up with some people like me. So I think it's good that we know this.
Leo Laporte [01:39:51]:
It's good that there are alternatives. It is interesting that Microsoft doesn't do what Apple does. They could take that extra key escrow step and make it possible to, you know, do it.
Steve Gibson [01:40:06]:
I'll have to look into that, because I don't understand what it is that they're doing. Maybe it's because you have access to the physical hardware. But I mean, if the user doesn't have to remember anything.
Leo Laporte [01:40:20]:
Well, no, the TPM stores it. I think that when you log in. Yeah, I know when you log in, it unlocks it. So it's not, you know, on my Linux box where I use LUKS, I have to decrypt the drive before I can log in. I mean, that's a two-step process. Apple's not like that. You log in, then there's a fairly long process of decrypting the drive, and then you're in. And I'm not sure.
Steve Gibson [01:40:49]:
So if it's that your device contains the key and they're able to get it from the device, then it is device bound.
Leo Laporte [01:40:56]:
I think that's what it is, like TPM.
Steve Gibson [01:40:58]:
Okay, and that makes sense. But then what happens if the user loses their device? They have a backup, and that's when they want to use the backup to restore it onto a different device. But that new device won't have that secret. So again, I've not looked at this for so long, and things change, but it would be interesting to know.
Leo Laporte [01:41:20]:
It generates a recovery key, which you can print out and use.
Steve Gibson [01:41:27]:
Okay.
Leo Laporte [01:41:27]:
And that Apple doesn't have access to.
Steve Gibson [01:41:29]:
In that case, it's exactly like what we would do with BitLocker, except that Microsoft defaults to also having a copy of that recovery key. Apple does not default to having a copy of the recovery key.
Leo Laporte [01:41:43]:
Yeah, you can allow my iCloud account to unlock my disk, which is the BitLocker solution. But Apple's explicit, by the way, when you set this up, about how to do this.
Steve Gibson [01:41:55]:
That's why I've chosen them to be the backbone for my stuff: I trust them more than anybody else.
Leo Laporte [01:42:04]:
I mean, look, on mobile devices it's all encrypted automatically. And we talked about this the other day when I was setting up this Linux laptop. I just think it's better to encrypt it, because that way you don't have to worry about erasing it, and the issues of, is some of this going to be accidentally stored on an SSD, and all of that stuff. It's just gobbledygook without the password, without the master key. Yeah, I like that. I think that's the right way to go. So I'm comfortable with that. And yeah, I have to enter my password twice on my Linux box.
Leo Laporte [01:42:39]:
Well, I enter it once and I use a fingerprint the second time. I don't find that to be too onerous.
Steve Gibson [01:42:44]:
No, I think biometrics is the way things are going. We're having to authenticate more and more often to say that we're sitting in front of our computer. And so just having a fingerprint reader in a keyboard or a mouse makes a lot of sense.
Leo Laporte [01:42:58]:
It's a great solution. Yeah.
Steve Gibson [01:43:00]:
So we heard from me and we heard from Alex Stamos. We're next going to hear from Alex Niehaus.
Leo Laporte [01:43:05]:
After a break, our longtime friend, in fact the very first advertiser, Astaro. By the way, Chris Soghoian, for those who are wondering, is Sal Soghoian's brother. Sal, of course, very well known Apple guy for a long time. His brother's a very well known security guy. So that is the same Soghoian. That's the Soghoian family. You're watching Security Now with Mr. Steve Gibson, and I'm Leo Laporte. We're glad you're here.
Leo Laporte [01:43:32]:
We encourage you to join the club, support the show if you believe in what we're doing. Yeah, the advertisers support it, but that's not enough to keep us on the air. We need your support too. A quarter of our operating expenses are paid for by Club Twit members. It's going to be more this year as advertising support dwindles a little bit. That's partly our fault. Lisa did all the ad sales for a long time. My wife and our CEO, she decided to, I guess semi retire.
Leo Laporte [01:44:00]:
She didn't want to do that anymore. So we've outsourced the ad sales, but it's taken a while to get up to speed, and right now we're a little bit underwater. So this becomes even more important. If you want to support the programming you hear on TWiT, join the club. It's very simple: twit.tv/clubtwit. Lots of benefits, including ad-free versions of all the shows, access to the Club TWiT Discord, all the special programming we do just for club members. But the main reason to do it: to keep these shows going. twit.tv/clubtwit.
Leo Laporte [01:44:32]:
Now on we go with more Security Now.
Steve Gibson [01:44:35]:
Our friend Alex shares, I think, some tremendous business-centric perspective on what the application of AI will mean to the enterprise and what pitfalls it's more than likely to offer. So he writes: Gents, Security Now has spent a lot of time over the last couple of months on AI and how its first, most natural application is software development. After a recent experience packaging up a hobby script as a public open source PowerShell module, I could not agree more that the development toolset is rapidly changing. But there's always a but. I worry about mechanically produced code, particularly in enterprise systems that deal with financial and personal information at scale. Think a brokerage or a multi-state healthcare system. If we look at the historical waves of management thinking about the development costs of crucial enterprise systems, we see an endless push to reduce those costs that's inevitably led to declines in quality, reliability and, most of all, security.
Steve Gibson [01:45:52]:
In the early days of enterprise development, engineers worked in house. I started as a developer at Mass General Hospital in the 1980s, when outsourcing was not yet in the lexicon. MGH developed MUMPS in house, which today is the core database environment of the largest electronic medical records vendor in the USA. Outsourcing became all the cost-cutting rage among enterprises, followed on quickly by the offshoring of enterprise development. Executives, based on then-current business consulting doctrine, decided that IT wasn't their core business. They thought their businesses just made widgets or sold products or provided a service. IT was orthogonal to their core function. History shows what a strategic mistake that thinking was.
Steve Gibson [01:46:51]:
It led directly to the situation we find ourselves in today: a race to the bottom to procure development resources that cost a fraction of in-house resources. Security Now regularly documents the results: breaches, botnets, system failures, and worse. Over time, enterprises discovered their IT systems are the business. You can't make a widget, sell it, or service it without an enterprise system. Unfortunately, many businesses continue to dismantle their core capabilities in a massive, mistaken shoemaker's-children syndrome. AI's rapid development means yet another giant epoch in computing technology is just starting. We're about to live through another turn of the wheel of technological progress. Having used AI in a simple vibe coding project, it's clear to me that AI cannot replace developers. But that's how it's being pitched to the same enterprises that previously committed the not-our-core-function mistake.
Steve Gibson [01:48:04]:
Amazon and Microsoft are both insisting that replacement is the primary benefit. Enterprises, now completely beholden to these hyperscalers, take their cues from them. I use AWS and Azure cloud products every day in real client situations, and I can tell you quality isn't their North Star. Anyone who's ever struggled with Microsoft's automated API documentation can tell you it isn't worth the electrons used to display it. In other words, we should not repeat the mistaken business assumptions that drove the outsourcing debacle. Instead, we can upskill developers while still retaining the focus on human development. Imagine the benefits if we used AI not to replace those $25-per-hour outsourced developers who produce the worst code we've ever seen, but instead trained them to use AI to write tests, to check the CVSS level of every included npm or PyPI package against the CVE database before recompiling, or to fuzz their functions.
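Alex's suggestion of checking every npm or PyPI dependency against a vulnerability database can be sketched against OSV.dev's public query API, which aggregates advisories for those ecosystems. The endpoint and JSON shape below follow OSV's documented query format, but treat this as an illustrative sketch rather than a vetted CI pipeline; `osv_query` is a made-up helper name.

```python
import json

# OSV.dev's query endpoint (a public vulnerability database covering
# npm, PyPI, Go, crates.io, and others).
OSV_URL = "https://api.osv.dev/v1/query"

def osv_query(name: str, version: str, ecosystem: str = "PyPI") -> str:
    # Build the JSON body OSV expects: a package coordinate plus a version.
    body = {
        "package": {"name": name, "ecosystem": ecosystem},
        "version": version,
    }
    return json.dumps(body)

# In a CI step you'd POST this payload to OSV_URL for every pinned
# dependency and fail the build if the response's "vulns" list is non-empty.
payload = osv_query("requests", "2.19.0")
```

The point of wiring this into the build, per Alex's argument, is that the check runs mechanically on every recompile rather than relying on a developer remembering to audit dependencies.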
Steve Gibson [01:49:26]:
That last is the hallmark of outsourced code vulnerabilities: it works, but only on the happy path. We're still in the early hype-cycle days of AI, but the hype sometimes becomes at least a partial reality. Thinking about AI only as a replacement for developers makes the same mistake we made with outsourcing, only magnified many times. You both know how much I love the show and your tenacity in producing compelling podcasts week after week, decade after decade. Thanks for that. Security Now's number one fan, Alex. So I wanted to share Alex's perspective because I think it is super valuable and exactly correct. It also makes such complete sense.
Steve Gibson [01:50:15]:
My own perspective tends to get wrapped up in the technology. So, you know, the effect AI will be having on internal enterprise development isn't the sort of thing I tend to focus on. So thank you, Alex. I imagine this may give many of our enterprise-centric listeners something to think about and perhaps discuss with their peers and managers. And Leo, you can see his point, right?
Leo Laporte [01:50:37]:
Oh, he's absolutely right. Yeah.
Steve Gibson [01:50:39]:
It's absolutely being sold as think of all the people you can fire.
Leo Laporte [01:50:43]:
Right. Well, we talked about this last week, and I've been thinking about it since you posed the question. The skills that you have as a coder are not thrown away when you're doing vibe coding. You very much need similar skills. It's almost as if you're a team leader working with junior coders and instructing them. The sad thing, and I think the real problem we're going to have, is that a lot of companies are using AI coding tools to replace the junior programmers, which means there's no longer a pipeline for people to become senior programmers, because they don't get to do it. So I hope companies will continue to hire entry-level coders to work with these tools. What's interesting is you're going to get a lot more applicants who are very skilled with these tools.
Leo Laporte [01:51:33]:
That's what kids are doing in college right now in computer science classes. They're learning how to use these tools. And maybe, let's face it, maybe that's the future of coding. I wouldn't be surprised. It's just another kind of high-level language.
Steve Gibson [01:51:48]:
I do think it's the future. I agree with you completely. I think it's very clear that AI has such a profound ability to code that learning how to get AI to give you the answer that you want is part of the.
Leo Laporte [01:52:06]:
Yeah, part of the learning. Yeah. But Alex is right. We can't treat it as a panacea. We have to think about the consequences, and how we're going to keep that pipeline going, and how we're going to go forward instead of throwing everything out and starting over. Because that's not going to work either.
Steve Gibson [01:52:23]:
Well, I think we're going to go through a period of pain, Leo.
Leo Laporte [01:52:26]:
Yeah, we know that.
Steve Gibson [01:52:26]:
While the C-suite executives realize, whoops, we did throw out the baby.
Leo Laporte [01:52:32]:
Whoops.
Steve Gibson [01:52:34]:
Gavin has a confession to make. Another listener of ours. He said, hi Steve, and he says right up front, I have a confession to make. I have knowingly opened up public database access in production systems. He said, here's how this came about. A few years ago, I became the sole software developer at a small UK ISP following the departure of several senior team members. The company had a plethora of legacy systems scattered among various cloud service providers. Following COVID-19, sales plummeted.
Steve Gibson [01:53:11]:
With many customers shutting up shop and very few businesses investing in connectivity products, we had to start cutting costs or risk going under ourselves. One of our biggest costs was managed database instances. My predecessors had spun up individual database servers, MySQL and PostgreSQL, for each of our many applications across different clouds, AWS, DigitalOcean, etc., and multiple different accounts within each. In terms of application isolation, it was a great approach, but it was costing a small fortune. My task was to consolidate as many of these databases as possible, which would bring our costs down quickly, amounting to thousands of pounds per month. There were three possible ways of accomplishing this.
Steve Gibson [01:54:08]:
After first migrating all application databases to a central instance: one, move all of the applications from the various cloud accounts into a single account and VPC, a virtual private cloud, so they could all access the new database instances privately; two, give each of our applications static IPs and set up security rules in front of the new database instances to limit their access; or three, open up full public access and make the passwords as strong as possible. Now, remember, Gavin is a listener, so he knows what he's saying. He wrote, the problem with moving all of our applications is that it would take a very long time, and many of them relied on cloud-provider-specific utilities and network configurations for which we would need to find alternatives and rewrite large swaths of legacy code. In other words, there was a lot of lock-in, and actually moving them would have been a huge burden. As for the second option, he says, the problem with the static IP solution is that static IPs became quite expensive, and some of our platforms, DigitalOcean Apps, for example, at the time didn't offer them at all. So static IPs and firewall rules were out. So he writes, reluctantly, the third option was chosen, and management were happy to take on the additional risk.
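For what it's worth, option three's "make the passwords as strong as possible" is at least cheap to do well. Here's a minimal Python sketch using the standard library's secrets module; the function name and the 32-character length are illustrative choices, not anything Gavin described:

```python
import secrets
import string

def strong_db_password(length: int = 32) -> str:
    """Generate a high-entropy random password from letters and digits.

    32 characters drawn from a 62-symbol alphabet gives roughly
    32 * log2(62), about 190 bits of entropy, far beyond brute-force reach.
    """
    alphabet = string.ascii_letters + string.digits
    return "".join(secrets.choice(alphabet) for _ in range(length))

print(strong_db_password())
```

Of course, a strong password only mitigates guessing; it does nothing about unpatched database server vulnerabilities, which is part of why public exposure remains the riskiest of the three options.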
Steve Gibson [01:55:52]:
Right. Right up until there's a major breach, they're happy, he says. But he said, management were happy to take on the additional risk, which I explained to them, especially in light of the immediate expected cost savings, he says. And so for about four years we were running with our main databases publicly exposed to the Internet. But now, today, after a lot of work, all of our apps are in private subnets and linked VPCs, and thankfully our databases are no longer exposed. I know that my company was not alone in doing this sort of thing, having spoken with other devs in the industry. So here it is, a real-world example of how this can happen, and not through negligence, rather through unfortunate resource pressures.
Steve Gibson [01:56:46]:
Luckily, we were never compromised, to our knowledge. Good for you, Gavin, for acknowledging that. And we just about managed to bounce back as a business, and I'm making sure this sort of thing won't happen again. All the best, Gavin, listening since 2018. So, first of all, Gavin, confession is always good for the soul. Second, I have no problem whatsoever with what you needed to do, because none of it was done blindly or without thought and a clear understanding and balancing of the costs and the potential consequences. So I would judge that to be, while of course not maximally safe, at least entirely responsible. You were not irresponsible, and given the constraints you were operating under, the interim solution you adopted was the best you could achieve.
Steve Gibson [01:57:41]:
No one should fault you for that. John David Hickin, whose subject was ISPs selling your DNS data, wrote: Can't you just set up the ISP's modem router as an edge router, he says, turning off Wi-Fi as well, and connect another router or more behind that? He says, an old solution of yours repurposed. Cheers, John. Okay, so I received other similar questions, and he's talking about ISP spying. So I wanted to take a minute to examine the ISP's advantage. What we need to consider is that our ISPs, minus Cox Cable, know exactly who we are by household name, address, and payment information.
Steve Gibson [01:58:37]:
And they're in the very special and privacy-sensitive position of having direct access to our individual Internet traffic. We've invested endless podcasts examining cookies and fingerprinting and tracking beacons and all manner of privacy-breaching and privacy-protecting solutions and technologies. And there among it all sits our ISP, through which every bit, byte, kilobyte, megabyte, and gigabyte of our traffic flows. No one else on the entire planet enjoys such direct access to exactly what those in our household are doing from moment to moment. Now, before the era of Let's Encrypt and their great encrypt-the-Internet TLS revolution, our ISPs were often privy to the detailed content of everything we did. In retrospect, it was quite bracing. Today, with everything encrypted, ISPs are unable to see into our connections, but they can still see where we're connecting. And if we're not encrypting our DNS, they can also track every domain name anyone in our household, and any of our home's IoT devices, looks up. Tracking the remote IP we're connecting to is much less useful today than it was years ago, due to the massively widespread use of multi-domain hosting.
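To make that DNS exposure concrete: pulling the queried hostname out of an unencrypted DNS packet takes only a few lines, which is all a passive observer on the wire needs. This Python sketch hand-builds a query for example.com and parses the name back out; it ignores name compression and error handling, so it's purely an illustration:

```python
def qname_from_query(packet: bytes) -> str:
    """Extract the queried domain name from a raw DNS query packet."""
    labels = []
    i = 12                        # skip the fixed 12-byte DNS header
    while packet[i] != 0:         # the name ends with a zero-length octet
        n = packet[i]             # each label is length-prefixed
        labels.append(packet[i + 1:i + 1 + n].decode("ascii"))
        i += 1 + n
    return ".".join(labels)

# A hand-built query: header (ID, flags=RD, one question), then the question
header = b"\x12\x34\x01\x00\x00\x01\x00\x00\x00\x00\x00\x00"
question = b"\x07example\x03com\x00" + b"\x00\x01\x00\x01"  # QTYPE=A, QCLASS=IN
packet = header + question

print(qname_from_query(packet))   # prints "example.com"
```

That's the entire "decryption" effort required when DNS travels in the clear over UDP port 53.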
Steve Gibson [02:00:24]:
Cloudflare has a large pool of IP addresses which provide services to their large pool of customer websites. Among those, there's a many-to-many relationship. So having an ISP only able to see a customer's traffic destination is far less useful today than it was 20 years ago. However, there's still a problem, and that's that SNI, the Server Name Indication that's carried in the TLS ClientHello handshake, is only encrypted when both ends support and negotiate TLS version 1.3. TLS 1.3 introduced ECH, Encrypted Client Hello, for the express purpose of preventing anyone who might be examining Internet traffic, like an ISP, from picking up the destination domain of any new connection. At this point in time, at the start of 2026, more than half of all Internet traffic is now using TLS version 1.3, but at least a third is still not. Yet the privacy leakage that has continued to occur during the TLS handshake is slowly draining away over time, and eventually we'll all get to 1.3. After destination IP and TLS handshake leakage, DNS is the remaining potential privacy leak.
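Here's roughly what that SNI leak looks like in practice. This sketch, hand-rolled Python rather than a real TLS stack, builds a minimal, non-functional ClientHello carrying an SNI extension and then extracts the server name from it the way any passive observer could whenever ECH isn't in use:

```python
import struct

def build_client_hello(hostname: str) -> bytes:
    """Build a minimal (non-functional) TLS ClientHello with an SNI extension."""
    name = hostname.encode("ascii")
    sni_entry = b"\x00" + struct.pack("!H", len(name)) + name  # type=host_name
    sni_list = struct.pack("!H", len(sni_entry)) + sni_entry
    sni_ext = struct.pack("!HH", 0x0000, len(sni_list)) + sni_list
    exts = struct.pack("!H", len(sni_ext)) + sni_ext
    body = (
        b"\x03\x03"                # legacy_version (TLS 1.2 on the wire)
        + bytes(32)                # client random, zeroed for the sketch
        + b"\x00"                  # empty session_id
        + b"\x00\x02\x13\x01"      # one cipher suite: TLS_AES_128_GCM_SHA256
        + b"\x01\x00"              # one compression method: null
        + exts
    )
    # Handshake header: type 0x01 (ClientHello) plus 24-bit body length
    return b"\x01" + struct.pack("!I", len(body))[1:] + body

def sni_from_client_hello(hello: bytes) -> str:
    """Walk a ClientHello and return the hostname in its SNI extension."""
    i = 4 + 2 + 32                                     # header, version, random
    i += 1 + hello[i]                                  # session_id
    i += 2 + struct.unpack("!H", hello[i:i + 2])[0]    # cipher_suites
    i += 1 + hello[i]                                  # compression methods
    i += 2                                             # extensions total length
    while i < len(hello):
        ext_type, ext_len = struct.unpack("!HH", hello[i:i + 4])
        if ext_type == 0x0000:                         # server_name extension
            name_len = struct.unpack("!H", hello[i + 7:i + 9])[0]
            return hello[i + 9:i + 9 + name_len].decode("ascii")
        i += 4 + ext_len
    return ""

print(sni_from_client_hello(build_client_hello("www.example.com")))
```

The point is not the plumbing; it's that the hostname sits in the handshake as plain ASCII, before any encryption keys have been negotiated, which is exactly what ECH was created to hide.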
Steve Gibson [02:02:04]:
Firefox users are now being automatically protected thanks to an agreement between Mozilla and Cloudflare to use Firefox's built-in DNS-over-HTTPS (DoH) with Cloudflare's DNS resolvers by default. So Firefox users get default protection. But the Chromium browser family by default will upgrade to DoH if and only if the DNS provider the user has manually configured for unencrypted DNS also supports DoH. This is called opportunistic DoH. But since most users have not manually reconfigured their DNS and just run with whatever DNS their ISP has provided, that will be unencrypted DNS over UDP. So only people using Firefox today will have their DNS lookups masked from DNS snooping by default. And I don't know why, but that's the way it is. One increasingly popular solution is to use or obtain a home router that can perform its own remote DoH lookups and configure it to use one of the major free public DNS solutions offered by Cloudflare, Google, OpenDNS, NextDNS, or whatever service you choose.
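For the curious, DoH, specified in RFC 8484, is just the ordinary DNS wire format carried over HTTPS; a GET request carries the query base64url-encoded, with padding stripped, in a "dns" parameter. This Python sketch builds such a URL for Cloudflare's public resolver without actually sending anything; the helper name is mine:

```python
import base64
import struct

def doh_get_url(hostname: str,
                resolver: str = "https://cloudflare-dns.com/dns-query") -> str:
    """Encode a DNS A-record query as an RFC 8484 DoH GET URL."""
    # Header: ID=0 (RFC 8484 recommends 0 for HTTP cacheability), RD flag set,
    # one question, no answer/authority/additional records.
    header = struct.pack("!HHHHHH", 0, 0x0100, 1, 0, 0, 0)
    qname = b"".join(bytes([len(label)]) + label.encode("ascii")
                     for label in hostname.split("."))
    question = qname + b"\x00" + struct.pack("!HH", 1, 1)  # QTYPE=A, QCLASS=IN
    wire = header + question
    b64 = base64.urlsafe_b64encode(wire).rstrip(b"=").decode("ascii")
    return f"{resolver}?dns={b64}"

print(doh_get_url("example.com"))
```

A router doing DoH performs exactly this kind of HTTPS exchange on behalf of every device on the LAN, which is why the ISP then sees only an encrypted connection to the resolver, not the names being looked up.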
Steve Gibson [02:03:38]:
After that, all of the internal network's DHCP-configured devices, meaning typically all of your computers and mobile devices and IoT devices, which would all be using DHCP to get their LAN IPs, will be using standard DNS to the router. But then all queries for domain names will be encrypted and handled by the router. The ISP sees nothing. So with everything using TLS now, and TLS moving to version 1.3 with Encrypted Client Hello to mask the target domain, and your browser or router using DoH, the only remaining privacy concern is an ISP able to observe the destination of your traffic. I don't see that ever going away. But as I noted, thanks to the increasing use of CDNs and cloud hosting, which aggregate many domains among IP addresses, it's far less certain than it once was that an ISP can know exactly where you're going.
Steve Gibson [02:04:48]:
But for anyone desiring absolute privacy from ISP snooping, the final step would be to use some form of traffic tunneling. That means Tor or a VPN. Using one of those means that the ISP is able to determine nothing beyond the fact that you're using Tor or a VPN, which they can see, but nothing else. So, circling back around to John's question, it should be clear that as long as an ISP is the carrier of a subscriber's traffic, nothing else the user might do inside their network, like a router within a router, would change the nature of the traffic which emerges from their LAN to pass under the ISP's watchful and perhaps curious traffic-logging and monitoring eye. Whether they're doing that, we don't know one way or the other in any specific case. And finally, Troy Shahumian. Wow. I hope I said that right.
Steve Gibson [02:05:57]:
Troy, he said: Steve, OMG, I just realized how many podcast files I have that probably need rewinding. Can you recommend a program to do this? Okay, so Troy, first of all, I feel your pain, and I can only imagine the size of the podcast rewinding backlog you might be facing. Now, I know that many of our listeners also listen to other podcasts, so your burden might be even larger. But just taking Security Now, since this is podcast 1062, if you've been listening from the start, or if you started late, then went back to catch up from the beginning, and if, God help you, you have not previously rewound any of those before now, well, it's not going to be pretty.
Steve Gibson [02:07:03]:
I don't envy the corner that you've painted yourself into. Okay, now what you really need is some sort of, and this is what you're asking for, right, some sort of mass, gang, parallel podcast rewinder. And I was thinking about this. Once I finish reworking GRC's e-commerce facility, and Lorrie and I get moved to our new place, that'll be a project in the spring, I plan to readdress GRC's ValiDrive freeware. And although I don't have it on my roadmap, I may see whether I might be able to sneak in some sort of mass podcast rewinding facility, sort of as a side feature. So you could copy the podcasts onto a thumb drive or, you know, USB-attachable storage, and then I would have ValiDrive just rewind them all for you.
Leo Laporte [02:08:02]:
So I have already started the vibe coding to help me write a podcast rewinding program. Claude says, I'll be happy to help you build a podcast rewinding program. I need to understand more of what you're looking for. Do you want a command line, a desktop GUI, a web app, a mobile app?
Steve Gibson [02:08:22]:
Can we get all the above, Leo?
Leo Laporte [02:08:24]:
Because we could do it in Python.
Steve Gibson [02:08:28]:
This is why we have AI and data centers, is that so we can rewind people's podcasts? It's an unappreciated problem.
Leo Laporte [02:08:37]:
Do we want basic playback controls, which would include rewind, variable speed playback, podcast feed management, progress tracking? Yes. Let's do it all.
Steve Gibson [02:08:48]:
Okay. Well, we definitely need progress tracking, because with 1062 podcasts to be rewound, after you finish listening you're going to have to see how far along you've gotten. It's not clear to me, well, we don't really know yet, do we, how quickly a podcast can be rewound.
Leo Laporte [02:09:11]:
That's a good question. And that's probably why I shouldn't have chosen Python. Maybe I should have written this in C, something a little faster.
Steve Gibson [02:09:17]:
You've got to use a compiled language, yeah. You know, people, people did buy...
Leo Laporte [02:09:28]:
Whoa, I'm sorry. I clicked a button and it erased us both. Here we go. People didn't see what you were saying.
Steve Gibson [02:09:35]:
People did buy, like, a little standalone PC in order to run SpinRite, right? Oh, if it turns out that rewinding podcasts does take some time per podcast, then maybe it would make sense to get a little, you know, auxiliary PC, so you're not locked out of getting any work done while your podcasts are being rewound, especially if you've got a large backlog.
Leo Laporte [02:10:00]:
We need asynchronous podcast rewinding for sure.
Steve Gibson [02:10:03]:
Yeah, definitely. Parallel multitasking.
Leo Laporte [02:10:06]:
Multitask concurrency. Yes. That means I'm going to have to use... probably Rust would be good for...
Steve Gibson [02:10:11]:
For that, fire up a thousand threads and give each one a podcast.
Leo Laporte [02:10:15]:
Yeah. Patrick Delahanty said you wouldn't believe how much time TWiT spends rewinding podcasts because listeners didn't do it themselves.
Steve Gibson [02:10:24]:
So really, it was that sticker. The sticker on Blockbuster tapes said, be kind, rewind.
Leo Laporte [02:10:32]:
It's for us. That's all we're doing.
Steve Gibson [02:10:34]:
Just don't think of yourself when you're done watching a podcast. Just leave it there. Leave it hanging.
Leo Laporte [02:10:40]:
Oh, no, I've made a mistake. I've made a horrible mistake. Claude Code says, user declined to answer questions. So it's given up. It's given up. [Laughs] I'll start over.
Steve Gibson [02:10:55]:
We're going to take our final break, Leo, and then we are going to look at my take on what this means: that AI has now been found generating malware.
Leo Laporte [02:11:10]:
Yeah. Well, we knew it was just a matter of time, but it's worse than we thought. No? Yeah, because it's probably pretty good. It has access to all the malware ever written before, so it can really refine the concept. That's coming up. You're watching Security now with Steve Gibson. We're glad you're here. Keep watching.
Leo Laporte [02:11:32]:
We do the show every Tuesday, so we'll be back in February with the first show of the month next week. AKA next week. That's right, Steve always boils it down to the essentials: AKA next week. We do the show every Tuesday right after MacBreak Weekly. That's about 1:30 Pacific. I'm sorry, I keep breaking my pledge.
Leo Laporte [02:11:55]:
I want to do this in 24-hour time from now on. I'm getting rid of the o'clocks, the AMs, and the PMs. 1330 Pacific. That's 1630 East Coast time. That's 2130 UTC. You can watch us live on YouTube, X, Twitch, Facebook, LinkedIn, and Kick, and of course in the Club TWiT Discord as well. Or download episodes from Steve's site or twit.tv/sn.
Leo Laporte [02:12:21]:
I'll explain all that at the end of the show. Meanwhile, let's get back, he's hydrated, to Security Now.
Steve Gibson [02:12:33]:
Okay, what we've been expecting has happened, and it's every bit as bad as we worried it would be. Last Tuesday, Checkpoint Research published their analysis of a newly discovered malware which they named VoidLink. Their research was titled "VoidLink: Evidence that the era of advanced AI-generated malware has begun." What we all knew was coming has arrived. Checkpoint summarized this news with five key points. They wrote: Checkpoint Research believes a new era of AI-generated malware has begun. VoidLink stands as the first evidently documented case of this era, as a truly advanced malware framework authored almost entirely by artificial intelligence, likely under the direction of a single individual.
Steve Gibson [02:13:36]:
Second, until now, solid evidence of AI-generated malware has primarily been linked to inexperienced threat actors, as in the case of FunkSec, or to malware that largely mirrored the functionality of existing open source malware tools. VoidLink is the first evidence-based case that shows how dangerous AI can become in the hands of more capable malware developers. Third, operational security (OpSec) failures by the VoidLink developer exposed development artifacts. These materials provided clear evidence that the malware was produced predominantly through AI-driven development, reaching a first functional implant in under one week. Fourth, this case highlights the dangers of how AI could enable a single actor to plan, build, and iterate complex systems at a pace that previously required coordinated teams, ultimately normalizing high-complexity attacks that previously would only originate from high-resource threat actors. And finally, from a methodology perspective, the actor used the model beyond coding, adopting an approach called spec-driven development, first tasking it to generate a structured multi-team development plan with sprint schedules, specifications, and deliverables. That documentation was then repurposed as the execution blueprint, which the model likely followed to implement, iterate, and test the malware end to end. Okay, so we've been rejoicing over the surprising jump in Claude Code's ability to operate.
Steve Gibson [02:15:38]:
For example, Claude has made end-to-end creation of applications possible. You know, as they say, everybody's doing it. Unfortunately, we've known that everyone would eventually include malware authors. That's now happened, and it's as bad as we worried it would be. I'm not going to examine this particular instance in depth, because what's the point? There will be another one tomorrow and the day after, or, you know, an hour from now. This is clearly the beginning of an entirely new problem domain. Nevertheless, Checkpoint's introduction is worth sharing. They wrote:
Steve Gibson [02:16:25]:
When we first encountered VoidLink, we were struck by its level of maturity: high functionality, efficient architecture, and a flexible, dynamic operating model, employing technologies like eBPF and LKM rootkits, and dedicated modules for cloud enumeration and post-exploitation in container environments. This unusual piece of malware seemed to be a larger development effort by an advanced actor. As we continued tracing and tracking it, we watched it evolve in near real time, rapidly transforming from what appeared to be a functional development build into a comprehensive modular framework. Over time, additional components were introduced, command and control infrastructure was established, and the project accelerated toward a full-fledged operational platform. In parallel, we monitored the actor's supporting infrastructure and identified multiple operational security failures. These missteps exposed substantial portions of VoidLink's internal materials, including documentation, source code, and project components. The leaks also contained detailed planning artifacts, sprints, design ideas, and timelines for three distinct internal teams.
Steve Gibson [02:17:58]:
"Teams" they have in quotes, because it was all AI, spanning more than 30 weeks of planned development. At face value, this level of structure suggested a well-resourced organization investing heavily in engineering and operationalization. However, the sprint timeline did not align with our observations. We had directly witnessed the malware's capabilities expanding far faster than the documentation suggested. Deeper investigation revealed clear artifacts indicating that the development plan itself was generated and orchestrated by an AI model, and that it was likely used as the blueprint to build, execute, and test the framework. Because AI-produced documentation is typically thorough, many of these artifacts were timestamped and unusually revealing. They show how, in less than a week, a single individual likely drove VoidLink from concept to a working, evolving reality. As this narrative comes into focus, it turns long-discussed concerns about AI-enabled malware from theory into practice.
Steve Gibson [02:19:21]:
VoidLink, implemented to a notably high engineering standard, demonstrates how rapidly sophisticated offensive capability can be produced and how dangerous AI becomes when placed in the wrong hands. The general approach to developing VoidLink can be described as spec-driven development. In this workflow, a developer begins by specifying what they're building, then creates a plan, breaks that plan into tasks, and only then allows an agent to begin implementing it. Artifacts from VoidLink's development environment suggest that the developer followed a similar pattern: first defining the project based on general guidelines and an existing code base, then having the AI translate those guidelines into an architecture and build plan across three separate teams, paired with strict coding guidelines and constraints, and only afterward running the agent to execute the implementation. VoidLink's development likely began in late November 2025, and remember, we're at the end of January, when its developer turned to Trae Solo, an AI assistant embedded in Trae, an AI-centric IDE. While we do not have access to the full conversation history, Trae, again T-R-A-E if anyone wants to Google it, automatically produces helper files that preserve key portions of the original guidance provided to the model. Those Trae-generated files appear to have been copied alongside the source code onto the threat actor's server and later surfaced due to an exposed open directory.
Steve Gibson [02:21:20]:
This leakage gave us unusually direct visibility into the project's earliest directives. In this case, Trae generated a Chinese-language instruction document. These directives offer a rare window into VoidLink's early-stage planning and the baseline requirements that set the project in motion. Okay, so Trae, spelled T-R-A-E, is a creation of ByteDance, the famous Beijing-based creator of TikTok. It's been around since last February, so it's relatively new, and it's been maturing rapidly. What makes Trae appealing is that it's an IDE-centric solution, IDE being integrated development environment. Trae's documentation explains, writing: Trae IDE is your powerful AI-powered code editor from ByteDance, featuring Claude 3.5, GPT-4, and DeepSeek integration. By the way, that was back in February.
Steve Gibson [02:22:25]:
It's updated now. It's designed to be your coding companion, offering AI-assisted features like code completion, intelligent suggestions, and agent-based programming capability. When developing with Trae IDE, you can collaborate with AI to boost your productivity. Trae IDE provides essential IDE functionality including code editing, project management, extension management, version control, and more. It supports seamless migration from VS Code and Cursor by importing your existing configurations. During coding, you can engage in real-time conversations with the AI assistant for help, including code explanations, documentation generation, and error repair. The interface is fully optimized for both English and Chinese users. The AI assistant understands your code context and provides intelligent code suggestions in real time within the editor.
Steve Gibson [02:23:28]:
Simply describe your requirements to the AI assistant in natural language and it will generate appropriate code snippets, or autonomously write project-level code and cross-file code. Tell the assistant what kind of program you want to develop and it will provide relevant code or automatically create necessary files based on your description. With support for multiple programming languages and a rich plugin ecosystem, Trae IDE helps you build complete projects efficiently. So I want to give everyone a sense for what's happening in this segment of the world. So here's an independent review posting made last May, three months after Trae's release to the world. The guy wrote: Meet Trae AI, a free AI coding agent with Model Context Protocol (MCP). He wrote, AI code assistants are flooding the market, but most still feel like chatbots taped to an editor. Trae IDE takes a different route.
Steve Gibson [02:24:35]:
It ships an integrated development environment with a built-in agent framework that parses your entire code base, talks to outside tools through the Model Context Protocol, and, crucially, costs nothing to install. If you're still paying for a $20 monthly subscription, Trae AI, an AI coding agent that offers local-first setup and a zero-dollar price tag, is worth a test drive. So what is Trae? Trae AI is a free AI coding agent with Model Context Protocol that offers itself as a collaborative partner for software engineers. It's designed to fit into a developer's existing coding environment, not as a replacement, but as an intelligent AI assistant. Trae provides budget relief. The main editor and completion model are free, removing the line item that has kept many finance and ops leaders from greenlighting AI pair programming pilots. Instead of a single do-everything helper, Trae lets you spin up specialist agents, one for refactoring, another for writing tests, a third for documentation, with each AI agent getting its own prompt, tools, and guardrails. Enterprise-style data rules without enterprise pricing: code stays on your machine. Any files briefly sent for indexing are wiped after embeddings are created.
Steve Gibson [02:26:14]:
Regional hosting, US, Singapore, Malaysia, etc., keeps government teams calm about residency. What does Trae AI bring to the table? Working together: Trae's development environment is built to work with existing developer setups. The goal is to improve how developers and AI can cooperate for better outcomes and faster project creation. Direct AI communication: Developers can talk to Trae using straightforward language and simple instructions, and they can delegate work, facilitating a more interactive relationship between humans and AI. Custom AI assistants: Trae offers a flexible system for setting up specialized AI agents. It comes with a standard agent called Builder for everyday tasks. Past that, developers can create their own group of AI helpers, each with specific tools, skills, and ways of working, so the AI can be adjusted to fit precise project requirements.
Steve Gibson [02:27:14]:
Connecting to other tools: Trae can link up with different external applications. Currently, it uses a system known as Model Context Protocol, which allows its AI agents to gather information from outside resources to better complete the tasks they're given. Understanding project details: Trae gains a good grasp of a project's specifics by looking at code repositories, information from online searches, and documents provided by users. Developers can also set up custom rules to fine-tune the AI's behavior, making sure it handles tasks exactly as intended. And smart code suggestions: as developers type, Trae offers intelligent code completions, as it can anticipate what the developer is trying to write and automatically fill in code segments, helping speed up the writing process. The idea is to make the interaction feel natural, allowing developers to assign tasks or ask for help using simple commands. This approach could fundamentally change team dynamics, making AI less of a tool and more of a team member. And so, in conclusion, he adds, the arrival of free, capable AI coding agents like Trae AI isn't just another tech trend.
Steve Gibson [02:28:37]:
It shows a maturing of AI into a practical aid for a highly skilled and often costly workforce. Its mix of free pricing, configurable agents, and tight privacy controls offers a low-risk way to explore agentic coding without rewriting procurement rules. For CTOs and engineering managers, the math is straightforward: swap a paid copilot for a free, locally hosted agent system and redirect budget to GPU credits or headcount. While AI won't be replacing entire development teams anytime soon, tools that augment their abilities, especially free ones, are certainly worth trying. If your roadmap includes AI-assisted development but your finance team keeps asking for ROI proof, Trae may be the simplest yes you can give for the entire quarter. Okay, so I don't mean to suggest that this Trae IDE-centric AI coding system is in any way, you know, super special. Quite the contrary, in fact.
Steve Gibson [02:29:48]:
I'm sure the world is already being flooded with similar and similarly powerful AI-based solutions. I just wanted to share a sample of the tool that happened to be picked by the Chinese-language speaker who created this particular VoidLink malware. As is always the case for these sorts of things, my interest in sharing this on the podcast is giving the news that this event brings some context. And as I said at the top of the show, unfortunately, today I truly fear that the news is worse than bad, and I am unable to find a silver lining here. We're all familiar with the notion of asymmetric warfare, sometimes referred to as guerrilla war. The use of any malware to penetrate, infect, exfiltrate, and encrypt an enterprise's resources is inherently asymmetric. One lone malicious hacker hiding somewhere, anywhere on the Internet, perhaps literally in his mother's basement, is able to single-handedly attack and significantly negatively impact the national economy of the United Kingdom with one well-placed attack on Jaguar Land Rover.
Steve Gibson [02:31:24]:
It's the very definition of asymmetry. The problem with this emergence of AI, and its expected application to the empowerment of all forms of coding, is that I believe history and the evidence suggest that the bad guys will be gaining a far greater advantage from their malicious application of AI to create malware than the good guys will be gaining through their use of it. To do what? It's not at all clear what the good guys can do that isn't already being done. In other words, I cannot see how the benefit from the application of AI to both sides is in any way even close to being symmetric. I believe that AI's value is extremely asymmetric here, and that the asymmetric battle that's been waged for the past decade is about to become far more asymmetric. In years past, we've observed that hacker talent encompasses a wide range, from the so-called script kiddies at the low end to the elite hackers at the high end. And we know that this also takes a pyramid shape, with a great many lower-end wannabe hackers at the bottom and a much more rarefied few at the top of the pile. Recently we've seen that the followers of this podcast have already been employing AI to create successful solutions that they would never have been able to create otherwise.
Steve Gibson [02:33:26]:
And you, Leo, as a lifelong coder, could have written your news feed reader from scratch the old fashioned way.
Leo Laporte [02:33:35]:
Yeah. Though I'd still be working on it.
Steve Gibson [02:33:37]:
Yeah, exactly. Claude's AI served as a powerful accelerant.
Leo Laporte [02:33:44]:
Yep.
Steve Gibson [02:33:44]:
But we know from the testimony of our listeners that for many of them who were coding-adjacent but not coders, AI has now bridged that gap to allow them to create their own functioning tools that never existed before. So what AI has already done is completely eliminate coder-wannabe script kiddies from the low end by empowering them to author their own powerful malicious code. They no longer need to follow somebody else's script. Any mischief they can think of to get up to, AI will happily manifest in code just for them. Consequently, we are almost certainly facing a forthcoming explosion in the volume and variety of malicious attacking code. I would like to be able to imagine some form of silver lining for the defenders in this asymmetric war, but as I said, I have been unable to come up with any. What we see is an epidemic of misconfiguration and lazy configuration, communication failures and finger-pointing, lingering old designs and practices, and systems that remain online despite not having received any attention for years. AI is not going to fix any of that.
Steve Gibson [02:35:27]:
We also see employees in positions of trust on internal enterprise networks being tricked into clicking malicious links and inviting malware inside the house. No form of fancy AI coding is going to fix any of those things. Every single one of those is a human factors failure. We already know how to fix every one of those things, but we haven't cared enough to do so. And there's reason to believe, I think, that we're about to pay the piper even more fully than we have been. A great many of the world's enterprises are sitting ducks, and entire new generations of would-be hunters who have been using slingshots have all just been up-armed with advanced cyber rifles.
Leo Laporte [02:36:23]:
Kind of makes me want to sit down and try my hand at some malware myself.
Steve Gibson [02:36:30]:
Got a three hour.
Leo Laporte [02:36:34]:
Wow.
Steve Gibson [02:36:35]:
Yeah, I mean, I don't think it's even. I think that the bad guys are going to jump on this. Oh yeah, it's going to spread. They'll be sharing tips and tools and tricks within their communities. It's going to be a mess. And again, we know, I mean, yes, there are flaws in software, that's a problem, but those are not the stories that we're covering anymore. Largely, it's big mistakes being made by enterprises, their employees, and their IT people. They're just not fixing the things we already know how to fix. They're not patching servers for which there have been patches for years.
Steve Gibson [02:37:22]:
And AI isn't going to help there, but AI is going to help the malware.
Leo Laporte [02:37:28]:
It's job security for us. And that's the good news. Yeah, very interesting, very interesting.
Steve Gibson [02:37:36]:
You know, Leo, when I made the jump from three digits to four digits in my software to be able to handle this podcast, maybe it's good that now we can go to 9,999.
Leo Laporte [02:37:46]:
We might have to at this rate. Fortunately, by then it'll be our AIs doing the shows, not us. We'll be resting somewhere. Steve Gibson. He hangs his hat at GRC.com, the Gibson Research Corporation. That's where you'll find, of course, all of the great work that Steve does, most of it free stuff he gives away, like ShieldsUP. But there are two paid programs there, his bread and butter.
Leo Laporte [02:38:14]:
One is SpinRite, the world's best mass storage maintenance, recovery, and performance-enhancing utility, which everyone who has mass storage needs. The other is the brand-new DNS Benchmark Pro. Both of them at GRC.com. He also, of course, has the podcast there. He's got several unique versions: a 16-kilobit audio version for the bandwidth-impaired, a 64-kilobit audio version for people whose ears are good enough, and he also has transcripts written by an actual human, not AI: Elaine Farris. That's increasingly going to be a selling point, by the way. Oh yeah, a human did this one, which makes it extra special. He also has the show notes there, which are really great. I mean, every week he puts a lot of work into this.
Leo Laporte [02:39:00]:
It's 21 pages, I don't know how many thousands of words. But this is more than the weekly column you used to write for. Oh yeah, for InfoWorld, right?
Steve Gibson [02:39:09]:
Yeah, that actually was one page.
Leo Laporte [02:39:12]:
Yeah.
Steve Gibson [02:39:12]:
And it was an L shape that fit around an ad.
Leo Laporte [02:39:17]:
Probably you felt constrained by the number of words you were allowed. So now, no constraints. He does the news as much as it needs, no more, no less. If you want to get that emailed to you, you can also do that. Go to grc.com/email.
Leo Laporte [02:39:33]:
The main purpose of that page is just to whitelist your email address so you can email him with Pictures of the Week or comments or suggestions. But there are also two checkboxes below it: one for the weekly newsletter, the Security Now show notes; the other for a very infrequent email he'll send out when he has new products to tell you about. He's only sent it out once in his whole life, so don't count on a lot of email from Steve. I'm just saying. There are also lots of articles, lots of great stuff. It's the kind of site you spend an hour browsing, and the time just whizzes by, and you forget where you've been, and oh my goodness.
Leo Laporte [02:40:10]:
And there's another thing. And another thing. GRC.com. We have copies of the show at our website, twit.tv/sn for Security Now. There's a YouTube channel dedicated to Security Now. There's also, of course, the easiest way to get it: subscribe in your favorite podcast client. You'll get it automatically as soon as we're done. There's audio and video, so you could subscribe to one or the other or both.
Leo Laporte [02:40:33]:
Leave us a five-star review if you will. Let the world know about Security Now. We'll be back here next Tuesday, as I said, round about 1330 Pacific Time, 1630 East Coast Time, 2130 UTC. You can watch us live. We'll be looking for you then. Have a great week, Steve, and I'll see you next time on Security Now.
Steve Gibson [02:40:57]:
Thanks, buddy. Till then, in February.
Leo Laporte [02:41:01]:
Hey, everybody, it's Leo Laporte. It's the last week to take our annual survey. This is so important for us to get to know you better. We thank everybody who's already taken the survey, and if you're one of the few who has not, you have a few days left. Visit our website, twit.tv/survey26. Fill it out before January 31st. And thank you so much. We appreciate it.
Leo Laporte [02:41:26]:
Security Now.