Security Now Episode 918 Transcript

Please be advised this transcript is AI-generated and may not be word for word. Time codes refer to the approximate times in the ad-supported version of the show.

Jason Howell (00:00:00):
Coming up next on Security Now, Leo Laporte is out. I, Jason Howell, am filling in for Leo, joined of course by Steve Gibson, who has all the news, all the information for you this week. ChatGPT and company secrets, very interesting stuff there. Apple's sweeping update of older devices, a WordPress attack campaign that's already six years old, a trifecta of Android security news updates close to my heart anyways. And the main event: the complexity of modern video codecs and the embedded security risk within, as a result of all that complexity. Steve breaks down that complexity, next on Security Now.

... (00:00:42):
Podcasts you love from people you trust. This is TWiT.

Jason Howell / Steve Gibson (00:00:52):
This is Security Now with Steve Gibson. Episode 918, recorded Tuesday, April 11th, 2023. A dangerous interpretation.

This episode of Security Now is brought to you by DeleteMe. Reduce enterprise risk by removing employee personal data from online sources. Protect your employees and your organization from threats ranging from doxing and harassment to social engineering and ransomware, by going to join tv. And by Cisco Meraki. With employees working in different locations, providing a unified work experience seems as easy as herding cats. How do you rein in so many moving parts? Well, the Meraki cloud-managed network. Learn how your organization can make hybrid work, work. Visit. And by Kolide. Kolide is a device trust solution that ensures that if a device isn't secure, it can't access your apps. It's zero trust for Okta. Visit and book a demo today. It's time for Security Now. That's right, we're gonna talk all about this week's big security news. And actually, you probably already know by now.

I'm not Leo, I'm Jason filling in for Leo. Leo is deep in his vacation. I actually just returned from vacation, so I'm kind of clueless about a lot of the news. That's why I am super thrilled that we have Steve Gibson, the man of the hour, to talk all about this week's security news. What's up, Steve? Well, Jason, it is great to see you sitting there <laugh> behind the microphone. Yeah, I'm glad to have you this week. Oh, oh, that's right. No, that's right, I forgot, you're sitting right next to me. You can help me troubleshoot these things on my phone, could you? Yeah. Tell me what you see there. No, don't tell. I'll let you know. Don't tell anyone what you see there <laugh>. It's my password, and no one needs to know that. Oh, good to see you, Steve.

Thank you for doing the show with me. Glad to be part of the party. Glad to have you co-hosting with us. So this is Security Now episode 918 for, what is this, April 14th? April 11th. April 11th, just for the <laugh>. It's okay. You're April 11th, your security news for the future. You always have been very futuristic. There you go <laugh>. Good cover. So as we've been doing recently, we're gonna be seeking some answers this week. What did Microsoft and Fortra, a sponsor of the network, by the way, ask from the courts, and what did the courts say in return? When can chatting with ChatGPT leak corporate secrets? Why has Apple suddenly updated many much older of their iDevices? Why bother naming a six-year-old ongoing WordPress attack campaign? Which Samsung handsets went out of security support, and what two user-focused policy changes has Google just made for their Android users?

And do we really have additional ChatGPT hysteria? Yeah, believe it or not. After answering those questions and examining an example of the benefit that comes from rewriting solid-state non-volatile storage, we're gonna take a rather deep dive into a tool that, well, it was meant for good, but I fear it may see more use for evil. And of course, we do have a fun and engaging Picture of the Week. So I think another great podcast for our listeners. Oh, of course, today's podcast is titled A Dangerous Interpretation. Excellent. We've got so much to talk about, some of those topics close to my heart in the Android world, so I'm curious to hear your take on them. A lot coming up, so don't go anywhere. Wanna first take a moment to thank the sponsor of this episode of Security Now, and that is DeleteMe.

If you're dedicated to protecting the data inside your company's network, or maybe you're trying to protect executive and employee data outside of your network, that's what DeleteMe is specifically designed for. It's designed for the enterprise. DeleteMe for Business makes it simple to remove executive and employee personal data available on the open web. Bad actors, you know, we talk about it on this show all the time, Steve tells you all about this stuff, utilize publicly available data online, in particular in social engineering attacks. This includes data from sources like data brokers, right? The more they know about a person, the more that data is accessible to them, and especially if it's data that normally other people don't have access to, that can be really valuable information for these attacks. It's social engineering. The data's being effectively weaponized against executives and employees in ways security professionals may be overlooking. Easy access to employee personal data online and vulnerable data leads to potential harm.

Things like doxing, harassment, social engineering, like I was just talking about. Ransomware attacks increase with every volatile moment. So who actually is impacted by this data being exposed? Well, executives and board members, right? They're targeted, they're harassed online by cyber criminals, activists, ex-employees who are using their families' personal data to get to them. Executives actually have a 30% higher PII exposure risk than the average employee. Also, of course, public-facing employees may have their home addresses and their affiliations exposed online by activists, angry customers, and other bad actors. And then individual contributors' personal email addresses and mobile phone numbers are used to socially engineer their way into enterprise systems. It's all about the value of this very personal information in getting what these bad actors want, right? That information, when they're speaking to the right, or depending on your perspective, the wrong person, can gain them access to other things, because people just simply assume, oh, well, you know that information, okay, then you're the right person to talk to.

That's just one example. This information is super valuable. DeleteMe actively monitors for and removes personally identifiable information for your employees to reduce enterprise risk. So you can protect yourself, you can protect your employees, and reduce risk with DeleteMe in just five easy steps. Employees, executives, and board members complete a quick signup. DeleteMe scans for exposed personal information based on the information that you plug into that signup process. Opt-out and removal requests begin from there. Then an initial privacy report is shared and ongoing reporting is initiated, well, on an ongoing basis of course. And then DeleteMe provides continuous privacy protection and service all year. I signed up for DeleteMe. Of course, I've been away on vacation, so I've only just begun the process. But even in signing up and kind of plugging in the information, there's a part of that process where you upload kind of a credential, like a photo of your driver's license.

And that's because some of these data brokers won't remove things unless they absolutely know who you are and that you are the person that's requesting this removal. So it kind of goes a step further than just, I think I want that information removed. It's like, I want that information removed, here's my information, here's my proof that I am who I say I am, get it off the net entirely. So I'm really curious to see how this ends up, right? Because this information is valuable and it can be used for harm. Protect your employees and your organization by removing their personal data from online sources. All you have to do is go to join That's join tv. And plug that in. Check it out for yourself, and protect yourself online. Find out what kind of information you have hiding out there that you didn't even know was out there.

And get it off the web so that it can't be used against you or your company somewhere down the line. And we thank DeleteMe for their support of Security Now. All right, Picture of the Week time, always a fun glimpse into the bizarre, and this week is no different <laugh>. So I captioned this one: Why developers sometimes feel misunderstood <laugh>. It's a two-frame photo. The upper frame shows a picture of a display from a computer that's just been turned on, and the display has a message coming, you know, from the BIOS or the firmware. And it says, "Keyboard not found. Press F1 to continue, DEL to enter setup," which of course you can't do, right? No, because there's no keyboard. And then the second frame just sort of shows some random guy in his, you know, shirt and tie <laugh>, apparently staring at his screen, looking like he's kind of dead.

He's dead inside at that point. It's like, keyboard? And he's like, did I misread this? Keyboard not found. Press F1 to continue. And maybe he did press it a couple times. Nothing of course happened, because there's no keyboard. But I mean, I see one in front of him, it's on the desk, apparently it's just not plugged in or something. Yeah. Is the reason that it gives, because this is ridiculous in and of itself, but is the thought that, like, once you plug in the keyboard, then you can proceed? But I guess if that was the thought, like, I suppose it should spell that out, like, keyboard not found, you know, attach keyboard, and then dot dot dot. Yeah, I think you're giving, I'm trying to make sense. Too much. You're giving too much credit to the people who wrote this system.

Yeah. The idea was that there are a wide range of things that can go bump in the boot-up process, right? Like the CMOS, yeah, may have discharged, and so there's no checksum on the CMOS setup information, or who knows what. So there's like a bunch of different things that could be wrong, and the universal response to any of them is press F1 to continue, DEL to enter setup <laugh>. Unfortunately, one of those things that can go wrong is, we just checked, there's no keyboard here. Yeah. Which prevents you from, unfortunately, doing any of that. You still get the default, how do we proceed? Well, press F1 <laugh>. Okay. All right. I'd be the guy in the tie looking dead inside too. Why developers sometimes feel misunderstood <laugh>. You might say it's well deserved <laugh>.

Okay, so a few weeks ago we talked about how wrong it was that Sony had chosen to legally force the DNS provider Quad9 (9.9.9.9) to remove DNS resolution from its service as a roundabout means of blocking access to pirated Sony intellectual property, which, you know, amounted to some MP3 tunes which were being hosted on the internet. The proper action, which we discussed at the time, would've been to legally pursue the provider hosting the content, or failing that, maybe go to the offending domain's registrar and shut down the site that way. But Sony's complaint apparently said, well, in fact, their complaint to the court did say, well, they didn't return our calls. So anyway, crazy. I was reminded of this due to some recent news last week from Microsoft's Digital Crimes Unit, their DCU, from whom Sony could take a few lessons. Three organizations: Microsoft's DCU, the cybersecurity company Fortra, which also happens to be a sponsor here on the TWiT network, and the Health Information Sharing and Analysis Center, which abbreviates to Health-ISAC.

They've all joined together to take technical and legal action to disrupt cracked, illegal copies of Cobalt Strike and other abused Microsoft software which have been used by cyber criminals to distribute malware, including ransomware. So in a change in the way Microsoft's DCU has previously worked, they're teaming up with Fortra to remove illegal legacy copies of Cobalt Strike so they can no longer be used by cyber criminals. And we've talked about Cobalt Strike before, but it's due for a bit of a review. Although Cobalt Strike is frequently associated with malware, in fact, you know, that's the only context in which we're ever talking about it, because of the nature of this podcast, Cobalt Strike is actually a legitimate Fortra software product. It's a very popular and very powerful post-exploitation tool used to simulate adversaries. And as it happens, sometimes older versions of the software have been altered by the bad guys, who use its extreme power not for benign red team testing exercises, but for their criminal purposes.

Cause I mean, it's the real deal. It actually does penetrate. Cuz if you're trying to do red team exercises, well, you need something that's real, right? But unfortunately, in some cases, you know, it's indiscriminately real if it's been hacked. So cracked, illegal copies of Cobalt Strike have been used to launch destructive attacks such as those against the government of Costa Rica and the Irish Health Service Executive, and countless more. Microsoft SDKs and APIs are abused as part of the coding of the malware, as well as the criminal malware distribution infrastructure used to target and mislead victims. Okay? So specifically, the ransomware families that have been associated with or deployed by cracked copies of Cobalt Strike have been linked to more than 68 specific ransomware attacks impacting healthcare organizations in more than 19 countries around the world.

So that's been the focus of this work, and it's what earned them a court order. These attacks have cost hospital systems millions of dollars in recovery and repair costs, plus interruptions to critical patient care services, including delayed diagnostic imaging and lab results, canceled medical procedures, delays in delivering chemotherapy treatments, and so on. So, I mean, and we've talked about this before, it's sort of despicable that healthcare organizations are targets of ransomware, and the bad guys couldn't care less. Of course, they're not located in the same country as those whose services are being denied. So, you know, I guess this should be expected. So anyway, the Friday before last, March 31st, the US District Court for the Eastern District of New York issued a court order which allows Microsoft, Fortra, and that Health-ISAC group to forcibly disrupt the infrastructure being used by criminals to facilitate their attacks.

The court order allows that trio to notify relevant ISPs and computer emergency readiness teams, you know, CERTs, who will then assist in taking the infrastructure offline, which severs the connections between criminal operators and the infected victim systems. So, you know, they're not saying pretty please. They're taking the steps to remove these systems from the internet. And it's, you know, international treaties and sort of the assumption that whatever is on the internet has the right to be on the internet that makes this difficult. But it's necessary to elevate this to a court order in order to make it happen. And I would argue this is what Sony should have done. They're certainly big enough to get a court's attention and explain the problem and say, hey, these guys aren't returning our phone calls, so, you know, give us the right to go to their hosting provider and have the hosting provider turn them off.

Rather than, we know, obtaining a court order against an innocent DNS provider. Which also, by the way, didn't really solve the problem anyway, right? Because anybody not using 9.9.9.9 would've still been able to get to and access Sony's content anyway. Sony should have done whatever was necessary to take the offending pirate server off the air, just as this group is now doing with instances of malicious Cobalt Strike. Microsoft noted that they're also expanding a legal method, that is, this is an expansion of a legal method used successfully to disrupt malware and nation-state operations, to target the abuse of security tools used by a broad spectrum of cyber criminals. Disrupting cracked legacy copies of Cobalt Strike will significantly hinder the monetization of these illegal copies, you know, because you can find them on the net, and slow down their use in cyber attacks.

So there's gonna be a second-order effect. On the one hand, you actually disconnect the ones that are in use, but at the same time, people are gonna be less likely to go purchase new cracked copies cuz they know that they're gonna be jumped on, you know, wherever they are in the world, thanks to this court order. So they'll hopefully reevaluate and change their tactics away from using Cobalt Strike. Fortra, the publisher of Cobalt Strike, noted that in recognition of the power of their tools, they've always taken considerable steps to prevent the misuse of their software, which includes stringent customer vetting practices. You know, you don't wanna just sell this to anybody. However, criminals are known for stealing older versions of security software, among them Cobalt Strike, creating cracked copies to gain backdoor access to machines and deploy malware. Ransomware operators have been observed using cracked copies of Cobalt Strike and Microsoft software to deploy Conti, LockBit, and other ransomware as part of the newer model, which we've talked about a lot, of ransomware now being offered as ransomware-as-a-service.

Although the identities of those conducting the criminal operations are currently unknown, the group has detected malicious infrastructure across the globe, including in China and in our good old USA, as well as Russia. In addition to financially motivated cyber criminals, they've observed threat actors using cracked copies to act in the interests of foreign governments, including from Russia, China, Vietnam, and Iran. Microsoft has said that its Defender security platform has detected, this is a little sobering, around one and a half million infected computers communicating with cracked Cobalt Strike servers over just the past two years. One and a half million infected machines communicating with cracked Cobalt Strike servers just in the past two years. And in 2020, a Recorded Future report found that more than 1,400 malware command-and-control servers were using Cobalt Strike as their backend at the time. The Censys search engine currently returns around 540 Cobalt Strike servers hosted in the wild.

So again, as I said, they can be found. Now Microsoft, Fortra, and the Health-ISAC have the means to go get them turned off. And it's unfortunate that bad guys are using powerful software meant for training and attack simulation, but I guess it's not surprising. It's the world we live in, and going after the bad guys' communications, since they're still needing to use the public internet for that communication, is clearly the right solution when all other measures fail. It makes a lot more sense than Sony compelling a single DNS provider to block a bad guy's DNS lookups. That's just crazy, and it's not gonna be very effective.

Okay, so can ChatGPT keep a secret? That's not surprising, that <laugh> we're talking about ChatGPT, everybody is. I stumbled upon this and I knew that our listeners would get a kick out of it. It seems that some employees of Samsung Semiconductor were using ChatGPT to help in their diagnosis and repair of some problematic proprietary Samsung code. But in order to do this, they needed to upload the code and some documents to ChatGPT so that it could see what was puzzling these employees. The only problem was the uploads contained Samsung's sensitive proprietary information. After finding three separate instances of this happening, Samsung warned its employees against using ChatGPT in their daily work. The data uploaded to ChatGPT included internal documents and code meant to identify defective chips.

Samsung now limits the length of questions submitted to ChatGPT to one kilobyte while the company develops its own internal ChatGPT-like AI for its own private use. And it really hadn't occurred to me before, but it brings to light something that, as I said, I just hadn't considered, which is that it will likely become quite common within an organization to want to be able to leverage the power of these new large language model AIs to aid their own internal proprietary work and research. But what if the necessary details of that research cannot be allowed to leave the company's control? So this is a very different application than, you know, AI-assisted travel planning, or, you know, we need some help coming up with some new creative evening meals for the family.
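By the way, Samsung's one-kilobyte cap is the kind of guard that's simple to sketch. Here's a minimal, hypothetical pre-submission filter, where the byte limit and the marker list are illustrative assumptions and not Samsung's actual implementation, showing the general idea of gating what's allowed to leave the company for an outside chatbot:

```python
MAX_PROMPT_BYTES = 1024  # hypothetical cap, mirroring Samsung's reported 1 KB limit

# Hypothetical markers suggesting proprietary content is being pasted in.
BLOCKED_MARKERS = ("CONFIDENTIAL", "INTERNAL USE ONLY", "PROPRIETARY")

def check_prompt(prompt: str) -> tuple[bool, str]:
    """Return (allowed, reason) for a prompt before it leaves the network."""
    size = len(prompt.encode("utf-8"))
    if size > MAX_PROMPT_BYTES:
        return False, f"prompt is {size} bytes, limit is {MAX_PROMPT_BYTES}"
    upper = prompt.upper()
    for marker in BLOCKED_MARKERS:
        if marker in upper:
            return False, f"prompt contains blocked marker: {marker}"
    return True, "ok"
```

Of course, a length cap and a keyword list are blunt instruments; they reduce the odds of a bulk code paste, but they can't recognize proprietary information on their own, which is exactly why companies want the whole model inside their walls.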

And this in turn suggests that before long we're gonna begin seeing AI companies offering to sell standalone, pre-programmed AI systems for exclusive use within, and by, a single organization. And such systems will likely be compartmentalized so that the published AI side can be refined and upgraded over time while keeping any proprietary information that's incrementally informing that AI separate and safe. And, you know, Jason, it is still difficult for me to believe that we're actually talking about this, like, okay, soon you'll be able to purchase an AI thing, like an AI cabinet, and, you know, stick it in the server room of your organization and connect it into your network, and all your employees will be able to chat with this AI thing that you purchased. But I think what's interesting to me is that I feel like for quite a while in technology circles, chat bots in general have been kind of a punchline because they've never worked nearly as well as promised.

And now suddenly, the lower right corner <laugh>, right, it comes up in the lower right corner of your screen. Yes. It's like, you know, oh geez, hi there, yes, you know, I see that you're alive. What would you like? Would you like some help? It's like, how do I get rid of this thing? Your help is brutal and not enjoyable or useful whatsoever. And now suddenly it's the solution to all of our problems <laugh>. Suddenly it's good enough, or pretends to be good enough. It is just amazing how quickly this has changed around. I think you're absolutely right, though, and they've gotta already be working on this, right? Because, again, in all of these different facets, the promise or the potential that, you know, proponents of AI are putting out there as far as the benefits to, you know, companies thinking about their businesses in different ways and everything.

All of that hinges, like you point out, on the need for private, secure systems to hold onto that information, so that Samsung's employees aren't putting out proprietary information in order to gain the benefits of the AI systems. And I'd be really surprised if these companies aren't already working on, we've got an AI to bring all these benefits to your company, it's within your walls, you know, and you can rest assured that the information you put in there never goes anywhere outside of your company. They're working on that right now. They have to. There's so much money to be made from that. It's that little cube in the corner with that weird blue glow. Feed me. Who's in there? Feed it your data, developer. That's right <laugh>. So last Friday, Apple released security updates for iOS, iPadOS, macOS, and Safari.

The updates will remove two flaws and thus terminate the use of a pair of zero-days that were being exploited in the wild. If you check, you are looking for version 16.4.1 in iOS, iPadOS, and Safari, and macOS Ventura will update to 13.3.1. And sure enough, this morning I checked my iPhone and it was still lagging behind, four days after the updates were released on Friday. I think this demonstrates that this is not a five-alarm fire, and Apple's goal is to eventually bring all devices current without it being any big emergency. These individual updates are large, right? And there are a lot of Apple devices out there in the world which are all eager to remain current, but there's just no way, and actually almost never really any need, to send the same large chunk of code out to every wandering iDevice simultaneously.

I mean, that's just too much data. So for things like this, Apple is clearly trickling the updates out unless a user specifically checks, as I did, to see whether they're current. And immediately my phone said, oh, there is something new, would you like it? And of course I said, yeah. Okay, so the point is, we're not seeing mass attacks using new vulnerabilities, because that would bring them to everyone's attention quickly and cause them to be found, identified, and immediately fixed. Instead, those who are finding ways to penetrate today's mobile devices, and boy, we'll be talking about this at the end of the podcast, are leveraging the fact that they're unknown for targeted infiltration, and it's only valuable to them as long as they stay unknown. And for something like this, when they're found, well, yeah, they're found, but we don't know how long they were in use before they were found.
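As a side note, one common way vendors trickle an update out to a huge fleet, and to be clear, this is a generic staged-rollout technique, not Apple's documented mechanism, is deterministic hash bucketing: each device hashes its identifier into a stable bucket, and the update is offered automatically only once the rollout percentage covers that bucket, while a manual "check for updates" bypasses the gate. A sketch:

```python
import hashlib

def rollout_bucket(device_id: str, buckets: int = 100) -> int:
    """Map a device ID to a stable bucket in [0, buckets)."""
    digest = hashlib.sha256(device_id.encode("utf-8")).digest()
    return int.from_bytes(digest[:4], "big") % buckets

def update_offered(device_id: str, rollout_percent: int) -> bool:
    """True once the staged rollout percentage covers this device's bucket."""
    return rollout_bucket(device_id) < rollout_percent
```

Because the same device always lands in the same bucket, raising the rollout percentage from 1 toward 100 over several days sweeps the entire fleet without the server ever having to remember who already got the offer.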

So, you know, that's always a big question. In this case, the supposition is supported by the fact that these two flaws that were fixed were reported to Apple by Google's TAG team, you know, Threat Analysis Group, and Amnesty International's Security Lab. The fact that Amnesty International's Security Lab was involved further supports the notion that the devices attacked were being used by individual high-value political targets. There are, you know, headlines in the tech press about this urging their readers to update immediately due to the danger of being, you know, a few days late. And I would say update when you get around to it, if you're worried. Otherwise, Apple will get around to it for you, and eventually you'll be asked to reauthenticate to your device after an otherwise transparent overnight auto-update. The first of the two new vulnerabilities was a use-after-free issue in WebKit that leads to arbitrary code execution when processing maliciously crafted web content.

So that's frightening, right? You just get someone to go to a webpage and you can execute arbitrary code on their phone. Not good. The second one was an out-of-bounds write issue in IOSurfaceAccelerator that enabled apps to execute arbitrary code with kernel privileges. Okay? So that wasn't an over-the-internet remote problem, but a malicious app that had snuck itself into the App Store, or was originally good and then an update of the app made it bad, would be able to break out of the protection of the kernel and obtain kernel privileges. So, you know, the point is, those are both bad. Those are what the bad guys would die to know about, so long as nobody else knows about them. So Apple indicated that it has addressed the first with improved memory management and the second with better input validation, adding that it's aware that the bugs quote may have been actively exploited.
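For that second bug, "better input validation" in the context of an out-of-bounds write generally means checking an attacker-controlled offset and length against the destination buffer before writing. This toy sketch, which illustrates the class of check and is certainly not Apple's kernel code, shows the difference one guard makes:

```python
def safe_write(buf: bytearray, offset: int, data: bytes) -> None:
    """Copy data into buf at offset, rejecting out-of-bounds requests.

    Without this check, a hostile (offset, length) pair could write past
    the end of the buffer; in kernel code, that kind of scribble is a
    classic privilege-escalation primitive.
    """
    if offset < 0 or len(data) > len(buf) - offset:
        raise ValueError("write would fall outside the buffer")
    buf[offset:offset + len(data)] = data
```

The point is that the validation happens before the write, against the real size of the buffer, rather than trusting whatever the caller, or the malicious app, handed in.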

You know, and from all of this, I salute Apple for being as security conscious as they are. But out of all of this, that's the only thing that annoys me. You know, no one is more proactive than Apple, and Google is clearly at parity with them. But it would be nice if publishers were more forthcoming with their language, and I suppose that their attorneys won't let them be. You know, those were both zero-day code execution flaws, obviously having been used in the wild. You know, no one should blame Apple for this, but I suppose the point is that someone would blame them if they were told the whole truth. So at least we can be thankful that our devices are being kept up to date. And it's also worth noting that further details about these two vulnerabilities have been deliberately withheld due to their active exploitation, and to prevent additional bad guys from learning about and also abusing them.

In other words, you know, they're in use right now. One last point is that we can presume that Apple is aware that these flaws are being abused, because older devices that would not normally receive updates are doing so. Yesterday, Apple backported patches to fix these problems to older iPhones, iPads, and Macs that had not been otherwise receiving recent security updates. Now, updates are also being made available for these older, out-of-patch-cycle devices. And we've seen Apple do this before, right? When something is really a problem, they'll break their policy, maybe in the interest of their users, or maybe because they received intelligence telling them that known older devices are falling victim to these because they're not being patched. In this case, all models of the iPhone 6s and 7, the first iPhone SE, the iPad Air 2, the fourth-gen iPad mini, and the seventh-gen iPod touch all get updates.

Well, it happens, I have a seventh-generation iPod touch sitting next to me here because it still has a headphone jack. After reading that it might be getting an update, I checked, and again, it kind of woke up. Oh yeah, would you like that? And anyway, so yes, I updated it and it's current now too. Again, eventually Apple would've gotten around to it. So I'm just saying that this is not something Apple does, you know, for a may-have-been-actively-exploited vulnerability where they're not sure. No, they know these flaws are being used against their selected users in the field. And oh, also note that even older macOS Big Sur is being updated, to 11.7.6, and Monterey has been updated to 12.6.5. So, you know, they're cleaning up some clearly very bad bugs. The fact that Amnesty International was involved in finding these really suggests that they were, you know, used by the commercial spyware industry, sold to governments probably, and then, you know, being used illegally to infect people.

And Jason, I think we should take our second break, or, or our first break, and tell our listeners about our second sponsor. Indeed we shall. And then we'll get to even more security news coming up. Oh, we will. Oh, indeed. We got a lot more to talk about. But first, let's take a moment and thank the sponsor of this episode of Security Now, and that is Cisco Meraki, the experts in cloud-based networking for hybrid work. Whether your employees are working at home, maybe they're working at the cabin in the mountains, that would be nice, or in a lounge chair at the beach in Costa Rica, I'm just throwing that out there, a cloud-managed network provides the same exceptional work experience no matter where they are. You may as well roll out the welcome mat, because hybrid work, well, it's here to stay. Alright. It's become a staple in many <laugh> aspects.

Hybrid work works best in the cloud, and it has its perks for both employees and leaders. Workers can move faster and deliver better results with a cloud-managed network, while leaders can actually automate distributed operations, build more sustainable workspaces, and proactively protect the network. An IDG Market Pulse research report conducted for Meraki highlights top-tier opportunities in supporting hybrid work. Hybrid work is a priority for 78% of C-suite executives. Leaders, of course, want to drive collaboration forward while staying on top of, or even boosting, productivity and security. Hybrid work also has its challenges, as we know. The IDG report raises the red flag about security, noting that 48% of leaders report cybersecurity threats as a primary obstacle to improving workforce experiences. Always-on security monitoring is part of what makes the cloud-managed network so awesome. Now, it can use apps from Meraki's vast ecosystem of partners: turnkey solutions built to work seamlessly with the Meraki cloud platform for things like asset tracking, location analytics, and so much more.

And by doing that, they get a whole lot of benefits. They can use it to gather insights on how people are using their workspaces. In a smart space, for example, environmental sensors can actually track activity and occupancy levels to stay on top of cleanliness. Then there's reserving workspaces based on vacancy and employee profiles, also called hot-desking. This allows employees to scout out a spot, and they can do it in a jiffy. Locations in restricted environments can be booked in advance and then include time-based door access. And then there's mobile device management: integrating devices and systems that actually allow IT to manage, update, and troubleshoot company-owned devices, even when the device and employee are in a remote location. You can turn any space into a place of productivity and empower your organization with the same exceptional experience no matter where they work, with Meraki and the Cisco suite of technology.

Super powerful stuff. You can learn how your organization can make hybrid work work. All you gotta do is visit I'll spell that for you: M-E-R-A-K-I. to check it out for yourself. And we thank Cisco Meraki for their support of Security Now. All right. I feel like it's probably safe to say that every episode that I fill in for Leo, there's some sort of security story about WordPress <laugh>. Yes, it's so everywhere, and I guess that's the point. Well, yes. So, you know, it's a constant on the podcast because it turns out, and I'm always surprised by this statistic, 43% of the Internet's websites, wow, are built on WordPress. Wow. It's amazing. Yeah. You know, it's far and away the most common CMS, you know, content management system.

WordPress has 43%. The runner-up, in distant second place, is Shopify with 4.1%. Huh? So 43% for WordPress, 4.1% for the number two guy. Yeah. Third place is Wix at 2.3%, followed by Squarespace at 2%. So yeah, WordPress is the big target, far and away, on the web, so not surprising that they're the ones taking all the incoming. Yeah. Last Wednesday, the team at Sucuri, S-U-C-U-R-I, finally gave a name to a long-running WordPress exploitation campaign which they've been tracking for years. They named it Balada, B-A-L-A-D-A, the Balada Injector. Their post last week was titled "Balada Injector: Synopsis of a Massive Ongoing WordPress Malware Campaign." And in their post they said, our team at Sucuri has been tracking a massive WordPress infection campaign since, get this, 2017.

But up until recently, we never bothered to give it a proper name. Typically, we referred to it as an ongoing, long-lasting, massive WordPress infection campaign, that's a mouthful every time you want to talk about it, that leverages all known and recently discovered theme and plugin vulnerabilities. Other organizations and blogs have described it in a similar manner, sometimes adding terms like malware campaign or naming domains that it was currently using, which amount to several hundred over the past six years. This campaign, they wrote, is easily identified by its preference for a particular obfuscation function, String.fromCharCode, which is a JavaScript function. They said it's also known for, or recognized by, the use of freshly registered domain names hosting malicious scripts on random sub-domains, and by redirects to various scam sites, including fake tech support, fraudulent lottery wins, and more recently push notification scams displaying bogus CAPTCHA pages asking users to "Please Allow to verify that you are not a robot."
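Since that String.fromCharCode signature came up, here's a minimal sketch, in Python rather than the JavaScript an injected page would actually carry, of how this style of obfuscation works. The payload string here is purely hypothetical:

```python
# Minimal sketch of String.fromCharCode-style obfuscation (hypothetical
# payload; Python stand-ins for what injected JavaScript actually does).

def obfuscate(payload: str) -> list[int]:
    # The injector stores only character codes, so signature scanners
    # never see the literal attack string in the page source.
    return [ord(c) for c in payload]

def deobfuscate(codes: list[int]) -> str:
    # What String.fromCharCode(...) does at runtime: rebuild the string.
    return "".join(chr(c) for c in codes)

if __name__ == "__main__":
    codes = obfuscate("redirect('https://scam.example/')")  # hypothetical
    print(codes[:5])           # the page carries only numbers like these
    print(deobfuscate(codes))  # the browser reconstructs and runs the payload
```

The defensive flip side is that long runs of fromCharCode arguments are themselves a detectable fingerprint, which is presumably part of what Sucuri's signatures key on.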

Since 2017, they said, we estimate that over 1 million WordPress websites, 1 million, have been infected by this campaign. Each year it consistently ranks in the top three of the infections that we detect and clean from compromised websites. So the other two of the top three come and go; this thing just hangs in there year after year after year. They said, last year, in 2022 alone, our external website scanner SiteCheck detected this malware over 141,000 times, with more than 67% of websites loading scripts from known Balada Injector domains. We currently have more than 100 signatures covering both front-end and back-end variations of the malware injected into server files and WordPress databases. So, I mean, this thing is just doing like everything it can. And I thought it was interesting that it leverages every known theme and plugin vulnerability.

So it's continuing to stay alive and to stay relevant, and in the top three, because it just keeps expanding its vocabulary as problems are found, you know, security vulnerabilities in themes and plugins. So they said, as you can imagine, referring to this massive infection campaign using generic terms has not been convenient. However, assigning a name to this malware was never at the top of our priority list. Our security researchers deal with dozens of new malware samples every day, so we typically don't dwell too much on well-known malware that's adequately covered by detection rules and only necessitates minor tweaks and adjustments when we spot a new wave. So essentially it's sort of fallen into the background, right? I mean, it's always there. It's been there since 2017. It's pervasive and prevalent, but it's like, eh, you know, they're focused on new stuff, and they've already got this existing thing pretty well covered.

So they explained that in late December last year, their colleagues at Dr.Web shared some valuable information that led us, they wrote, to finally choose the name Balada Injector. A post published on December 30th, 2022, titled "Linux Backdoor Malware Infects WordPress-Based Websites," caught our attention, and it was widely circulated in internet security blogs with titles like "Linux Malware Uses 30 Plugin Vulnerabilities to Target WordPress Sites." The article discusses two variants of the malware, Linux.BackDoor.WordPressExploit.1 and, the same thing, .2, and provides a comprehensive overview, including targeted plugins and indicators of compromise. The interest generated by this information, they wrote, prompted numerous inquiries from various sources, leading us to examine the post closely on New Year's Eve to determine if immediate action of some kind was required. To our surprise, we instantly recognized the described malware as being the ongoing massive campaign we'd been tracking for years.

Upon closer inspection, we found that the information provided was accurate, but the vulnerabilities, injected code, and malicious domains all dated back to 2019 and 2020. Nevertheless, the post offered interesting details about how campaign operators searched for vulnerable websites and injected malware. We soon obtained samples of the Linux binaries, which were written in Go, from VirusTotal, where other researchers had been creating collections. Most of the samples were compiled with debug information, and even a simple strings command provided quite insightful information: names of functions, string constants, paths of files included in the project. These files consist mostly of source code for various Go libraries providing additional functionality, such as conversion functions and support for internet protocols. However, the main malware code was located in the file C:\Users\host\Desktop\balada\client\main.go. The file path balada/client implied that the developer could refer to this software as the Balada client, and, they said, we know that the malware sends data to a command and control server.

So that would be a Balada server component. Whether our assumptions were correct or not, we adopted this name internally and think that it provides some convenience, yeah, when talking about a really long-lasting malware campaign. In many languages, balada means ballad. To avoid ambiguity, we added the word injector to reflect the nature of the malware campaign that injects malicious code into WordPress sites, hence Balada Injector. So that was sort of interesting, because they had never run across that particular aspect of the campaign, which they discovered thanks to the Linux-based malware that was using 30 vulnerabilities to inject Balada into WordPress sites. That allowed them to finally look at code that had been compiled with debug strings still in place. Those are basically the thing that allows a debugger, when it hits a problem and shows you where it is, to let you see the names of things rather than only the hex addresses of things, which makes much more sense.
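For anyone curious what that strings trick looks like, here's a rough Python equivalent of the Unix strings utility (a sketch; the sample bytes are invented around the path quoted above): it pulls runs of printable ASCII out of a binary, which is exactly where unstripped Go debug info, function names and source file paths, shows up.

```python
import re

def extract_strings(data: bytes, min_len: int = 6) -> list[str]:
    # Find runs of at least min_len printable ASCII characters,
    # the same heuristic the Unix strings utility uses.
    pattern = rb"[\x20-\x7e]{%d,}" % min_len
    return [m.group().decode("ascii") for m in re.finditer(pattern, data)]

if __name__ == "__main__":
    # Invented sample: binary junk surrounding an unstripped source path.
    sample = (b"\x00\x7f\x45binary junk\x00"
              b"C:\\Users\\host\\Desktop\\balada\\client\\main.go\x00\x01")
    for s in extract_strings(sample):
        print(s)
```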

So it was arguably a mistake for the bad guys ever to fail to strip debug strings out of their code, but they did fail. So anyway, this was, I thought, some interesting background about a long-lived, multi-year, you know, six years and counting, very aggressive and effective focused campaign against WordPress sites. And, you know, it's easy to become inured to big numbers. We're talking about big numbers all the time; you know, two to the 32 is 4.3 billion, and how many stars are in the sky, and so forth. But here, 1 million individually infected WordPress sites is a lot of sites. So, you know, I wanted to cover this now because I suspect that it won't be the last time that we're hearing about this quite determined Balada Injector WordPress malware, which shows no signs of giving up and going away, unlike other malware that is, you know, sort of a flash in the pan. Somebody is really dedicated to this.

Okay, so Mozilla has updated their Firefox Monitor data breach monitoring and alerting service, giving it a dedicated website. It's at, but the page you wanna go to is There you're greeted with the promise: we monitor all known data breaches to find out if your personal information was compromised. And then they said, here's a complete list of all of the breaches that have been reported since 2007. Okay, now, I'll get back to that list in a second. It's on the screen now, scrolling slowly, and it could probably scroll for the rest of the week before you got to the bottom of it. And that's kind of the point here. So, as you might expect from a webpage which boasts a complete listing of what amounts to every site breach ever, you'll be scrolling for a while.

Thankfully, the page is sorted from yesterday to ancient, meaning from most recent to least recent. And, I see the scrolling has increased in speed; hopefully it'll finish by the end of the podcast <laugh>. For those who don't know, Mozilla's Firefox Monitor site performs the same sort of checking that Troy Hunt's Have I Been Pwned site offers, where registered email addresses are cross-referenced against the database of all previous data sets obtained from website breaches. Troy's facility offers a feature that I appreciate as the owner of Once I authenticate my ownership and control over the domain, Have I Been Pwned will perform a wildcard search for any and all email addresses within the domain. So an asterisk, at sign,, you know, any email address that was presumed to be from, ever. But where breach notification is concerned, in my opinion, there's nothing wrong with having more than one such solution.

So what Firefox is doing, what Mozilla is doing, is welcome. This new dedicated Firefox Monitor breach listing page is a bit breathtaking to behold. For example, there were two site breaches on March 31st, a couple weeks ago: one of a site called Sundry Files, and, ironically, a site named Leaky Reality had a site leak, which is probably not the reality that they were intending to be leaking, whatever that is. In both cases, what was leaked were email addresses, IP addresses, passwords, and usernames. Before that was a breach on March 24th of The GradCafe, which lost email addresses, genders, geographic locations, IP addresses, names, passwords, phone numbers, physical addresses, and usernames. Whoops. Before that, on March 11th, the site Shopper Plus lost its visitors' dates of birth, email addresses, genders, names, phone numbers, physical addresses, and spoken languages. And on the same day, HDB Financial Services was doubtless embarrassed to have lost control of its clients' dates of birth, email addresses, genders, locations, loan information, names, and phone numbers.

So it really is quite eye-opening to scroll back through this listing to get a sense for just how continuous and frequent these breaches are. Just those I quoted were from the last couple weeks. You know, we don't talk about them here every week because it would be information overload, and because no one specific site breach would be useful to most, if any, of our listeners. But I strongly recommend that everyone who's listening to this podcast take a minute or two to check out Mozilla's page. And it certainly makes sense to get your email addresses registered there so that you will be notified if or when your name pops up in a breach. Up to five different email addresses may be registered per Mozilla account. And, you know, I have an account at Mozilla, as I'm a Firefox user. It's free and there's no downside to it, and to me that seems like a no-brainer, to get your email addresses registered there. Once again, is the site you want to go to.
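As an aside, checking yourself against breach data doesn't have to mean handing over the secret being checked. Here's a sketch of the k-anonymity idea behind Troy Hunt's Pwned Passwords range API (the function name is mine, and this only shows the local hashing step, not the network call): just the first five hex characters of the SHA-1 hash would ever leave your machine, and matching against the returned candidate suffixes happens locally.

```python
import hashlib

def pwned_range_query(password: str) -> tuple[str, str]:
    # Hash locally; only the 5-character prefix would be sent to the
    # server, which replies with every suffix sharing that prefix.
    digest = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    return digest[:5], digest[5:]

if __name__ == "__main__":
    prefix, suffix = pwned_range_query("password")
    print(prefix)   # what the API would see: just these five characters
    print(suffix)   # kept local, compared against the API's response
```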

Okay. We talked about this last week, or a variant of this. Joining Italy's weird ban on ChatGPT, which is what we talked about last week, we have Canada's privacy watchdog now launching an official investigation into OpenAI's ChatGPT service. The Canadian officials say they launched the investigation after a complaint alleging that the service was non-consensually collecting personal data, which is exactly what the Italians were worried about. And I suppose this is mostly a case of, maybe, a bright light suddenly being shone on something that had been going on unnoticed and unremarked upon for quite a while. You know, we talked about this last week, and I commented that, as far as we know, ChatGPT is sucking in publicly available data only, in exactly the same way that Google spiders the web and indexes all publicly available data.

So the thing that's a little bit unnerving about ChatGPT is that you can have a conversation with it, and, you know, that seems to put it in a different class than a more passive Google search. Anyway, it does appear that the industry's AI chatbots are gonna need to start paying a little more attention to whom they're chatting with, because there's been some, you know, saber rattling by privacy advocates saying, you know, you shouldn't be having inappropriate conversations with 13-year-olds. And now it becomes important to know the age of the person on the other end of the chatbot. And Jason, in honor of you, my co-host on the podcast, who's also one of the hosts of TWiT's All About Android podcast, we have three quick bits of Android news, two of which you are already up to speed on because, again, you are the Android bot.

So Samsung's line of 2019 smartphones formally reached their end of life last month and will therefore no longer be receiving security updates. March 2023 was the last security patch level for devices which include the Galaxy S10, S10+, and the Galaxy S10e. It's a good series of phones, I remember. And this is the point, too: this doesn't mean the end of the world, right? But it's just a note that if any of our listeners may still be using a four-year-old phone, and are also concerned about remaining current and, as I put it, hooked up to the life-support IV line of constant security patches, some of which are occasionally critical, it might be time for a hand-me-down of that device to someone who is less worried than a Security Now listener, and to think about upgrading your device to something current that will get reattached to life support so that you can go forth with confidence. Pawn that problem off to somebody else <laugh>. That's right.

Although I should mention this patch does contain the fix for the Exynos modem zero-day that you talked about in episode 915. Nice. That was March 21st, just a couple weeks ago. So that's a good, like, parting gift, I suppose. Exactly. It's like, well, we fixed that really serious thing at least; here you go. Yeah. Google also has a couple policy changes. They've moved to restrict the amount of personal data which loan apps may gather from Android users. Although this new policy took effect on April Fool's Day, that's April 1st of course, it's no joke. According to the new rules, loan apps can no longer access photos and user contacts. Google's policy change follows reports of some loan app makers engaging in predatory behavior, such as harassing borrowers and threatening to expose their private communications.

This is unbelievable. And photos, too, unless they paid their loans or agreed to higher interest rates. Wow. Yeah. Super scammy. Yeah, I don't think you want to get a loan from a phone app. That seems like <laugh> seems like a bad idea. And I mean, this isn't Google's only effort to kind of curb this. Not too long ago they put a ban on apps offering annual interest rates that exceed 36%. That's the kind of predatory <laugh> behavior that's going on with a lot of these apps. So don't look to, you know, these kinds of apps as the purveyors of respecting privacy and, basically, humanity. You know, it seems like a lot of times these kinds of apps can be all about, you know, how do we make the most amount of money, in whatever way is necessary.

And when it comes to data harvesting on devices, it's really easy to get access to a lot of this information that could be, yeah, useful for them to shame you into paying down the line. And it's just kind of scummy behavior. So, well, it's not gonna be Luigi coming to break your kneecaps, but still, no, you don't want the app to have this personal data. Yeah, no. Wow. No. So finally, Google also announced that in the future, all Android apps which allow users to create accounts of any kind will, by early 2024, also need to allow users to delete their accounts. And app makers must honor requests to delete accounts and any associated data, either directly through the app, as you're getting ready to delete it, or through a web dashboard.

The web dashboard requirement allows the data of an already-removed app to also be deleted without the app first needing to be reinstalled in order to request the deletion. And this new requirement, as I said, will enter into effect sometime in early 2024. So, you know, another good thing as Android continues to march forward. Yeah, absolutely. Without something like this, it was the Wild West as far as how you get this information deleted. You know, do you contact the developer directly if you have no way of otherwise canceling, you know, the account and everything? I guess the flip side to this is, and the question that I have is, okay, great, give them an easy way to delete this information. But, you know, again, the cynical side of me is like, but how do we know that random developer 5,000,042 out there has actually deleted that information?

What does it mean to delete your account, you know? Yes. Did you delete the reference to it, or did you actually delete and scrub the information? What does it mean? <laugh> The user interface now has a button. Yeah. Did you connect it to anything? Right. <laugh> You told us you don't want your account anymore. Noted. What's next? It's like the doorbell that doesn't ring anything. It's the button in the elevator that you push to close the door, and it's just there to pacify you more than anything. Uh-huh <laugh>. Okay. So, one last piece of non-security-related news, but something near and dear to my heart, and I know a lot of our listeners who are also SpinRite fans: the guy that I quoted last week, Matthew Hele.

His is the tweet that I shared reminding me that Microsoft's Exchange Server plans would have the beneficial effect of forcing older Exchange servers to upgrade, which I certainly acknowledge, although I still think it's slimy and extortion. He also shared a recent experience that he had with SpinRite. His tweet last week noted that he was a listener from the start and that he had also been a development tester of SpinRite. So Matthew wrote, he said, after hearing on Security Now from the user that ran Level 3 on an SSD, I noticed the same slow response at the start of my 500GB Samsung SSD 860 EVO. Matthew then showed the five-point benchmark performed by ReadSpeed. For those who don't know, ReadSpeed was an early offshoot of the work on SpinRite's new device drivers.

I created it in order to allow users to verify that the device drivers were working on their systems, and benchmarks are very popular. You know, the DNS Benchmark that I created in 2010 is the most downloaded thing that I've ever made, at like 2,000 downloads a day now, and I think I quoted something like eight-million-something downloads total, just cuz people wanna know about the speed of things. Anyway, ReadSpeed takes benchmarks at five locations on a drive: the beginning, the middle, and the end, and then at the one-quarter and three-quarter positions. So zero fourths, one fourth, two fourths, three fourths, and four fourths. He cited the before benchmark numbers: 457.25, 511.85, 520.95, 543.3, and 543.3.

So we can see that at the end of the drive, both of the final two spots were reading at 543 megabytes per second. But the beginning, at 457, was way slower. But wait, this is an SSD. It's solid state. It's like, you know, RAM, right? And then the next spot went from 457 to 511, still not 543. And the third spot was up to 520, and again, still not 543. Okay, so then he ran a Level 3 pass over this very nice 500-gigabyte, you know, half-a-terabyte Samsung 860 EVO drive. After doing that, he reran the ReadSpeed benchmark and, in the same tweet, reported his findings: 542.7, 542.4, 542.3, 542.5, 542.4. Flat access. Even the front, which had been at 457.25, is now at 542.7, the same speed as the end.
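To make the five-point sampling concrete, here's a rough Python sketch of a ReadSpeed-style measurement. This is my own approximation, not GRC's code, which works far closer to the metal; in particular, a real tool would use direct I/O so the OS page cache doesn't inflate the numbers.

```python
import os
import time

def five_point_read_speed(path: str, chunk_bytes: int = 1 << 20) -> list[float]:
    # Time one chunk-sized sequential read at the 0/4, 1/4, 2/4, 3/4
    # and 4/4 positions of a file or device image; return MB/s each.
    size = os.path.getsize(path)
    chunk = min(chunk_bytes, size)
    speeds = []
    with open(path, "rb", buffering=0) as f:
        for frac in (0.0, 0.25, 0.5, 0.75, 1.0):
            f.seek(round(frac * (size - chunk)))
            start = time.perf_counter()
            f.read(chunk)
            elapsed = max(time.perf_counter() - start, 1e-9)
            speeds.append(chunk / (1 << 20) / elapsed)
    return speeds
```

A flat set of five numbers indicates uniform read health; a drooping front end, as in Matthew's before run, is the tell.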

All of the drive is now responding equally fast. And then he finished, saying, even better, it seems to have stopped my frequent blue screens, which, he added in parens, still occurred after a Level 2 pass. And that makes sense, cuz that's a read-only pass. He said, after weeks of troubleshooting, I was at the point of seriously considering blowing Windows away and starting with a new copy. So glad to avoid that draconian effort. Okay, now, blowing Windows away and reinstalling it would have solved the problem. But a 60-minute Level 3 pass with the alpha release of SpinRite, which is what he used, certainly saved him a ton of time, and he didn't have to reinstall anything, because the rewriting of the drive's data was what was needed. What's happening inside our SSDs is closely related to the Rowhammer-style DRAM problems we keep being dragged back to through the years, which are exhibited by today's ultra-high-density DRAM.

In both cases, DRAM and SSD, it's necessary to appreciate that the density of the storage cells and the size of the feature details etched into the chips of both technologies are absolutely as small and tiny as they can possibly be. It's a competitive world. So if it were possible for those feature details to be any smaller, and thus for the devices to have a higher bit density while still functioning, they would be, right? I mean, no one's giving anything away. These things are as dense as they possibly can be. So just as with DRAM, where the engineers were pressured to push it perhaps a bit too far, SSD technology has a widely known problem known as read disturb. If you Google "read disturb", you'll find out all about it. I did just now, and the top result was a description from the ACM, the Association for Computing Machinery.

The description reads: read disturb is a circuit-level noise in solid state drives (SSDs) which may corrupt existing data in SSD blocks and then cause high read error rate and longer read latency. Again: high read error rate and longer read latency, which is exactly what Matthew was seeing. Error correction is not perfect. There's a limit to how much error can be corrected, and there's also a statistical probability that a particular set of bit errors will slip past undetected, which explains the blue screens that Matthew was occasionally seeing. The reason the front of Matthew's SSD was so much slower was that that's where the operating system files are stored. So they're being read over and over and over, and much less often, if ever, being rewritten. Over time, the electrostatic charges which are stored in the SSD's bit cells drift away from their proper values due to the read disturbance caused by all of the adjacent reading activity going on.
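Just to illustrate the mechanism in miniature, here's a toy model only; real NAND physics is far more complicated, and the drift rate here is invented. The point is simply that read-induced drift accumulates monotonically, while a rewrite reprograms the cell back to its ideal level in one step.

```python
import random

def simulate_read_disturb(reads: int, max_drift_per_read: float = 0.001,
                          seed: int = 1) -> tuple[float, float]:
    # Toy model (invented numbers): a cell is programmed to an ideal
    # level of 1.0, and each read of its neighbors nudges the stored
    # charge slightly away from that level.
    rng = random.Random(seed)
    charge = 1.0
    for _ in range(reads):
        charge += rng.uniform(0.0, max_drift_per_read)
    drifted_error = abs(charge - 1.0)
    charge = 1.0  # a read-and-rewrite pass reprograms the cell exactly
    return drifted_error, abs(charge - 1.0)

if __name__ == "__main__":
    before, after = simulate_read_disturb(100_000)
    print(before)  # accumulated drift after heavy reading
    print(after)   # zero again after the rewrite
```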

It got to the point where Matthew's SSD was always needing to work much harder to read some of the data whose bits had drifted further from their proper values. They always needed to be corrected through multiple rereads, varying thresholds, and more extensive error correction. GRC's ReadSpeed benchmark, which samples the read performance of those five regions with highly repeatable accuracy, saw this difference. And the cure for this was simple: simply read and rewrite the troubled data to reset the drifting bit values back to their proper states. But the problem with doing that, as we know, is that writing fatigues SSDs. It's not something you want to be doing all the time, and you certainly don't want to do it if you don't need to. Which brings us to the reason I've become so excited about what we discovered earlier in this work on SpinRite.

Which is that, given the proper technology, this can be detected and fixed in a highly targeted fashion. Unfortunately, the current SpinRite doesn't have the architecture to do this. DOS and real mode don't provide the sensitivity for the fine-grained read performance measurements we need. SpinRite will need to use the power of protected mode to detect the precise instant when writes are occurring in memory as the data is read back into that memory. And that's where I'm headed, as fast as I can get there, with SpinRite 7. But today's SpinRite 6.1, which is at least able to run at the maximum speed that mass storage media can go, can perform a full drive rewrite. That's one of the several things that SpinRite's Level 3 does. When benchmarking and a drive's flaky behavior begin to show that it might be needed, doing that, and maybe only on the front of the drive, which could have been done here, is still a lot better than waiting for the drive's data to degrade to the point of system failure. And in the future, SpinRite will be able to locate the exact trouble area and perform a selective rewrite of only those spots that need it. So SpinRite 6.1 first, and then it's on to SpinRite 7.

And Jason, we're gonna be on to the show's main topic, a dangerous interpretation, after you tell us about our final sponsor. Indeed, indeed. All right, that's coming up. Don't go anywhere; you're not gonna go anywhere <laugh>. This episode of Security Now is brought to you by Kolide. Kolide is a device trust solution that ensures unsecured devices can't access your apps. Now, Kolide has some really big news. If you're an Okta user, Kolide can get your entire fleet to 100% compliance. Kolide patches one of the major holes in zero trust architecture, and that's device compliance. And think about it: your identity provider only lets known devices log into apps, but just because a device is known doesn't actually mean it's in a secure state. In fact, plenty of the devices in your fleet probably shouldn't be trusted at all. Maybe they're running an out-of-date OS version, or maybe they've got unencrypted credentials just lying around.

If a device isn't compliant or isn't running the Kolide agent, it can't access the organization's SaaS apps or other resources. Plain and simple: the device user can't log in to your company's cloud apps until they've fixed the problem on their end, and that's it. For example, a device will be blocked if an employee doesn't have an up-to-date browser. Using end-user remediation helps drive your fleet, kind of pushes them in the direction of 100% compliance, without overwhelming your IT team, so your IT team can be focused on other important things. Without Kolide, IT teams have no way to solve these compliance issues or stop insecure devices from logging in. But with Kolide, you can set and enforce compliance across your entire fleet: Mac, Windows, Linux, doesn't matter. Kolide is unique in that it makes device compliance part of the authentication process. So when a user logs in with Okta, Kolide alerts them to compliance issues and then prevents unsecured devices from logging in. It's security that you can feel good about, because Kolide puts transparency and respect for the user at the center of their product.

To sum it all up, Kolide's method means fewer support tickets, yay, less frustration, yay, and, most importantly, 100% fleet compliance. That's what you really need to make sure everything is protected the way you want. Visit, and you can learn more by going to that URL. You can also book a demo. That's K-O-L-I-D-E,, and we thank them so much for their continued support of this podcast. All right, main event time. It's the main event: a dangerous interpretation. Tell us all about it, Steve. Okay, so anyone who's been following this podcast for a year or two, or, you know, 18 <laugh>, who's counting, will have encountered our frequent observation, although actually toward the beginning I think it probably wasn't yet an observation; it grew over time as we kept seeing the same problems occurring over and over and over, which is the inherent security danger created by interpreters.

Mm-Hmm <affirmative>, it's a recurring theme here because the act of interpretation means following instructions to perform some sequence of actions. What we typically see when an interpreter's security vulnerabilities are analyzed is that the interpreter's designer inevitably assumed that valid instructions would be received for their precious little interpreter to read. But it turns out that's not always the case, and when it's not, bad things happen. A research paper was recently submitted for presentation during the upcoming 32nd USENIX Security Symposium. That paper described the frankly amazing work that was done by a pair of researchers at the University of Texas at Austin with the help of another researcher at Oberlin College. The interpreter in question, which these three set their sights on, is one that every one of us is surrounded by, with multiple instances of it throughout our day and our daily lives. And that's video and thumbnail creation, specifically the ubiquitous H.264 video codec.

Although the title of their paper is intended to capture people's attention, the content of their paper shows that the title is not overblown. The paper is titled "The Most Dangerous Codec in the World: Finding and Exploiting Vulnerabilities in H.264 Decoders." And one thing I want to point out at the outset is that, almost without exception, most of the research papers we discuss here talk about hidden and unsuspected problems that were found and then fixed. Not so this time. The problems this tool is designed to unearth are generally well-known, but are too widespread and ubiquitous to have all been found. The authors offer several convincing proof-of-concept case studies, but I fear that since the tool is now open sourced on GitHub for anyone to use, it won't just be the good guys who are motivated to leverage its power for finding exploitable flaws across the industry's video decoding interpreters.

I have a bad feeling about this. Their paper leads with this abstract. They wrote: Modern video encoding standards such as H.264 are a marvel of hidden complexity, but with hidden complexity comes hidden security risk. Decoding video in practice means interacting with dedicated hardware accelerators and the proprietary privileged software components used to drive them. The video decoder ecosystem is obscure, opaque, diverse, highly privileged, largely untested, and highly exposed, a dangerous combination. We introduce and evaluate H two, I'm sorry, H26Forge, clever, right? It's the H.264 decoder, so H26Forge, a domain-specific infrastructure for analyzing, generating and manipulating syntactically correct but semantically spec-non-compliant video files. Using H26Forge, we uncover insecurity in depth across the video decoder ecosystem, including kernel memory corruption bugs in iOS, memory corruption bugs in Firefox and VLC for Windows, and video accelerator and application processor kernel memory bugs in multiple Android devices.

Okay, so when they're talking about a domain-specific infrastructure for analyzing, generating and manipulating syntactically correct but semantically spec-non-compliant video files, they're saying that because the bugs that might be, and turn out to be, resident within H.264 decoders might be buried deep in the multi-layer decoding process, the much more common and much easier practice of simply fuzzing won't work here. As we know, fuzzing is the common practice of throwing mostly random crap at something, you know, a codec or an API or whatever, to see whether anything that might be sent can result in a crash. And then, once the nature of that crash is understood, the next question is whether it might be exploitable to obtain a more useful outcome than a simple crash. The trouble is, H.264 decoding is so complex that random crap won't make it through the front door.
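
[Editor's note: To make that "front door" idea concrete, here's a toy Python sketch, not real H.264, just a made-up format with a magic number and checksum, showing why purely random fuzzing almost never exercises a decoder's deep logic, while structure-aware generation reaches it every time. All names here are hypothetical.]

```python
import os
import zlib

def toy_decoder(data: bytes) -> str:
    """A toy 'decoder' with front-door validation guarding deeper logic."""
    if len(data) < 8 or data[:4] != b"VID0":      # syntactic check 1: magic number
        return "rejected: bad magic"
    if zlib.crc32(data[8:]) & 0xFF != data[4]:    # syntactic check 2: checksum byte
        return "rejected: bad checksum"
    # Only structurally valid input ever reaches the "deep" decoding logic,
    # where the semantically triggered bugs would actually live.
    return "reached deep decoder"

# Random fuzzing: essentially every random input dies at the front door.
random_hits = sum(toy_decoder(os.urandom(32)) == "reached deep decoder"
                  for _ in range(10_000))

# Structure-aware generation: build the envelope correctly, mutate only the
# payload, and every crafted input reaches the deep logic.
payload = os.urandom(24)
crafted = b"VID0" + bytes([zlib.crc32(payload) & 0xFF]) + b"\x00\x00\x00" + payload

print(random_hits)           # almost certainly 0
print(toy_decoder(crafted))  # reached deep decoder
```

The gap here is tiny compared to H.264, where the "envelope" is an 800-page specification's worth of context-sensitive entropy coding, which is exactly why a dedicated generator had to be built.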

That's what they meant when they said that their forge would generate syntactically correct but semantically spec-non-compliant video files. In other words, it looks like, and is, an entirely valid video file. It follows all of the rules and can be processed properly, but it's also a Trojan horse file. It appears completely correct and valid so that it can get into the inner sanctum where the real vulnerabilities may lie. Here's a bit more from their paper to describe and set up the situation and the environment. They explain: Modern video encoding standards are a marvel of hidden complexity. As SwiftOnSecurity noted, the video-driven applications we take for granted would not have been possible without advances in video compression technology, notwithstanding increases in computational power, storage capacity, and network bandwidth. But with hidden complexity comes hidden security risk. The H.264 specification is 800 pages long, and it is the densest stuff you have ever read. And that's 800 pages long despite specifying only how to decode video, not how to encode it.

Because decoding is complex and costly, it is usually delegated to hardware video accelerators, either on the GPU or in a dedicated block on a system on a chip. Decoding video in practice means interacting with these privileged hardware components and the privileged software components used to drive them, usually a system media server and a kernel driver. Compared to other types of media that could be processed by self-contained sandboxed software libraries, you know, like rendering a webpage, the attack surface for video processing is larger, more privileged, and, they said, as we explain below, more heterogeneous, meaning each instance is different. On the basis of a guideline <laugh>, and I got a kick out of this because they're quoting something we talked about on the podcast, on the basis of a guideline they call the rule of two, the Chrome developers try to avoid writing code that does more than two of the following.

Parses untrusted input, is written in a memory-unsafe language, and runs at high privilege. Right? None would be good, one is okay, two at most, but never do all three. Never parse untrusted input in a memory-unsafe language at high privilege. And then these guys go on to say: the video processing stack in Chrome violates the rule of two, and so do the corresponding stacks in other major browsers and in messaging apps, because the platform code for driving the video decoding hardware on which they all depend itself violates the rule of two. So right, if you're gonna call upon a component that is violating the rule of two, that is doing all three of those things, none of which are good, then so are you. Because different hardware video accelerators require different drivers.
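
[Editor's note: The rule of two is simple enough to state as a little lint-style check. This is a hypothetical sketch, the dictionaries and property names are made up for illustration, but it captures the guideline the paper and Chrome's developers are describing.]

```python
# Chrome's "rule of two" as a toy check: a component may exhibit at most
# two of these three risky properties at once.
RISKY = ("parses_untrusted_input", "memory_unsafe_language", "high_privilege")

def violates_rule_of_two(component: dict) -> bool:
    return sum(bool(component.get(p)) for p in RISKY) >= 3

# A sandboxed media parser: untrusted input, unsafe language, but de-privileged.
sandboxed_parser = {"parses_untrusted_input": True,
                    "memory_unsafe_language": True,
                    "high_privilege": False}

# A hardware video decoder driver, as the paper characterizes it: the
# bitstream comes from anywhere, the code is typically C, and it runs in
# the kernel. All three at once.
video_driver = {"parses_untrusted_input": True,
                "memory_unsafe_language": True,
                "high_privilege": True}

print(violates_rule_of_two(sandboxed_parser))  # False
print(violates_rule_of_two(video_driver))      # True
```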

The ecosystem of privileged video processing software is highly fragmented. Our analysis, they wrote, of Linux device trees revealed two dozen accelerator vendors. There's no one single dominant open source software library for security researchers to audit, you know, as, for example, there was for OpenSSL, which got audited because there was only one of those. And the features, they write, that make modern video formats so effective also make it hard to obtain high code coverage testing of video decoding stacks by means of generic tools. Consider H.264, the most popular video format today. H.264 compresses videos by finding similarities within and across frames. The similarities and differences are sent as entropy-encoded syntax elements. These syntax elements are encoded in a context-sensitive way. A change in the value of one syntax element completely changes the decoder's interpretation of the rest of the bitstream.

Okay, now think about that. A change in the value of one syntax element completely changes the decoder's interpretation of the rest of the bitstream. This means that in working to trigger a flaw, that flaw might only present itself if the decoder is in a particular state, which is determined by everything that has come before. So it's clear why a sophisticated pseudo-video-file generator had to be built in order to find the bugs which only manifest when all of the planets are in proper alignment. Elsewhere in their paper they put their forge into context by explaining how it compares with other tools, which fall short, that the researchers attempted to use in the past. They said: H26Forge maintains the recovered H.264 syntax elements in memory and allows for the programmatic adjustment of syntax elements while correctly entropy-encoding the adjusted values.
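
[Editor's note: Here's a toy Python sketch of that context sensitivity, a deliberately tiny made-up format, not H.264's actual coding, where the very first element selects how every later bit is interpreted. Flip that one element and the identical trailing bits decode into something completely different.]

```python
def toy_parse(bits: list) -> list:
    """Toy context-sensitive parser: the first element selects how the
    rest of the stream is interpreted, loosely mimicking how one H.264
    syntax element can change the meaning of everything after it."""
    mode, rest = bits[0], bits[1:]
    out = []
    if mode == 0:
        # Mode 0: read fixed-width two-bit values.
        i = 0
        while i + 1 < len(rest):
            out.append(rest[i] * 2 + rest[i + 1])
            i += 2
    else:
        # Mode 1: read unary-coded values (count 1s until a 0 terminator).
        run = 0
        for b in rest:
            if b == 1:
                run += 1
            else:
                out.append(run)
                run = 0
    return out

trailing = [1, 0, 1, 1, 0]            # the same five trailing bits...
print(toy_parse([0] + trailing))      # [2, 3]  ...decoded as fixed-width pairs
print(toy_parse([1] + trailing))      # [1, 2]  ...decoded as unary runs
```

A fuzzer mutating bits blindly has no idea which interpretation regime it's in at any point, which is why state-aware tooling is needed to steer a real decoder into a particular buggy state.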

No prior tool is suited to this task. Most software that reads H.264 videos, for example OpenH264 and FFmpeg, focuses on producing an image as quickly as possible, so they discard recovered syntax elements once an image has been generated. Tools used to debug video files, like Elecard's StreamEye, do not allow the programmatic editing of syntax elements. They focus on providing feedback to tune a video encoder. H26Forge can be used as a standalone tool that generates random videos for input to a video decoder. It can be programmed to produce proof-of-concept videos that trigger a specific decoder bug identified by a security researcher. And it can be driven interactively by a researcher when exploring what-if scenarios for a partly understood vulnerability. At one point they begin to explain in some detail about H.264, that is, okay, so this is the video file format itself, and they say: the H.264 video codec was standardized in 2003, so it's 20 years old, by the International Telecommunication Union, the ITU, and the Moving Picture Experts Group, MPEG.

Because of this joint effort, this codec bears two names. It's called H.264 by the ITU and AVC when provided by MPEG. Then they explained: we default to H.264 when possible. They said: the specification describes how to decode a video, leaving encoding strategies up to software and hardware developers. Video encoding is the search problem of finding similarities between and within video frames and turning these similarities into entropy-encoded instructions. The H.264 spec describes how to recover the instructions and reproduce a picture. Okay? Then they get way into the weeds with the details of H.264 encoding. I imagine that Alex Lindsay would probably love it. At least there would be lots of terms that he would've encountered through his years of working with this video format. But getting into that here doesn't serve our purpose of wanting to understand what they found and how they found it.

Suffice to say that they explain about things like the YUV color space, 16-by-16 macroblocks, slices, prediction, deblocking, residues, profiles, levels, syntax elements, entropy encoding, encoded value organization, and then they finish by talking about additional H.264 features and extensions. It's quite clear that in order to write something like they wrote, this H26Forge, which is, by the way, now posted on GitHub for anyone to experiment with, these guys had to really and truly deeply understand an insanely complex data encoding system that's described by an 800-page specification. Okay, so now let's get to the crux of the matter, which is the decoding pipeline. Had I dragged everyone through that detailed description of H.264's components, what I'm gonna share next would actually make some sense, but it's not going to, because we don't need to really understand it.

But I still wanna share their overview of the decoding pipeline, because everyone will get a good sense of just how insanely complex the world's H.264 decoders are. We take 'em for granted. You don't want to have to write one. Here's just one paragraph that will give everyone a sense for the decoding process. They wrote: First, the decoder is set up by passing in an SPS and a PPS with frame- and compression-related properties. Then the decoder receives the first slice and parses the slice header syntax elements. The decoder then begins a macroblock-level reconstruction of the image. It then entropy-decodes the syntax elements and passes them to either a residue reconstruction path or through a frame prediction path with previously decoded frames. Then the predicted frames are combined with the residue, passed through a deblocking engine, and finally stored in the DPB, where the frames can be accessed and presented. Right, piece of cake.
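
[Editor's note: That pipeline can be caricatured in a few lines of Python. This is a heavily simplified, hypothetical sketch of the stages the paper names, SPS/PPS setup, slice header, intra vs. inter prediction, residue, a stand-in for deblocking, and the decoded picture buffer, not anything close to a real decoder.]

```python
# A toy model of the decode pipeline: frames are just lists of sample values.
def decode_frame(decoder: dict, slice_data: dict) -> list:
    header = slice_data["header"]             # 1. parse the slice header
    residue = slice_data["coded"]             # 2. entropy-decoded syntax elements
    if header["type"] == "I":
        predicted = [0] * len(residue)        # 3a. intra frame: no reference needed
    else:
        predicted = decoder["dpb"][-1]        # 3b. inter frame: predict from a prior frame
    frame = [p + r for p, r in zip(predicted, residue)]   # 4. combine prediction + residue
    frame = [min(max(v, 0), 255) for v in frame]          # 5. stand-in for deblocking/clamping
    decoder["dpb"].append(frame)              # 6. store in the decoded picture buffer
    return frame

# SPS/PPS would configure dimensions, profiles, etc.; here just a width.
decoder = {"sps": {"width": 4}, "pps": {}, "dpb": []}
f1 = decode_frame(decoder, {"header": {"type": "I"}, "coded": [10, 20, 30, 40]})
f2 = decode_frame(decoder, {"header": {"type": "P"}, "coded": [1, -2, 3, -4]})
print(f1)  # [10, 20, 30, 40]
print(f2)  # [11, 18, 33, 36]
```

Notice that step 3b makes every frame's meaning depend on the DPB's accumulated state, which is the structural reason a malicious stream can steer a decoder into a bad state long before the triggering input arrives.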

And then it does most of that again for the next frame, and so on. And by the way, that DPB stands for decoded picture buffer, which serves as both the output of the decoder and as an image reference for subsequently decoded frames to refer back to. Okay? So by now everyone should have a good sense for what's required to really and truly get to the bottom of vulnerabilities that may exist in any H.264-family codec. And I have to say that I was not super happy to read that they'd open sourced all of this work and dropped it onto GitHub. On the one hand, this will make it available to other researchers and also to vendors of these technologies, and all that's for the good. But that also means that bad guys also now have it, and they might well be more motivated to take advantage of it than anyone else.

Okay, so exactly how vulnerable are we, collectively, we who inhabit the world? They explain: A wide range of software systems handle untrusted video files, providing a broad attack surface for codec bugs. An important observation is that hardware-assisted video decoding bypasses the careful sandboxing that is otherwise in place to limit the effects of media decoding bugs. Popular messenger apps will accept video attachments in messages and provide a thumbnail preview notification. In the default configuration of many messengers, the video is processed to produce the thumbnail without any user interaction, creating a zero-click attack surface. There are many examples of video issues on mobile devices. Android has had historical issues in its Stagefright library, remember all that we talked about on the podcast, for processing MP4 files. Video thumbnailing and decoding constitutes an exploitable attack surface in Apple's iMessage application, despite the BlastDoor sandbox. Third-party messengers can also be affected.

In September, WhatsApp disclosed a critical bug in its parsing of videos on Android and iOS. Web browsers have long allowed pages to incorporate video to play through the video HTML tag, leading to multiple vulnerabilities in video decoding. For example, both Chrome and Firefox were affected by a 2015 bug in VP9 parsing. Later we describe a new vulnerability we found in Firefox's handling of H.264 files. Despite this track record, more video processing attack surface is being exposed to the web platform. Media Source Extensions, MSE, and Encrypted Media Extensions, EME, have been deployed in major browsers. The WebCodecs extension, currently only deployed in Chrome, will allow websites direct access to the hardware decoders, completely skipping over container format checks. Modern browsers carefully sandbox most kinds of media processing libraries, but they call out to system facilities for video decoding.

Hardware acceleration is more energy efficient. It allows playback of content that requires a hardware root of trust, and it allows browsers to benefit from the patent licensing fees paid by the hardware suppliers, meaning the browsers don't need to pay the fees because they're not using the technology, the platform is, allowing us to have free browsers. Video transcoding pipelines such as YouTube and Facebook handle user-generated content, which may contain videos that are not spec compliant. This could lead to denial of service, information leakage from the execution environment or other processed videos, and even code execution on their cloud-based platforms. Okay, what about hardware video decoding? That's a big issue too. They wrote: Video decoding in modern systems is accelerated with custom hardware, the media IP. Okay, and they use the term IP here a lot, IP as in intellectual property, not an IP address.

The media intellectual property, IP, included in SoCs, you know, systems on a chip, or GPUs is usually licensed from a third party. In one notable example, iPhone SoCs through the A11 chip include Imagination Technologies' D5500 media IP, as do the systems on a chip in several Android phones we study, with very different kernel drivers layered on top. IP vendors build drivers for their hardware video decoders, which are then called by the OS through their own abstraction layer. The drivers will prepare the hardware to receive the encoded buffers through shared memory. While Stagefright is Android's media engine, Android uses OpenMAX, OMX, to communicate with hardware drivers. OMX abstracts the hardware layer from Stagefright, allowing for easier integration of custom hardware video decoders. Other operating systems similarly have their own abstraction layer. The Linux community has support for video decoders through the Video4Linux API version 2.

Similar to OMX, it abstracts the driver so user-space programs do not have to worry about the underlying hardware. Windows relies on DirectX Video Acceleration 2.0, and Apple uses Video Toolbox. Intel also has its own Linux abstraction layer called the Video Acceleration API, and similarly, NVIDIA has the Video Decode and Presentation API for Unix. They said: we list 25 companies we found that have unique video decode intellectual property, IPs. Some of these may license from other companies or may produce their own video codec IP. The companies include providers for single board computers, set top boxes, tablets, phones, and video conferencing systems. Some video decode IP companies describe providing drivers and models for incorporating the IP into systems on a chip. We highlight all of these companies to showcase the heterogeneity of available hardware video decoders, and thus the potential for vulnerabilities to exist within or across products.

Okay, and finally in this paper we get to the threat model. They wrote: In this paper we assume an adversary who, one, produces one or more malicious video files and, two, causes one or more targets to decode the videos. Delivering videos to the user and having them decoded, with or without user interaction, is easy to accomplish in many cases. This is the minimal set of capabilities an adversary needs to exploit a vulnerability in decoding software or hardware. For information disclosure attacks, the adversary must be able to read frames of decoded video. For malicious videos delivered via the web, for example, this can be accomplished via JavaScript. Okay, so H26Forge was written by this team in around 30,000 lines of Rust code, and has a Python scripting backend for writing video modification scripts. It has three main components: input handling, syntax manipulation, and output handling.
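
[Editor's note: The three-stage shape the paper describes, entropy-decode into syntax elements, programmatically mutate them, then re-encode a valid bitstream, can be sketched as a toy harness. To be clear, this is NOT H26Forge's actual API; every function and field here is a made-up stand-in for the pattern, using a fake one-byte "format."]

```python
def entropy_decode(blob: bytes) -> dict:
    """Input handling: recover syntax elements from the bitstream.
    Toy format: byte 0 is a declared frame width, the rest is payload."""
    return {"frame_width": blob[0], "payload": list(blob[1:])}

def mutate(elements: dict) -> dict:
    """Syntax manipulation: programmatically adjust a recovered element,
    e.g. declare a width the payload can't actually back up, producing a
    syntactically valid but semantically non-compliant stream."""
    adjusted = dict(elements)
    adjusted["frame_width"] = 255
    return adjusted

def entropy_encode(elements: dict) -> bytes:
    """Output handling: re-encode the adjusted elements into a well-formed
    bitstream that any decoder's front door will accept."""
    return bytes([elements["frame_width"]] + elements["payload"])

original = bytes([4, 1, 2, 3, 4])
hostile = entropy_encode(mutate(entropy_decode(original)))
print(list(hostile))  # [255, 1, 2, 3, 4]: well-formed, but the width is a lie
```

The point of the round trip is that the output is always re-encodable and parseable, so the mutation survives all the syntactic checks and lands on the decoder's semantic assumptions.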

The input handling contains the H.264 entropy decoding. Syntax manipulation has functions for modifying recovered syntax elements or generating test videos, and output handling has the H.264 entropy encoding, which outputs videos. Okay, the best way to describe what they found would be to say, unfortunately, that everywhere they looked, they found problems. Many of them were readily exploitable. And here's some examples. They said: We found two bugs in the Apple D5500 kernel extension. The first bug enables a partly controlled heap memory overwrite. The second bug causes an infinite loop and leads to a kernel panic. These bugs have been confirmed, patched, and assigned CVEs by Apple. We verified that they can be triggered by a webpage visited in Safari.

Through reverse engineering of the H.265 decoder in the Apple D5500 kernel extension for iOS 15.5, we discovered what appeared to be a missing bounds check, potentially leading to a heap overflow in the H.265 decode object. To verify this, we modified H26Forge with enough H.265 tooling to produce a proof-of-concept video that causes a controlled kernel heap overflow. Unlike the previously described bugs, we were able to trigger this bug only when playing a video, not through preview thumbnail generation, meaning that in the other bugs, yes, showing a preview thumbnail was enough to take over the system. Apple assigned this bug CVE-2022-42850 and patched it in iOS and iPadOS version 16.2. By overriding a pointer with the address of a fake object that itself points to a fake virtual table, we could arrange to have any address of our choosing called in place of a legitimate destructor. We did not attempt to develop an end-to-end exploit chain.

However, Apple's assessment was that this bug, like the first bug, may allow, and you know, may allow an app to, quote, execute arbitrary code with kernel privileges. We tested generic videos on Firefox 100. Now, when they say generic, they mean videos they generically developed using their forge, and discovered an out-of-bounds read that causes a crash in the Firefox GPU utility process and a user-visible information leak. The issue arises from conflicting frame sizes provided in the MP4 container, as well as multiple SPSes across video playback. Note that both the crash and information leak are caused by a single video. To exploit this vulnerability, an attacker has to get the victim to visit a website on a vulnerable Firefox browser. We reported this finding to Mozilla, and it has been assigned a CVE and patched in version 105.

On VLC for Windows version 3.0.17, we discovered a use-after-free vulnerability in FFmpeg's libavcodec that arises when interacting with Microsoft Direct3D 11 video APIs. We found this by testing generated videos in VLC. The bug is triggered when an SPS change in the middle of the video forces a hardware re-initialization of libavcodec. If exploited, an attacker could gain arbitrary code execution with VLC's privileges. We reported the issue to VLC and FFmpeg, and they have fixed it in both. We tested the videos produced by H26Forge on a variety of Android devices with varying hardware decoders. In doing so, we found issues that span different hardware manufacturers, and more serious vulnerabilities in hardware decoders and their associated kernel drivers. To target a breadth of video decode intellectual property, we went with older, cheaper systems on a chip.

But note that some of our findings impact newer MediaTek devices as well, and the videos produced by H26Forge can be used to test new and future devices. In reporting the problems they discovered, most mainstream publishers like Apple and Google and Samsung and so on were quite responsive. But in some cases, when they needed to get in touch with a vendor who only OEMs chips to other major customers, they received no answer to their repeated attempts to make those companies aware of the vulnerabilities that exist in their intellectual property. They summarized their work finally by writing: We have described H26Forge, domain-specific infrastructure for analyzing, generating and manipulating syntactically correct but semantically spec-non-compliant video files. Using H26Forge, we have discovered and responsibly disclosed multiple memory corruption vulnerabilities in video decoding stacks from multiple vendors. We draw two conclusions from our experience with H26Forge. First, domain-specific tools like what they created are useful and necessary for improving video decoder security.

Generic fuzzing tools have been used with great success to improve the quality of other kinds of media parsing libraries, but that success has evidently not translated to video decoding. The point being, otherwise there wouldn't still be all these problems everywhere we looked. They said: the bugs we found and described had been present in iOS for a long time. We have tested that our proof-of-concept videos induce kernel panics on devices running iOS 13.3, released back in December of 2019, and iOS 15.6, released recently in July of 2022. Binary analysis suggests that the first bug we identified was present in the kernel as far back as iOS 10, the first release whose kernel binary was distributed unencrypted, so they were able to analyze it somewhat. We make H26Forge available on GitHub under an open source license.

We hope that it will facilitate follow-up work, both by academic researchers and by the vendors themselves, I hope so too, to improve the software quality of video decoders. Second finding: the video decoder ecosystem is more insecure than previously realized. Platform vendors should urgently consider designs that de-privilege software and hardware components that process untrusted video input. Again, should urgently prioritize de-privileging. Browser vendors have worked to sandbox media decoding libraries, as have messaging app vendors, with the iMessage BlastDoor process being a notable example. Mobile OS vendors have also worked to sandbox system media servers. These efforts are undermined by parsing video formats in kernel drivers. Our reverse engineering, sorry, of kernel drivers suggests that current hardware relies on software to parse parameter sets and populate a context structure used by the hardware in macroblock decoding.

It is not clear that it is safe to invoke hardware decoding with a maliciously constructed context structure, which suggests that whatever software component is charged with parsing parameter sets and populating the hardware context will need to be trusted, whether it is in the kernel or not. It may be worthwhile to rewrite this software component in a memory-safe language or to apply formal verification techniques to it. An orthogonal direction for progress, albeit one that will require the support of media IP vendors, would be to redesign the software-hardware interface to simplify it. The Linux push for stateless hardware video decoders is a step in this direction. Similarly, encoders that produce outputs that are software-decoder friendly, such as some AV1 encoders, help reduce the expected complexity of video decoders. Okay, so the entire industry has just received the gift of an extremely impressive, powerful and flexible new tool for generating correctly formatted yet subtly defective test videos for the purpose of finding and perfecting exploitable flaws in pretty much all current H.264 video decoders. I remain more than a little bit worried that bad guys are gonna jump on this and use it to locate powerful new vulnerabilities that can be turned into zero-click exploits, which they will obviously not be reporting to the device's manufacturer.
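
[Editor's note: The "missing bounds check" class of bug the researchers kept finding is worth seeing in miniature. This is a hypothetical toy, not the actual Apple bug: a header declares a count, and a decoder that trusts the declaration writes past its buffer, while a checked version verifies the declaration against reality and fails closed, the discipline a memory-safe rewrite enforces automatically.]

```python
def parse_unchecked(buf: list, scratch: list) -> None:
    """Trusts the attacker-controlled declared count blindly. In C this is
    a heap overwrite; here Python's runtime checks turn it into an
    IndexError, the memory-safe-language behavior the paper recommends."""
    count = buf[0]
    for i in range(count):
        scratch[i] = buf[1 + i]

def parse_checked(buf: list, scratch: list) -> str:
    """Validates the declared count against what the buffer actually holds
    and against the destination's capacity before copying anything."""
    count = buf[0]
    if count > len(buf) - 1 or count > len(scratch):
        return "rejected"          # fail closed on a semantic lie
    for i in range(count):
        scratch[i] = buf[1 + i]
    return "ok"

benign = [2, 7, 8]     # declares 2 elements, carries 2
hostile = [9, 7, 8]    # declares 9 elements, carries only 2
print(parse_checked(benign, [0, 0]))   # ok
print(parse_checked(hostile, [0, 0]))  # rejected
```

The hostile input is perfectly well-formed syntactically; only the semantic claim (the count) is false, which is exactly the kind of file H26Forge is built to mass-produce.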

The problem is, as always, one of motivation and economics. Not much imagination is required to picture the NSO Group pouncing on this to enhance their next generation of Pegasus smartphone penetration spyware. And we learned just last week that the NSO Group has a couple dozen also-ran wannabe competitors selling, or trying to sell, essentially the same thing. What was it, Greece, I think it was, that was entertaining bids from a couple dozen of these competitors. Everyone is gonna be in on this, that is, in on this H26Forge tool. And the problem is that they're highly motivated to use it, since there's a pot of gold at the end of the development of a new successful exploit. So I wonder who's gonna be that motivated over on the good guys' side. Perhaps, since the discovery of one of these would be of tremendous value, a substantial bug bounty would be awarded for a powerful zero-click remote device takeover exploit.

At the end of today's show notes, I have the link to the GitHub page, although anybody who heard it can find it: github.com/h26forge/h26forge. And also a link to their entire paper. Believe it or not, I skipped most of it even while sharing a bunch of it. The link to the PDF is also here, and I'm sure you can find it over on H26Forge. So anyway, as I said, most of the times that we're talking about a research paper, we're talking about something that was found and was fixed, and now it's safe to mention it. I guess these guys went ahead because it is probably never gonna be safe to mention this. This is really bad. I mean, when they talk about the heterogeneity of this, they're saying that it's not possible to fix the H.264 decoder, because there isn't the H.264 decoder.

Everybody's got their own, and so that means everybody's got their own bugs. And that means that it's the guys targeting the attacks who will be able to target a specific device, go find a bug that's known to be vulnerable in that device, and use it to exploit that person's handset. As I said, not good. Not good, Steve. Does <laugh> playing a video, is that inherent upon any of this? Actually, no. So it's not just a matter of turn off autoplay so <laugh>, you know, videos don't play automatically. You'd have to turn off thumbnails too, and as far as I know, that's not even an option. The exploit that Apple patched, because it was a remote code execution, was a thumbnail in iMessage, and just showing the thumbnail was enough to exploit the phone.

That is brutal. Well, okay, so the internet is lost. We're done. We'll just shut it all down. Go back to Costa Rica, Jason. It's better there <laugh>. I'll stay a little bit more sane when I'm not exposed to the security news. Don't leave the first week. Don't leave till next week. I need you. I won't. I'm gonna be here next week, and then I will go promptly back to the beach <laugh>, and all the security stuff can happen around me. Steve, love your breakdowns of this information. I gotta admit sometimes, and especially with this story, trying to follow along, like, okay, you know what, I don't totally and completely understand this information as much as the people who wrote the report, but I trust you <laugh>. And hopefully I've broken it down well enough.

Did you get some sense for it? Absolutely. And that's what you do so well. So Steve, thank you so much, as always, for doing this. Anybody who wants to follow Steve online and everything that Steve is up to, just go to You can find all sorts of Steve goodness there. SpinRite, of course, which he talked about, some excellent progress with SpinRite, which is the best mass storage recovery and maintenance tool out there. You can get your You can find audio and video of this show. Yes, you can find it on our site, but you can also find where you can find transcripts of the show as well. So pretty awesome stuff if you go there., and of course our website, for Security Now audio and video, like I said, published there, all the ways to subscribe.

We do this show live every Tuesday, 4:30 PM Eastern, 1:30 PM Pacific. That's 2030 UTC. So if you wanna watch live while it's being recorded, you can do that at, or just subscribe, like I said, go to the page or search your podcatcher of choice to do that. I should also mention Club TWiT, incredibly important for us and, you know, hopefully appealing to you: It's a way to get, for $8 a month, you know, access to our exclusive Discord for club members, ad-free content, so all of our shows with all the ads removed. You wouldn't even hear this ad if you were a Club TWiT subscriber. And then access to all sorts of extra content that you don't get outside of the club. So But we've reached the end of this episode of Security Now. I'm honored to be here this week and to return next week to hang out with you once again, Steve. So thank you, and have a wonderful week. We'll see you next time, Steve. Thanks Jason. All right, bye.

Jonathan Bennett (02:03:47):
Hey, we should talk Linux. It's the operating system that runs the internet, your game consoles, cell phones, and maybe even the machine on your desk. But you already knew all that. What you may not know is that TWiT now has a show dedicated to it, the Untitled Linux Show. Whether you're a Linux pro, a burgeoning sysadmin, or just curious what the big deal is, you should join us on the Club TWiT Discord every Saturday afternoon for news analysis and tips to sharpen your Linux skills. And then make sure you subscribe to the Club TWiT exclusive Untitled Linux Show. Wait, you're not a Club TWiT member yet? Well, go to and sign up. Hope to see you there.

... (02:04:29):
Security Now.

All Transcripts posts