
Tech News Weekly 284 Transcript

Please be advised this transcript is AI-generated and may not be word for word. Time codes refer to the approximate times in the ad-supported version of the show.

Jason Howell (00:00:00):
Coming up next on Tech News Weekly, it's me, Jason Howell, solo. Mikah is off this week. I hope you're enjoying your time off, Mikah. We start with Mia Sato from The Verge. She joins us to talk about generative AI on the voice front: AI Drake, AI Weeknd, and the legal implications of music that's not created by the artists themselves. That's coming up. Also, I talk about how Wikipedia is policing its site from the use of generative AI. Larry Magid from ConnectSafely.org joins us to talk a little bit about his harrowing virtual kidnapping experience and gives tips on how you can protect yourself from something like that. And Steve Gibson from grc.com talks all about Google's passkey implementation that actually just rolled out a few days ago. All that and more, coming up next on Tech News Weekly.

V.O. (00:00:53):
Podcasts you love from people you trust. This is TWiT.

Jason Howell (00:01:02):
This is Tech News Weekly episode 284, recorded Thursday, May 4th, 2023. This episode of Tech News Weekly is brought to you by Delete Me. Reduce enterprise risk by removing employee personal data from online sources. Protect your employees and your organization from threats ranging from doxing and harassment to social engineering and ransomware by going to joindeleteme.com/twit. And by Kolide. Kolide is a device trust solution that ensures that if a device isn't secure, it can't access your apps. It's Zero Trust for Okta. Visit kolide.com/tnw and book a demo today. And by Bitwarden. Get the password manager that offers a robust and cost-effective solution that drastically increases your chances of staying safe online. Get started with a free trial of a Teams or Enterprise plan, or get started for free across all devices as an individual user at bitwarden.com/twit.

V.O. (00:02:01):
Hello

Jason Howell (00:02:01):
And welcome to Tech News Weekly. This is the show where every week we talk to people who are making and breaking the tech news, the biggest tech news stories of the week, and sometimes also tech news stories that aren't the biggest but are just really interesting. I'm Jason Howell. Mikah Sargent is not sitting here next to me, otherwise he would've done part of that intro and you'd hear him already. But I'm happy he's not here, because he's taking some time off. And you know what, we all deserve a little time off from time to time. So, Mikah, I hope you're enjoying yourself. We miss you. We'll have you back next week. All right. That means that I've got a lot of interviews ahead of me. So let's dive right in and start with our first, as AI continues to dominate 2023's tech news landscape.

(00:02:42):
I mean, you've certainly, you know, heard it <laugh> a lot on this show and on the network. One thing is clear: generative AI is forcing everyone to rethink things that maybe we took for granted before. Things like copyright, ownership, everything in between. Right? One of the more recent examples of this is the AI voice cloning of hip-hop artist Drake and other popular artists. The Weeknd was part of that. These clones aren't authorized, necessarily, yet they still end up with tens of millions of views, tons of listens all over the web. And it's really confusing and interesting to see where this is all heading. So to help make sense of this tangled mess of rights and ownership is Mia Sato from The Verge. And it's great to have you here, Mia. Thank you for joining me.

Mia Sato (00:03:26):
Thanks for having me. I'm excited.

Jason Howell (00:03:28):
Yeah, I am too. And I've been really curious about this particular story. I mean, the initial kind of breakthrough of the Drake and Weeknd story, or the song, occurred a couple weeks ago, but this whole idea of AI authorship or AI kind of remixes, taking a voice and cloning it and turning it into something new, it's just, it's totally caught my fascination. So I guess let's start with Heart on My Sleeve, because I feel like AI music has happened prior, and most of the time the results are kind of like head-scratching. Like, okay, I guess an AI made that, I guess that's interesting, but nothing quite like Heart on My Sleeve, which actually got people really excited. Tell us a little bit about that to start off with.

Mia Sato (00:04:17):
Yeah. Yeah. So this song Heart on My Sleeve started circulating a few weeks ago on YouTube, TikTok, and it's sort of like, I would describe it as like a mid Drake and The Weeknd song. Like, it's not that good, but yeah, <laugh>, it's convincing enough, you know what I mean? Yeah. I think saying that it could pass for a Drake song is like kind of insulting, but it was circulating, and it sounds just like Drake. I think the Weeknd part sounds a little less like him, but it sounds just like Drake. And people were freaking out about it. It was, like, a convincing impersonation, I would say. And it had like a bunch of views on TikTok. It was on streaming platforms, Spotify, Apple Music. It was on YouTube, and it eventually did get pulled from streaming and from YouTube and from TikTok. But it gained a lot of steam. And I think that was the first time that listeners were, like, pretty freaked out about it. It was like, oh, this is really good. Like, there were a lot of comments that were like, this has to be illegal, you know? Mm-hmm <affirmative>. So there was like an automatic understanding that this tech is new. Like, it's gotten to a new place and wider accessibility that is like pretty crazy.

Jason Howell (00:05:30):
Yeah. I don't think any of the examples that have happened prior to this had the kind of recipe to do anything other than be a curiosity. This, of course, when you start pulling in, you know, the voices of Drake or The Weeknd or any, like, big-name kind of modern, you know, musician, vocalist, whatever, something that's recognizable instantly, it kind of has the potential, and obviously successfully so, to take it to this next level. Were people responding to this because, like, I have to imagine there's at least a small percentage of people that heard this and they were like, oh, it's a new Drake song, you know, or it's a new Weeknd song. Like, they didn't really think about it much beyond the fact that it is AI. But I'm sure the majority of people that were responding to this were responding to kind of the remarkability of the fact that this was a song that they didn't actually do, but it sounds close enough to take at face value.

Mia Sato (00:06:25):
Yeah. Yeah. And I think that confusion is like a big part of sort of the legal questions that are bubbling up. The weird thing with this was, okay, I have my own thoughts about what Heart on My Sleeve actually was. I feel like there are a lot of elements that were pretty fishy, like where it came from and who was behind it. They haven't really done anything since then. It's like kind of strange. Drake didn't post about it, even though the same weekend he was like complaining about someone putting his voice through an AI machine and making him rap Ice Spice. So it was like, why aren't you mad about this? Or why aren't you saying that you're mad about it? But I think what was freaky was like, oh, if someone posted it and was like, this is an unreleased Drake and The Weeknd, like, you know, like a demo, like maybe, you know, maybe, maybe. Yeah. Yeah.

Jason Howell (00:07:15):
Yeah. I think a convincing-ish enough. And I also, you know, I think you and I are on the same page. We're not saying that this is like the most convincing replica of these artists, and really confusing because it sounded just like, you know, the top-quality, top-tier stuff as their other material. But I mean, it's close enough to take more seriously than we've heard before. And again, with everything generative AI, where we're at right now is not where we'll be in a year or two. Things will progress, things will get better, and these systems will improve over that time. You mentioned just a few minutes ago that Heart on My Sleeve was actually successfully removed for copyright, but barely. In the article you <laugh> were like, it got pulled by the skin of its teeth. Talk a little bit about why it was such a, why it possibly could not have been pulled, I guess.

Mia Sato (00:08:06):
Yeah. So one thing that I found really weird about the song was at the beginning it had a Metro Boomin producer tag. And I thought that was very strange for like a song that was purportedly written by like a ghostwriter. And the story was that it was written by an industry ghostwriter who was, like, sick of not making money. So I was like, why would you put a Metro Boomin tag on there if it wasn't produced by Metro Boomin? And, you know, this is an original song. And that is what got it pulled from YouTube. We don't really know why it was pulled from the streaming services. It kind of came down like all at once. But it stayed up on YouTube for a while, and then finally it was taken down, and we did some reporting and we found out that it was the Metro Boomin tag that was the problem. So yeah, that's why I mean, like, it just barely, right? Mm-hmm <affirmative>, like, barely got that DMCA takedown. But it's weird to me that it wasn't re-uploaded without the tag. That would be interesting. Then what happens? Does YouTube pull this? You know, Google is working on its own generative AI projects. So is this a valid copyright claim, that even without the tag, there's something infringing with these tools? You know, that's a big question.

Jason Howell (00:09:28):
Yeah. Yeah. I mean, I think that's a huge question, right? Is, you know, what can we say about the likeness of somebody? Like, obviously The Weeknd or Drake didn't sanction this, you know, by singing it. They didn't grant it any permission whatsoever, but their voice was used on it. Is likeness enough to pull someone and say, hey, you're infringing? And if you're infringing, infringing on what, exactly? Is it the tonality, right? The voice? Is it <laugh>? Right? It's so confusing to know exactly where this is heading. Are we moving toward a future where someone's voice, whether it's used or replicated, might be enough to earn an owner rights to a song if they didn't create it?

Mia Sato (00:10:17):
It's weird because it's like, you can't copyright your voice. You can't copyright your style or your flow, or the way Drake says something. Like, that's not a copyright problem, right? And that's why the song is so deeply weird. It's like, well, there's no work that it was copying. But yeah, there is some case law about these issues. As I was reporting the story, I learned about a couple cases. There was one involving Bette Midler. She sued after a company got the rights to use one of her songs, but she didn't wanna perform it in the commercial, and they hired someone who sounded like Bette Midler to sing it. And she won that case. She won that case. Okay. That's

Jason Howell (00:10:57):
Interesting. That's fascinating. Yeah.

Mia Sato (00:10:59):
Yeah. And then there's another just like perfect case involving Vanna White from Wheel of Fortune. And she sued Samsung for literally building a robot, dressing it up in a blonde wig and a ballgown and necklaces and jewelry, and standing it in front of a game show board that looks like Wheel of Fortune. And she won that case also. It was like, this is part of my identity. Even though you didn't use my name, my face, this robot Vanna White is enough to sort of jog people's memories or identify her as, oh, that's Vanna White in this ad, that's supposed to be Vanna White.

Jason Howell (00:11:36):
Oh, wow. Yeah. That kind of makes it seem, and, you know, this of course, that's the physical representation, albeit as a robot. This is a robot voice <laugh> really at its core, you know, so it's very, very similar. There's a lot of similarities between these things. I think what's interesting to me is how, you know, artists will choose to react to this as we go forward. Because, I mean, we've seen this in the realm of digital photography as well. There are some artists that are like, you know what, we gotta stop this because it's stealing our work, it's gonna put people outta jobs, and blah, blah, blah. And then you've got the other artists that are like, wait a minute, this is just another tool that we can use to make better art. Let's embrace it. You know, it's kinda like the Napster example, where the record industry really wanted to, you know, shut down Napster because to them Napster was the problem. The problem wasn't Napster; the problem in reality was just that there was a new technology, a new way of experiencing music that was open, and you can't put that genie back in the bottle. How are artists responding to this?

Mia Sato (00:12:43):
I would say there's like a range of reactions. I think you see sort of similar reactions as, as you said, visual artists kind of worrying about how these models are being trained on their work, what elements of their work are visible in the output of the models, of the tools. And like I said, Drake was also, like, mad about this Ice Spice thing. But there are also artists who have been embracing them for a while. There's an artist named Holly Herndon, and I think like two or three years ago, a while ago, she created a voice model trained on her vocals. And there's a website that you can go to and upload short audio clips, and it will spit back a recording of whatever you uploaded, sung in her voice.

(00:13:29):
Hmm. Not everyone is able to profit off of it. She has sort of a whole system for doing that. But, you know, she kind of saw, I think, that this technology was rapidly improving and would soon be accessible to all sorts of people. And then the big one, I also wrote about this at The Verge, was Grimes tweeted last week, I think, that she would split royalties 50/50 with anyone who was using her voice, a voice model trained on her work and whatever. Like, I think the benchmark she said was, if it's viral, we will split profits with you. Which, that's not really clear to me what that means. Yeah, exactly. But she has set up, it looks like, some sort of system where you can upload stuff and create, you know, new recordings based on this model that she owns. So yeah, it's a range of things. I think some smaller artists too that I've talked with are thinking through how these tools can be used for their work. How, you know, could we create demos and use it sort of as a proof of concept for big musicians: this is what it would sound like if we had you on this track. So there's a lot of interesting ideas floating around, I think. Yeah.

Jason Howell (00:14:38):
Yeah. No question. And then one question that I have, as far as tools are concerned: we're so used to hearing about ChatGPT when it comes to written AI, and we're used to hearing about Stable Diffusion and Midjourney and those tools when it comes to visual generative AI. What are the tools that are being used here for this kind of, I mean, voice cloning is essentially what it is, but I feel like these tools get less light shined on them. It's not quite as sexy in the technology world, I suppose, or I don't know what it is. But are you familiar with those tools?

Mia Sato (00:15:15):
There are a bunch of them. And it seems kind of like they really vary in quality. Mm-hmm <affirmative>. So sometimes, you know, some models will be better than others. I spoke with someone who basically goes on Discord. There are several Discord servers that are all dedicated to people uploading the models that they've made of different artists, different people, and then anyone is free to use them. On an episode of The Vergecast last week, they used this website called Uberduck, and it looks pretty, like, you know, clunky. But they have a bunch of different models. I think they have a Grimes thing happening right now, and it's just, you know, we'll see how long that stays up. Mm-hmm <affirmative>, but there's stuff, you know, there's a bunch of different things

Jason Howell (00:15:58):
Yeah. Oh, of course there are. And then finally, as far as the artists who right now are, you know, putting out these AI-generated likenesses of, you know, famous musicians and then racking up tens of millions of views and streams and everything like that, is there money actually being generated there? Are the owners of that music actually making money when something like that goes viral on Spotify and it, you know, has Drake's voice on it?

Mia Sato (00:16:32):
Yeah. I don't think I've seen, like, hard evidence that people are making bank doing this. I did talk to someone who is making AI covers, and he also makes his own music. He's also a rapper himself, right. And he was able to translate all the viral attention on his AI covers into his own music. And like, you know, his subscribers doubled and people were streaming his music and really liked it, which is like a very funny twist to this, I think. Yeah. And like the ghostwriter Heart on My Sleeve song, that was up for a few days at least, I think, before it was pulled. And it had, you know, hundreds of thousands of streams. So yeah, I don't know that there's really, like, evidence that this is a huge monetary threat so far. Maybe we just haven't seen one of these songs truly break through such that, like, radios are playing it. But I think there is sort of an ecosystem already popping up of like, we're gonna teach you how to do this for the low, low price of, like, you know, $4.99 or something. I've seen a couple things, like people trying to sell these sorts of classes: how to, you know, here's how to make your own Drake song. So we'll see. I better

Jason Howell (00:17:37):
Believe it. Yeah. You better believe it. I'm sure if you go on YouTube and you do a search, how can I create an AI, you know, Drake song, I'm sure you're probably gonna get hundreds of results at this point. Someone's making money out there. Yeah. Whether they're making it on the actual songs or on teaching you how to do it, someone's making money. Well, Mia Sato, really appreciate you taking time and talking about this. I find this topic endlessly fascinating, and I'm really curious to see kind of how it develops over time. Of course, you wrote an excellent piece for The Verge, so people should check that out. If people wanna find you online, where are they gonna find you? Bluesky, Twitter, Mastodon, I don't even know anymore. <Laugh>.

Mia Sato (00:18:16):
Yeah, I'm on all the platforms, unfortunately. <Laugh> I'm on Twitter, I'm on Bluesky, I'm on Instagram. I'm on Mastodon, though I'm not very active. But yeah, you can find me all over.

Jason Howell (00:18:24):
Right on. Mia Sato, thank you so much, Mia. It's a pleasure. Thanks. I'll talk to you soon. Thanks.

Mia Sato (00:18:29):
Bye.

Jason Howell (00:18:29):
Bye. Alright, coming up we are gonna continue down the realm of AI. AI in one way or another has something to do with almost all of the stories, not the last story. So, at least if you don't like hearing about AI, you've got the last story to look forward to. But I promise you it's all very interesting stuff. So that's coming up next: how AI is impacting Wikipedia. Very interesting stuff to take a look at there. But first, this episode of Tech News Weekly is brought to you by Delete Me. Are you dedicated to protecting the data inside your company's network? Well, what about the executive and employee data outside of the network? That's super important as well. Specifically designed for the enterprise, Delete Me for Business makes it simple to remove executive and employee personal data available on the open web.

(00:19:20):
As you may know, bad actors utilize publicly available data online in social engineering attacks. And this includes data from sources like data brokers. This data is being effectively weaponized against executives and employees in ways security professionals may actually be overlooking. It's easy to access this employee personal data when it happens to be out there online. It's just super simple to find, right? The vulnerable data leads to potential harm. Things like doxing, harassment, social engineering, ransomware attacks, all of that increases with every vulnerable moment. So, who's impacted by this data being exposed? Well, turns out everybody. Executives and board members are targeted and harassed online by cybercriminals, by ex-employees who use their families' personal data to get to them. Executives have a 30% higher PII exposure risk than the average employee. Public-facing employees might actually have their home addresses and affiliations exposed online by angry customers and other bad actors as well.

(00:20:26):
And then individual contributors: things like personal email addresses and mobile phone numbers are used to socially engineer their way into enterprise systems. Delete Me actively monitors for and then removes personally identifiable information for your employees to reduce enterprise risk. So you can protect yourself and your employees and reduce risk when you're using Delete Me, in just five easy steps. First, employees, executives, and board members complete a quick signup. Then Delete Me scans for exposed personal information out there on the web. Opt-out and removal requests then begin; that fires off to kind of pull out any of this data that's found, to remove it, essentially. An initial privacy report is then shared and ongoing reporting initiated. So you can see the privacy report, you can see what's out there, and on an ongoing basis this is remediated. And then Delete Me provides continuous privacy protection and service all year long.

(00:21:28):
Delete Me is awesome. You know, when they first came on as a sponsor, I remember looking at the service and being like, what? I'm a public person. Like, I've been doing podcasting for, gosh, at this point, almost 20 years. Like, I'm out there in the public. Like, what do I wanna remove? Like, I've got nothing to remove. But then I thought about it, and I was like, well, wait a minute. Like, you know, I've got a family, I've got a home address. Like, there are things that I don't necessarily want everyone to have access to, and Delete Me was able to find them. I looked at the report and I was amazed at how many different places this kind of information just appears, you know, and has collected over decades at this point. It's really kind of crazy.

(00:22:10):
So they make it really easy to remediate all of that. Protect your employees and your organization by removing their personal data from online sources by going to joindeleteme.com/twit. That's joindeleteme.com/twit. And we thank Delete Me for their support of Tech News Weekly. Alright, so AI and Wikipedia. Wikipedia is not the only, you know, organization that's having to look at the potential influence and impact of generative AI. But Wikipedia is filled with information that is generated by humans. And it's turning out to be the case that Wikipedia is facing some of the similar kind of questions and potential pitfalls that a lot of journalistic outfits are. They realize, and we've talked about this in previous weeks, that using generative AI can help when it comes to creating text for the site or writing an article.

(00:23:13):
But as we know, it's never that easy, and it's never that perfect or without error. Vice wrote earlier this week about the many volunteers who maintain Wikipedia and this new onslaught of AI-generated content that's actually hitting the site. Wikipedia, of course, relies on these contributors for the content that hits the site to be accurate, for it to be vetted. That's super important. But contributors are torn on what to do in light of these new tools. And, you know, AI text often can seem accurate, right? Like, you get that AI text spit out from the tool and you read through it and you're like, man, this is super convincing. It does a really great job of making it seem that way, but it's not necessarily always steeped in fact. And you can call that whatever you want.

(00:24:02):
Some people call this hallucinations. I know Jeff Jarvis has a term for this that is not hallucination, cuz he says that humanizes the AI, which is not the case. AI is not human. So, you know, is it that machines are misinterpreting the words? Whatever the case may be, the systems have shown that they are prone to adding false information, to citing locations for those facts that don't even exist in some cases. So can these systems actually discern fact and fiction? I mean, you know, even humans have a hard time discerning fact and fiction sometimes, let alone a machine that you just kind of let loose on this massive mound of data and say, come back with something that's factual. You're not necessarily gonna get that, you know? And not to mention, these systems have been confirmed to have, you know, biases that make their way in, whether intentional or not, that can be communicated in the text they create.

(00:24:56):
So, like I said, it really mirrors the challenge that newsrooms are having right now when they're facing their own attempts to use AI to generate stories for their sites. They can create this text, but a human editor is then definitely needed to lay their eyes on the text that was created and really scrutinize what that output is. It feels like a shortcut, but there's still plenty of work that is put into it, if you're doing it properly, in my opinion, to scrutinize and vet that output. Cuz otherwise you're just handing over the keys, and you don't really know that things are factual. You don't know that it's accurate, that it isn't somehow infused with some sort of bias that sometimes can be difficult to detect or discern even, but it's there and it influences the words that you see.

(00:25:55):
So, in other words, perhaps maybe good for an initial draft, but something, you know, that's good for print? We certainly aren't there yet when it comes to these things. So, as for Wikipedia, you know, they rely on the factual nature of information on their site. That's the entire point: you go to Wikipedia, and we assume that there is a community there that has a vested interest in making sure that any information that hits Wikipedia is factual and is cited as such, and that there are other community members there that are always kind of tuned in to the information that hits the site, so that there's multiple points and multiple perspectives to look at it and point out causes for concern. Another challenge for Wikipedia is the training of large language models. And I didn't even think about this part, but at the core of Wikipedia is, you know, it's an open site.

(00:26:52):
It's open access. The core of its foundation is all about openness to everything that it offers and its contents and all that. But feeding the entirety of Wikipedia into large language models can potentially lead to a whole host of unknown risks. You know, for private companies like OpenAI, as they use that data for their own means, that goes kind of counter to the open nature of Wikipedia. Not to mention having the potential of becoming one giant feedback loop of data, with Wikipedia being at least partially generated by AI and then fed back into AI to generate even more. Like, at what point does that loop, you know, end? It just kind of seems continuous. And oh, as I say that out loud, it's really confusing. I don't know what that leads to. You know, does that lead to, you know, null <laugh> at some point?

(00:27:44):
Like, it just breaks itself? I'm not sure. But as of now, there is a draft policy that Wikipedia is basically working on for the site where they do require in-text attribution for AI-generated content. So if something's hitting the site and it's AI-generated, they have to attribute that within the text. You know, the idea of banning AI-generated text entirely, I don't know, it kind of feels like a fantasy to a certain degree. As we've talked about all these other facets, all these other directions that AI is impacting, there's the immediate reaction, the kind of tech panic, or the moral panic as Jeff Jarvis would say on This Week in Google, response of, nah, this is no good. We've gotta shut it off and make it not work anymore and not allow it onto these systems and everything.

(00:28:37):
I mean, I just, I feel like the cat's out of the bag at this point. So how do you work with it? How do you create systems that can work alongside it in a fair and meaningful way that works for everyone, instead of just closing our eyes and plugging our ears and saying la la la so that we don't hear <laugh> the threat that's coming? So really interesting story. Check it out on Vice. I hadn't really considered the aspects of generative AI, how that funnels into what we see as a trustworthy source of information that is Wikipedia. So, really cool stuff. Definitely check that out. All right, coming up we're gonna speak with Larry Magid, who has been on the network so many times. This time, though, he had a really, like, scary experience: virtual kidnapping and social engineering, which we were just talking about a few minutes ago.

(00:29:38):
Larry has a story to share with us and some insight into what's driving that and everything. And we're gonna get to that in a moment. But first, let's take a second to thank the sponsor of this episode, and that is Kolide. Kolide is a device trust solution that ensures unsecured devices can't access your apps. It just cuts 'em off. Kolide has some big news. If you're an Okta user, Kolide can get your entire fleet to 100% compliance. Kolide patches one of the major holes in Zero Trust architecture, that is, device compliance. When you think about it, your identity provider only lets known devices log into your apps. But just because a device is known doesn't mean it's in a secure state, right? It might be a known device and still be insecure.

(00:30:27):
In fact, plenty of the devices in your fleet probably shouldn't be trusted. Maybe they're running an out-of-date OS version, or maybe they've got unencrypted credentials lying around, a whole host of scenarios that you wouldn't want out there, right? If a device isn't compliant or isn't running the Kolide agent, it can't access the organization's SaaS apps or other resources. Plain and simple: the device user can't log into your company's cloud apps until they've fixed the problem on their end. And that's really all there is to it. So, in other words, a device is gonna be blocked if an employee doesn't have an up-to-date browser, as one example. Using end user remediation helps drive your fleet to 100% compliance without overwhelming your IT team. It just kind of happens naturally over time. Without Kolide, IT teams have no way to solve these compliance issues or stop insecure devices from logging in.

(00:31:20):
With Collide, you can set and enforce compliance across your entire fleet: Mac, Windows, and Linux. Collide is unique in that it makes device compliance part of the authentication process. So when a user logs in with Okta, Collide alerts them to those compliance issues and then prevents unsecured devices from logging in. So, like I said, it does it over time, it does it in the moment. It's security you can feel good about, because Collide puts transparency and respect for users at the center of their product. So to sum it up, Collide's method means fewer support tickets, less frustration, and most importantly, 100% fleet compliance. You can visit Collide today, go to collide.com/tnw to learn more. You can also book a demo while you're there. That's K-O-L-I-D-E, collide.com/tnw. And we thank them for their support of Tech News Weekly. All right, so some scams are more than just scams.

(00:32:20):
They're downright terrifying, right? Like, you get the email about the crypto deal in your inbox, and that's not gonna scare you as much as something like virtual kidnapping when you're a victim. And that's what makes it so effective. And so I thought we would get on Larry Magid, CEO and co-founder of ConnectSafely.org, also a columnist for the Mercury News, who wrote about this. You can find a few of his articles that we're gonna talk about today on both sites. He actually recently had his own experience with a virtual kidnapping scam, and it's just frightening. But welcome to the show, Larry, and thank you for coming on to talk about it.

Larry Magid (00:33:04):
Thanks so much, Jason. You know, I'm a little embarrassed by this, and I actually for a moment thought maybe I shouldn't write about it. Because, after all, should I really admit this, as someone who has written countless articles about cybersecurity and given speeches about scams, and probably talked about 'em on this network at some point? Sure. You know, it is a little embarrassing. But the reality is, as one security expert told me, these guys are very, very, very good at social engineering. And they used a series of techniques to not necessarily convince me that my wife had been kidnapped, but at least plant enough doubt in my mind that I not only took it seriously, but the emotional brain, the one that in some ways is stronger than my intellectual brain, began to go into some very dark places as this person was interacting with me in ways that, again, frightened me. So if you'd like, I can just briefly tell you the story about what happened a couple of Tuesdays ago.

Jason Howell (00:34:02):
Absolutely wanna hear it. But before that, I do wanna thank you for writing about it, because I think that things like this have the potential to not get talked about. Yeah. Because of that shameful aspect of, like, oh, well, I should have known better. But the reality is, when the scammers are able to tap into your emotional brain, it puts all the things, all the training, all the learning, all the thought, the "I know better," out the window, because it's like fight or flight takes over. So it's perfectly understandable that this would happen to anybody, and not just you. Right? So anyways, yeah. Set the

Larry Magid (00:34:39):
Scene. Yeah. Well, the other reality is, in 40 years of writing about technology, some of the most effective articles I've written are when I admit a mistake. Like one time I wrote an article about how I called tech support and then realized that I had unplugged the device. I mean, dumb me, but lots of people make those mistakes. Yes, it makes people feel better when an expert admits that they fall for these. But what happened that day was, the phone rings around, I don't know, 11:30 in the morning. Oh, first thing: my wife goes to San Francisco. We live in the Palo Alto area. She rarely goes to San Francisco. She rarely leaves the house, frankly. Neither one of us, you know, leaves the house that often. And when we do, she's usually not too far away. I know where she is. But at this particular time, she was in the city visiting a friend.

(00:35:21):
Okay? A call comes in around 11:30, and the first thing I heard is a crying, screaming, hysterical woman on the phone. Now, I definitely think it wasn't my wife's voice, but it could have been. I mean, even though that's not the way she presents herself, ever. Oh, I forgot to say, it came from a phone number that was nearly identical to her number. It wasn't exactly her number, but it was the same area code, the same prefix; a couple of the digits were the same. So I assumed it was her. I didn't look carefully enough to determine that it wasn't. I hear the crying. I said, Patty, what's the matter? What's going on? What's happening? And she just keeps crying, and I say, wow, whatever happened must be horrible, because she's hysterical.

(00:36:06):
And then on comes another voice, a male voice who says he's a police officer. And I immediately said, what's wrong? He said, well, I'll tell you in a minute. I said, no, come on, tell me what's wrong. I'm desperate to know what's wrong. And he says, I'll tell you, but I need to identify you first. I said, well, okay, sometimes cops do that. So he asked my name and I gave it to him. He asked to confirm Patty's name. I gave it to him. Cuz again, sometimes cops do things like that. And then he said, I'm actually not a police officer. I'm a member of a drug cartel, and we have your wife. We've kidnapped your wife. And he made up some cockamamie story about how she happened upon a drug deal, and she interrupted it and it caused him to lose money.

(00:36:47):
And I don't know at this point. All I know is I was beginning to panic, cuz I hear the voice, I saw the phone number, I knew she was in San Francisco. I think he repeated that she was in San Francisco. I'm not sure how he knew that. Maybe I said something, I'm not sure. So anyway, the call goes on and on, but immediately, as soon as I realized that something's going on, I put him on speaker and I dial 911 on my other phone. Fortunately, I have a desk phone as well as a smartphone. So the 911 operator was able to hear the entire call, and she actually put a police officer on the line as well during the call. Super smart. And during the call, he makes a number of demands, one of which is that I get in my car right away. Right now, get in your car.

(00:37:30):
And I said, well, I'm getting, gimme a second, I've gotta get my keys, you know, stuff like that. He wants to know what kind of car I have. And he also tells me to drive to a parking lot, a Walmart parking lot in San Jose, which didn't make much sense, because she had been kidnapped, I thought, in San Francisco. But okay, it's not that far away. The demand was $5,000 in cash, which also struck me as a very low amount of money for, you know, somebody in the position I was in. But okay, that's what he was asking for. Now, of course, I never had any intention of complying, because I knew that paying ransom, you know, first of all, I wasn't sure it was real. Mm-Hmm. <affirmative> I mean, my emotional side believed it, but intellectually I was still functioning mm-hmm.

(00:38:15):
<Affirmative> But, you know, it's just something I probably wouldn't do. And I say probably because one never knows what they would do if it was a real situation. But I did call 911. The call went on for about 11 minutes; I was able to see that in the call history. And then at one point, the 911 operator asked me, what was her phone number? And I didn't want this guy to know I was on the phone with 911. And he had, by the way, said, don't call anybody. Don't say anything. Don't call the police. Anything you do is gonna cause harm to your wife. So of course I didn't let him know I was on the phone, but I whispered her phone number into the phone. I think he heard me, cuz he kept saying, who are you talking to, in a very aggressive way.

(00:38:56):
And I said, oh, I'm not talking to anybody. I said, I'm just really nervous. I talk to myself when I'm nervous, I'm sorry, you know. But I think at some point he got wind, or figured out that I was onto him, or at least that I wasn't gonna comply. And he hangs up. Hmm. And at that point, I spoke to the 911 operator, and she said a police officer is on his way. A cop arrives. He tells me he thinks it might be a scam, but he says, we don't know for sure, we're taking it seriously. And he said, did you call your wife? And I actually hadn't called her yet, because I figured the 911 people had. And they had, and she didn't answer. But then I called: still no answer. A few minutes later I called again, and thank God I got an answer, and she was fine. So that was the story. And all I can tell you is that, you know, as you pointed out, it's one thing for somebody to try to scam you out of money. You can kinda laugh that off. Or even if you take it seriously, it's just money, and money is important, but it's not nearly as important as a loved one. Yeah. And when they get you on something like that, it impacted me in ways that I wouldn't have expected.

Jason Howell (00:39:59):
Yeah. And I'm sure, you know, you wrote your first article a week ago, and then we have a follow-up article, which we'll talk about in a second, cuz it adds a definitely deeper kind of technological aspect to this. But I'm sure you've probably talked to and heard from a lot of people in the last week. Like, how common are scams of this particular type? I don't know that I've known anyone that's encountered something like this, but it sounds frightening.

Larry Magid (00:40:25):
Well, of course, after the incident I started doing research, talked to experts, and did a lot of online research. They're relatively common. I mean, they're not as common as many of the scams. And then some articles actually say, and I don't know if this is true, but it's an interesting anecdote, that they actually sometimes come from Mexican prisons. I don't know how Mexican prisoners get their hands on the phones and all the things they need to carry out the scam, but apparently that's a thing that happens from Mexican prisons. I have no idea where this person was located. I also don't know, had I shown up with my $5,000 at the, yeah, Walmart, what would've happened? Would there have been a mule? Would he have had an accomplice? Or, one expert said, what he might have done is sent you into the Walmart to wire the money.

(00:41:09):
Cuz Walmart has a way to do that at every Walmart. And he may have demanded more money. So, I'm not sure where they were located, but it very well may have been, it probably was, outside the US. Yeah. But yeah, I heard from a lot of people. The other thing I heard from one expert, who I'm not able to identify just because of the job he holds: he said that the way this guy operated, he actually broke down the scenario for me. So the first thing: scare you, right? The call from my wife's phone, what I think is my wife's voice. Then the police officer, the authority figure, to kind of get you to trust this authority. Then shifting over to more fear and intimidation as the criminal, in his case claiming to be part of a drug cartel. And then asking me to get in my car. And why did he want me in my car?

(00:41:58):
Yeah. I said, well, maybe, you know, to start driving. But that's to begin the compliance process. So once you kind of get into it, there's a psychological compulsion to finish. Yeah. You know, to almost be part of the whole thing. You wanna resolve this. And so that was part of that demand. And of course, it's pretty obvious why he wouldn't want me to talk to anybody. But it was, you know, some of it was crude and not effective, but in total it got to me, I have to admit. Although it didn't get to me enough to actually, you know, shell out the money, but enough to scare me.

Jason Howell (00:42:28):
Yeah. Yeah. I mean, that leaves an impression and, you know, kind of lasting damage <laugh>. Oh,

Larry Magid (00:42:34):
As I point out, it may have done more damage, yeah, at least temporarily, short term, than if I had had to part with money. Money is replaceable. Yeah. Whereas the trauma that you get from this is something that's serious.

Jason Howell (00:42:47):
Yeah. And indeed writing

Larry Magid (00:42:47):
About it helped, and talking to people helped, for sure.

Jason Howell (00:42:51):
Well, yes. And so then today you also kind of expanded on this idea, and you published an article that can be found in the Mercury News and also at ConnectSafely.org, that looks at this through the lens of voice cloning and AI. Actually, the first story in today's show was about voice cloning as related to musicians and copyright and that sort of stuff. This is the same technology, totally different direction. Talk about how this kind of technology can, I mean, take something bad and make it even worse. I think people can probably use their imagination, but this is not good stuff going forward.

Larry Magid (00:43:25):
Right. Well, there were a couple of articles that I read in the brief, one from CNN where a woman got a call from what sounded like her daughter, and she said it sounded exactly like her daughter, same intonation, saying, you know, Mom, a bad guy has me, help me, help me. She was savvy enough to, you know, eventually figure it out and not pay a ransom. But then there was another story about a 39-year-old man whose elderly parents got a call that sounded exactly like him, and they were convinced it was him. They heard his voice and they did fall for it. And they managed to scrounge up thousands of dollars that they paid to this scammer. And I was thinking about this as technology gets better. I'm sure, Jason, you've played around with ChatGPT and other interactive chat agents.

(00:44:09):
I can imagine it. At first, I think what he's doing is taking a snippet of your voice and creating a sentence or two that sounds like the loved one. But I can envision, within a very short time, being able to carry out a conversation, yeah, with your virtual loved one. So one of the things I talk about in the article that's currently on the cover of ConnectSafely and on the Mercury News is things you can do, yeah, to verify whether or not this really is your loved one. Like, in advance, come up with a code word. Or if you don't have a code word, say, you know, what kind of car does Uncle Joe drive? Or where did we go on vacation last year? Ask to speak to your loved one, and you may be speaking to a bot that you think is your loved one, but ask them questions that only your real loved one would know. So, you know, I love AI. I'm a huge fan of the potential it has, but this is one of the many things we have to worry about as an unintended consequence of the rush to promote AI right now.

Jason Howell (00:45:12):
Yes, indeed. And creating systems and habits and behavior that we can use in, you know, situations to kind of counteract the potential of these systems, like generative AI, doing these things that we're not used to. Right? It's like we have to retrain ourselves to look at these things differently.

Larry Magid (00:45:34):
And, by the way, on the number one security flaw: I heard you do a spot a minute ago for, like, some technology that can help protect your devices. Of course that's extremely important. I have that on all my devices. But the most important computer flaw that you have to deal with is the one that runs in the software, yeah, up in your brain. Yeah. Social engineering is probably responsible for more hacks, if you wanna call it that, than any form of computer vulnerability. Yeah. Because if someone can get the person to override whatever technology they have to protect themselves, there's very little you can do, no matter how great the security systems are on your devices.

Jason Howell (00:46:08):
Yeah, absolutely. And that emotional core within us is like the hack in and of itself. If you can trigger the emotional core, you've got access to us, you know. And I'm

Larry Magid (00:46:20):
Gonna have to talk to my friend Peter Norton. Maybe Norton can come up with an antivirus that runs in your brain. <Laugh>.

Jason Howell (00:46:25):
There you go. Alright, well, keep us posted on that. Larry, I'm sorry that this happened to you. I'm really happy that you wrote about it, just because the more people know about this, the less likely it is to impact other people the way it did you. But I'm really happy everything's okay as well. So thank you for coming on. People who wanna read everything that you've written about this and more can go to ConnectSafely.org, of course, and can also go to the Mercury News. And if people wanna follow you online, Larry, are you doing Bluesky, Twitter,

Larry Magid (00:46:57):
Mastodon? I'm doing a bunch of stuff, but yeah, I'm still doing Twitter <laugh>. I still drive a Tesla, but I wanna put a bumper sticker on the back that says I bought it before Elon. Yeah, I

Jason Howell (00:47:04):
Know, right? I'm sure that bumper sticker exists. <Laugh>,

Larry Magid (00:47:07):
Right? No, you can get me at @larrymagid, wherever you are. And by the way, Jason, thanks so much for helping me amplify this story. Yeah. Cuz I do think it's important people be on the lookout.

Jason Howell (00:47:18):
Completely agree. And love all the work that you do with ConnectSafely. So thank you, Larry. Really great to talk with you, and we'll talk with you soon. Bye. Take care of yourself. All right. Coming up, the fun continues. We've got Steve Gibson actually joining me to talk about Google passkeys. I'm already, like, on the boat with the Google passkeys thing, so we're gonna talk with Steve about it. Should I be? Well, he's gonna answer those questions. That's coming up. But first, this episode of Tech News Weekly is brought to you by Bitwarden. Bitwarden is the only open source, cross-platform password manager that can be used at home, at work, or on the go, and is trusted by millions. Even Steve Gibson, who's coming up next, has switched over to Bitwarden. With Bitwarden,

(00:48:03):
all of the data in your vault is end-to-end encrypted, not just your passwords. So you protect your data and you protect your privacy when you're using Bitwarden, adding security to your accounts with really strong, randomly generated passwords for every account. Not just the same password across all accounts; you want a different one for every one of them, and the longer, more complicated, and more unique it is, the better. Go further with the username generator that allows you to create unique usernames for each account, or even use any of the five integrated email alias services. Bitwarden offers so much more than just being a password vault. Bitwarden has new features to announce in their latest release, if you hadn't heard about them already. Pretty cool stuff. There will now be an alert when Bitwarden's autofill detects a different URI than the saved vault item, such as when an iframe is used for the login process.

(00:48:59):
So you know where you're actually logging into. New users who create their accounts on mobile apps, browser extensions, and desktop apps can now check known data breaches for their prospective master password. That's super useful. Logging in with a device is now available for additional clients, so login requests can also be initiated from browser extensions, from mobile apps, and from desktop apps. And then, starting later this month, the Bitwarden application will begin alerting users if their KDF iterations are lower than the recommended default of 600,000 for PBKDF2. Argon2id is also an optional alternative KDF for users seeking specialized protection. And then finally, a stronger master password has a higher impact on security than KDF iterations. So, you know, as is always the case, you should have a really long, strong, and unique master password. You're gonna get the best protection

(00:49:56):
when you do that. You can share private data with coworkers securely. You can do that across departments, across the entire company. They've got fully customizable and adaptive plans as well. Bitwarden's Teams organization option is $3 a month per user. Their Enterprise organization plan is just $5 a month per user. And then individuals can always use the basic free account for an unlimited number of passwords. That's free for the basic account, unlimited passwords, but you can upgrade any time to a premium account for less than a dollar a month. Or you can bring the whole family over with their Family organization option to give up to six users premium features for only $3.33 a month. That's what we have in the Howell household. At TWiT, we are fans of password managers. Bitwarden is the only open source, cross-platform password manager that can be used at home.

(00:50:48):
It can be used on the go or at work. It's trusted by millions of individuals, teams, and organizations worldwide. Get started with a free trial of a Teams or Enterprise plan, or you can get started for free across all devices as an individual user at bitwarden.com/twit. That's bitwarden.com/twit. And we thank them for their support of Tech News Weekly. All right. Kind of, you know, similar to passwords, right? But this is different. This is passkeys, aimed to be kind of like the next generation, the next step, perhaps, away from passwords and towards a more secure future. Less complicated, although sometimes less complicated and more secure don't necessarily travel together. But anyways, are we witnessing the end of passwords right now? I don't know. Earlier this week, Google launched its support for passkeys. I'm on board. I've signed up.

(00:51:47):
I have thoughts. I imagine Steve Gibson has thoughts as well. Security Now's Steve Gibson joins me. Welcome, Steve. Hey guys, great to be with you. Yeah, good to get you on, and good to get you to talk about passkeys a little bit. Cuz when this news launched, you know, we've been hearing about passkeys for a while, and it's one thing to hear about this and kind of all that the technology professes to be able to accomplish, because replacing passwords is a big promise. But that's certainly how Google is spinning this. And I guess before we get into how Google is implementing this, let's talk about passkeys in general. Explain how they work, and, I don't know if you know, whether the initial spec is any different from the way that Google has rolled it out.

(00:52:38):
I know Apple rolled it out as well, but what can you tell us about passkeys in general? Okay, so I'm glad you asked. 50 years ago, 1973, I was at UC Berkeley, and in the computer science building there were scattered around these Hazeltine CRT terminals. And as somebody with an account, you would sit down in front of one and you would log in: you'd enter your username, and then you would enter your password. And in 50 years, until now, nothing has changed. So what was happening on a Control Data Corporation, you know, CDC 6700 mainframe 50 years ago is that when I entered my username, it would look up my account in the username file, and then when I gave it my password, it would look up to see if the password matched. And if so, it decided: oh, that's Steve.

(00:53:59):
That's the way everything has continued to work, even today, with a few exceptions, which we're gonna talk about as passkey support begins to be adopted. Nothing has changed. Now, we got a little bit smarter, because, you know, back when Yahoo was having a breach and actually losing everyone's passwords, exposing the passwords, we thought, okay, it's not a good idea to actually store the passwords. Let's store the hash of the passwords. And so that's where password hashing came in. So that instead of actually storing the password and doing a text comparison to see if the strings were the same, we would hash them, where the hash is sort of this gobbledygook which is derived from the password, but it's not the password itself. But you could still compare them, and if they match, you decide, okay, that's the right person.
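The hashed-password check Steve describes can be sketched in a few lines of Python. This is an illustrative sketch only, not any particular site's implementation; the password "monkey123" is made up, and the 600,000 PBKDF2 iterations simply mirror the default mentioned in the Bitwarden spot earlier.

```python
import hashlib
import hmac
import os

ITERATIONS = 600_000  # slow, salted hashing makes brute-force guessing expensive

def hash_password(password, salt=None):
    """Return (salt, digest); the server stores these, never the password."""
    salt = salt or os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return salt, digest

def check_password(password, salt, stored):
    """Re-derive the hash from the login attempt and compare in constant time."""
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return hmac.compare_digest(candidate, stored)

# Registration: the server keeps only the salt and the gobbledygook digest.
salt, stored = hash_password("monkey123")

# Login: the comparison Steve describes, against the hash rather than the text.
assert check_password("monkey123", salt, stored)
assert not check_password("Monkey123", salt, stored)
```

Note the point Steve makes next still holds: even this salted hash is a secret the server has to keep, because it can be attacked offline if it leaks.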

(00:55:00):
Okay, so the most important thing to understand about passkeys is that it's not about what the user sees. It is what the server that you're logging into and the client that you have are doing together. And this wasn't possible 50 years ago, because when I was sitting at that Hazeltine terminal, I didn't have a computer in front of me. I had a terminal that was just, you know, hooked with wires to the computer. Now, none of us are using terminals; we're using computers. It's a smartphone or a PC or a Mac or whatever. So we have a computer that we're using at our end, which allows a much more powerful communication to occur. Instead of just sending a username and a password, we can do crypto stuff. And so, even though it's taken, like, a long time, and even though we've been using smartphones and computers for, you know, decades, finally we are making a fundamental improvement in the conversation between the thing we're logging into, the server, the website, whatever, and us.

(00:56:33):
And so here's the difference. Everything until now has been about giving the server a secret to keep. When we gave it a password, that was our secret password, and we didn't want it to lose it. You know, we didn't want it to disclose it. We were trusting it to keep our secret for us. And to a similar degree, somewhat less so, when we give it a hash, we're again giving it a secret to keep that, you know, is a version of our password. And there are hash cracking systems mm-hmm. <Affirmative> that can reverse the hash, go backwards, and figure out what your password was. So we're still giving computers, remote computers, secrets to keep. And that's why you can't use the same password at multiple sites, right? Because if one site loses it, and it can be figured out, then that secret can be used elsewhere. That's what changes. Passkeys completely, forever change that, with the new system, which involves crypto, which requires that you have a computer on your side to work with the server using what's called public key cryptography.
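To make the hash-cracking point concrete, here's a hedged sketch of the simplest attack: hash every guess from a wordlist and compare against a stolen hash. The password and the tiny wordlist are made up for illustration; real crackers try billions of guesses per second against unsalted fast hashes, which is exactly why a stored hash is still a secret worth protecting, and why reusing the cracked password at another site is fatal.

```python
import hashlib

# An unsalted SHA-256 password hash stolen in a hypothetical breach (worst case):
leaked = hashlib.sha256(b"monkey123").hexdigest()

# The attacker "reverses" the hash by simply hashing guesses until one matches.
wordlist = ["password", "letmein", "dragon", "monkey123"]
cracked = next(
    (guess for guess in wordlist
     if hashlib.sha256(guess.encode()).hexdigest() == leaked),
    None,
)

assert cracked == "monkey123"  # the secret the server was keeping is recovered
```

Salting and slow KDFs raise the cost of each guess, but they only slow this loop down; they don't change the fact that the server is holding something attackable. That's the gap passkeys close.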

(00:57:57):
Once you have registered your passkey with the server, and it doesn't ever get the passkey, that's the first thing. In old school passwords, you're basically giving the server a secret. With the new system, you are creating a way for the remote server to verify that you have the secret. The secret never leaves you; you keep it in an enclave or whatever. It never leaves. All you're giving the remote site, the website, is the ability to verify. And real quickly: it generates a random number which has never been sent to anybody before it sends it to you. You sign that random number with your passkey, which is a secret, and you send back the signature. It verifies the signature. That's all it can do. It can't sign.

(00:59:12):
It can only verify. And when you think about that, it completely solves the problem. Every time you want to authenticate, it generates a big random number which has never, ever been used before. I mean, it could just be a big number that counts upwards. It doesn't even have to be secret. It just has to be unique. So you could just use a counter, because if it keeps counting up, it's never gonna do the same thing twice. And it gives it to you, you sign it, and you send back the signature. It verifies that the signature is correct using its side of the passkey. And then it says, okay, only the person who's holding the secret could have signed this random number that we've never asked anybody to sign before. And if the signature's correct cryptographically, it's gotta be him.
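The challenge-sign-verify loop Steve just walked through can be sketched with textbook RSA. To be clear, this is an illustration of the idea only, not the actual WebAuthn/passkey protocol (which uses standardized signature schemes and proper padding), and the tiny primes here are hopelessly insecure; they're chosen just so the example runs self-contained.

```python
import hashlib
import secrets

# --- client side, at registration: generate a keypair ---
p, q = 1000003, 1000033            # toy primes; real keys are enormously larger
n = p * q
e = 65537                          # public exponent, shared with the server
d = pow(e, -1, (p - 1) * (q - 1))  # private exponent: the secret that never leaves

def h(msg):
    """Hash the challenge down to a number the toy key can sign."""
    return int.from_bytes(hashlib.sha256(msg).digest(), "big") % n

def client_sign(challenge):
    """Only the holder of the secret d can compute this."""
    return pow(h(challenge), d, n)

# --- server side: stores only (n, e); it can verify but it cannot sign ---
def server_verify(challenge, signature):
    return pow(signature, e, n) == h(challenge)

# --- one login: a fresh never-used number, signed and verified ---
challenge = secrets.token_bytes(16)   # the big random number the server invents
sig = client_sign(challenge)
assert server_verify(challenge, sig)            # the genuine client passes
assert not server_verify(b"replayed", sig)      # the signature doesn't transfer
```

Notice what the server holds: just the verify-only half. If it's breached, the attacker gets nothing that lets them sign a future challenge, which is exactly the "no more secrets to keep" property.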

(01:00:12):
And that's it. So now, on top of that is what we see. So that's the underlying technology. And here's the other important thing to understand: what, 10 sites support it today? Like, this is the problem. Yeah. It is a fundamental change to everything which is on the internet today. So apparently, I was watching This Week in Google yesterday, where Leo and the gang were talking about passkeys because of this announcement. And apparently Apple doesn't support it yet, even though they're, you know, a supporter of it, as is Microsoft and Google. But PayPal supports it, and there are, you know, a handful of sites. So I think the most frustrating thing is that, unlike the use of password managers, which were, you know, candy coating around the old school username and password system, they just automated that approach from 50 years ago.

(01:01:20):
Passkeys are new, but that also means everybody has to change. And so, you know, the good news is not everyone's gonna have to write their own server-side support for it. Hmm. As new versions of servers come out, they'll have passkeys built in, and over time, as they get upgraded, that'll begin appearing. But you know, it's not tomorrow. As we know, this sort of change is difficult. It's time consuming. It's gonna take a while. So nothing really dramatically changes on the user side. And Google and Microsoft and Apple are selling this as, oh, you no longer need a password. Well, for us, you don't. You really don't now, right? You pick up your phone and look at it and it goes, oh, that's Steve. Or you put your thumb on the button and it's like, oh, okay, that's his thumb.

(01:02:22):
And then that can unlock your password manager. In fact, just yesterday, Dashlane announced, whoa, we're gonna use biometrics to unlock the Dashlane password manager, so you don't need a password anymore. Okay. And it works with everything, because it's still old school password management. Right? Right. So this is good, but it's premature, because you can't use it anywhere that you might want to. But this is the way big changes occur, right? They don't occur overnight; they occur incrementally. And if Google and Apple and Microsoft hadn't done this, we would've never gotten there. Right? Right. But so they have done it. And so everyone's gonna jump up and down. And there's, I think, over at passkeys.directory, a site where you can see, you know, the 10 sites which support it today. And there's also a page for voting for sites you wish would support it.

(01:03:21):
And everybody, you know, all the popular sites have votes, yeah, people wishing for them. Yeah. And so someday we'll get it. And when we do, it's gonna be good. Because what this solves is that we are no longer giving the websites we log into secrets to keep. And you no longer need to invent a new password for every site. All of that does go away. So you use your existing authentication to register your passkey with a site, and then that's all you need. So, you know, you don't need to be constantly creating new random gibberish, or have your secret formula for how to create passwords for a site, or any of that stuff. So eventually it's gonna be a really good thing. And our grandchildren will be saying, wait, what are passwords?

(01:04:23):
Yeah, that's so dumb <Laugh>. Why would you ever do that? <Laugh> P A S S W O R D, that was how you got into your account? Why'd you do that? Monkey123? That's stupid <laugh>. Well, one question I have about this. So I set it up on my Google account. So now my laptop is registered, my computer in the other office. When I signed in, it gave me this list of all of my Android devices that I've had, like, the past three or four years, and it said, these are all passkeys. I was like, oh, I did nothing to, like, authorize those to be passkeys. I don't know why it chose to do that. So that was a little interesting.
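Gibson's point above is that with passkeys a website never holds a secret, only a public key, and a login is just the device signing a fresh challenge. Here's a minimal sketch of that shape. It uses textbook RSA with tiny hardcoded numbers purely for illustration (wildly insecure; real passkeys use WebAuthn with ECDSA or Ed25519 keys, and `register`, `sign`, and `verify` are hypothetical names, not a real API):

```python
import hashlib
import secrets

# Toy RSA key pair (textbook primes p=61, q=53). Insecure on purpose:
# it only illustrates that the site stores the PUBLIC half of the key.
N, E, D = 3233, 17, 2753  # modulus, public exponent, private exponent

def register():
    """Device keeps (N, D) secret; the site stores only (N, E)."""
    return {"public": (N, E)}, {"private": (N, D)}

def sign(device_key, challenge: bytes) -> int:
    """Device signs the site's fresh challenge with its private key."""
    n, d = device_key["private"]
    h = int.from_bytes(hashlib.sha256(challenge).digest(), "big") % n
    return pow(h, d, n)

def verify(site_record, challenge: bytes, signature: int) -> bool:
    """Site checks the signature using only the stored public key."""
    n, e = site_record["public"]
    h = int.from_bytes(hashlib.sha256(challenge).digest(), "big") % n
    return pow(signature, e, n) == h

# Login: the site issues a random challenge, the device signs it.
site_record, device_key = register()
challenge = secrets.token_bytes(16)
sig = sign(device_key, challenge)
assert verify(site_record, challenge, sig)  # only the real device can do this
```

Note the design point Gibson makes: a breach of the site's database leaks only public keys, which are useless for logging in anywhere.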

(01:05:05):
When it comes to signing into a site or a service using my Google authentication, does the passkey then translate into that? Or is that separate? Or do we know? That's separate. And one of the sort of disappointing things about the approach, you know, many of your viewers may know that I spent seven years developing this thing called SQRL mm-hmm <Affirmative>, which is fundamentally the same technology, which is one of the reasons I understand it so well, as I wrote it. Mine only had one identity, one key, and each site received a variation of it that was separate, which prevented multi-site linkages. So I came up with a different system where you only had to create one key, and it could be used everywhere.
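The SQRL idea Gibson describes, one master identity from which each site receives its own unlinkable key, can be sketched in a few lines. This is a simplification assuming HMAC-SHA256 as the derivation function; the actual SQRL specification differs in its details:

```python
import hashlib
import hmac

def site_key(master_key: bytes, domain: str) -> bytes:
    """Derive a per-site key from one master identity.

    The derivation is keyed and one-way, so two sites comparing the
    keys they received cannot link them back to the same user.
    """
    return hmac.new(master_key, domain.encode(), hashlib.sha256).digest()

master = b"one secret identity, held only by the user"

# Each site sees a different, stable identity...
assert site_key(master, "example.com") != site_key(master, "example.org")
# ...and the same site always sees the same one.
assert site_key(master, "example.com") == site_key(master, "example.com")
```

That's the contrast with the passkey model he goes on to describe: one derived key per site versus a separately generated key per device per site.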

(01:06:03):
The passkey system, it's just gonna be key mania. You already have that big list. Every one of your Android devices has a different passkey. And so you can see them, and you can curate them, and you can delete them if you don't want them anymore. But that's the problem. Well, that's one of the problems. The other problem is cross-ecosystem. Apple has said they will use the iCloud Keychain and the iOS technology to synchronize the passkeys among devices. So in the Apple world, multiple devices will all be using the same passkey, it looks like. But I'm not sure that's the case. We'll have to see how that develops. Again, this is one of the reasons why I've been so careful to separate the technology, which I first described, from the implementation mm-hmm.

(01:07:06):
<Affirmative>, because you could have one passkey shared among devices, or you could have every device create its own passkey, which appears to be the route that Google has taken. For whatever reason, maybe so that you can delete a passkey from a device, but then your other devices still have their own mm-hmm <Affirmative>. Because if you only had one passkey shared among devices, see, and this is the point, all the websites out there will know you, sort of indirectly, by that passkey. So maybe it's better to give every device its own passkey. Now, you do then have to register each of those devices independently with every website that you go to. But you can do that too. There is a system where the site can display a QR code, and you allow the phone to see the QR code, and then they perform the authentication.

(01:08:08):
And then either you're using your phone to log in to that site once, or, because all of the web browsers now also support passkeys, you can have your web browser create another passkey, which it will then store, so that now you can just log in with your web browser, assuming that you authenticate to it. So if this sounds confusing, you're getting the message. Yeah. Because, you know, it is. It's like, we have this new technology, and all it lets sites do is verify your identity. That's the good news. What we do with it then is kind of gonna have to evolve over time. Mm-Hmm <affirmative>. Yep. And I think it's important for companies like Google and Apple and Microsoft, like you said, to put their support behind this.

(01:09:08):
Hopefully that's a signal to others that it's okay. It's some incentive. Yes, some incentive. But I think we're a long way away from any sort of mass adoption. Like you say <laugh>, the saddest thing is when, you know, it's like, the end of passwords! Yeah, for five sites. Okay, but you still need 'em for everything else. And who knows how long we're gonna need them for everything else. There will always be those stragglers, I'm guessing, those sites that never, you know, update. Jason, I guarantee you there are routers in closets that have not had their firmware updated in 15 years mm-hmm <Affirmative>, which says we are never gonna get rid of passwords there. As long as there's a server somewhere that hasn't been updated, that no one cared about updating, you're still gonna have to use a username and password mm-hmm

(01:10:07):
<Affirmative> to talk to that. Mm-Hmm <affirmative>. You know, I don't mean to splash cold water on this. Oh, my screen blanker just <laugh> kicked on. I don't mean to splash cold water on everybody, but this is the beginning of a big, good thing. But it's just the beginning <laugh>. It's just the beginning of a long, decades-long, you know, march. And so when you go to sites, you'll begin to see over time, oh, look, they support passkeys, so I can, you know, create a passkey for the site. Yeah. I'm not in a big hurry, because, well, first of all, I'm not an early adopter. I'm sitting in front of Windows 7 right now, which works just fine.

(01:11:03):
Although I do have my routers updated <Laugh> <laugh>. Yeah. I think this could all take a while to settle down, and we'll see how things work. One of the most annoying problems is that while Apple will synchronize among their devices, and apparently Google will be doing some sort of backup to the Google cloud, and Microsoft will, you know, have its thing, we are in this walled-garden mode. There's no indication that they're gonna be providing mm-hmm <Affirmative>, you know, cross-ecosystem synchronization. Yeah. They don't want that. Right. They'd rather keep you within their own world. So what that means is there will be this proliferation of passkeys, every device, every different system. And so one of the reasons it's gonna be a big change for servers is, right now you have an account, a username, a password.

(01:12:07):
Well, in order for passkeys to work, the servers are gonna have to be able to hold any number of passkey identities per user, because every device that a user uses potentially has its own passkey. Mm-Hmm <affirmative>. So it's a big change, as we say, on the back end, a mountain of information on the server side as well. Wow. Well, I'm still very eager to see how this goes. You haven't totally doused me with, you know, with water and put the fire out, but it's a little dimmer, maybe a little bit dimmer. But regardless, I'm using it, because I've signed up for it. So at least there's that interaction with it now. It's kinda like that, Jason, we're in the stage of getting used to it. You're an early adopter. Yeah.
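The server-side change Gibson describes, moving from one password hash per account to any number of passkey credentials per user, can be sketched as a small credential table. This is a hypothetical in-memory model (the names `CredentialStore`, `register`, `revoke` are illustrative, not any real backend's API):

```python
from collections import defaultdict

class CredentialStore:
    """One user -> many registered passkey credentials (one per device)."""

    def __init__(self):
        self._creds = defaultdict(dict)  # user -> {credential_id: record}

    def register(self, user: str, credential_id: str,
                 public_key: bytes, device: str) -> None:
        self._creds[user][credential_id] = {
            "public_key": public_key,
            "device": device,
        }

    def lookup(self, user: str, credential_id: str):
        """Fetch the public key to verify a login assertion against."""
        record = self._creds[user].get(credential_id)
        return record["public_key"] if record else None

    def revoke(self, user: str, credential_id: str) -> None:
        """Delete one device's passkey; other devices keep working."""
        self._creds[user].pop(credential_id, None)

    def devices(self, user: str):
        return sorted(r["device"] for r in self._creds[user].values())

store = CredentialStore()
store.register("jason", "cred-1", b"pk-phone", "Pixel phone")
store.register("jason", "cred-2", b"pk-laptop", "laptop")
store.revoke("jason", "cred-1")            # lose or retire the phone
assert store.devices("jason") == ["laptop"]
assert store.lookup("jason", "cred-2") == b"pk-laptop"
```

This also shows why curation matters in the passkey model: revoking one device's credential leaves every other device's login intact, which a single shared password could never do.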

(01:12:56):
And we need you to be out there. If there's some arrows in your back, that's okay. You've got some friends who will pull 'em out. All right. And, you know, you just keep trucking forward. Thank you, Steve. Steve, always great to get the chance to talk to you. Usually it's when I fill in for Leo on your show, so it's nice to get you on my show for a change. Steve Gibson, find him anytime at GRC.com. Of course, Security Now, twit.tv/sn. I'm sure you're gonna be talking about this next Tuesday on Security Now. So Steve, absolutely, always a pleasure. Thank you. And have a wonderful afternoon. Appreciate you. All right. Take care. And we have reached the end of this episode of Tech News Weekly. We do this show every Thursday, so you can subscribe to it, and then you don't even have to worry about what day it comes out, cuz it's just gonna be delivered to you via the magic of RSS.

(01:13:46):
That's twit.tv/tnw. Also, don't forget Club TWiT, twit.tv/clubtwit. If you want all our shows ad-free, you can check out Club TWiT. For just $7 a month, that gets you every TWiT show with no ads, gets you exclusive TWiT+ feed content, extra content, pre- and post-show discussions. We've got Hands-On Windows, Hands-On Mac, the Untitled Linux Show, Stacey's Book Club, Home Theater Geeks, all of these extra shows, these interviews that Ant Pruitt is managing for the platform. It's just incredible. We've got so much stuff going on right now, as well as a members-only Discord channel. That's twit.tv/clubtwit. There are also family plans. So just go there, twit.tv/clubtwit, you can see all the information and subscribe, and you help us directly as a network when you do that. We really, really appreciate it. Big thanks to John, John Burke, everyone here at the studio behind the scenes helping us do this show each and every week. Without you, we would not have a show. And without you, dear listener and viewer, we would not have a show either. So thank you for watching. Thank you even more for subscribing, and we'll see you next time on Tech News Weekly. Bye, everybody.

Ant Pruitt (01:15:02):
Hey, what's going on, everybody? I am Ant Pruitt, and I am the host of Hands-On Photography here on TWiT.tv. I know you got yourself a fancy smartphone, you got yourself a fancy camera, but your pictures are still lacking. Can't quite figure out what the heck shutter speed means? Watch my show, I got you covered. Want to know more about ISO and the exposure triangle in general? Yeah, I got you covered. Or if you got all of that down and you want to get into lighting, you know, making things look better by changing the lights around you, I got you covered on that too. So check us out each and every Thursday here on the network, at twit.tv/hop, and subscribe today.

 
