Tech News Weekly 310 Transcript

Please be advised this transcript is AI-generated and may not be word for word. Time codes refer to the approximate times in the ad-supported version of the show.

0:00:00 - Jason Howell
Coming up next on Tech News Weekly. I'm Jason Howell. I talk with Eric Geller from the Messenger. We talk all about President Biden's artificial intelligence executive order. What's in there and specifically, how does it relate to security with AI?

0:00:16 - Mikah Sargent
And I am Mikah Sargent. I talk to Reed Albergotti of Semafor about Microsoft's breakthrough research into smaller large-language models.

0:00:25 - Jason Howell
I read a pretty interesting article by Taylor Lorenz. Usually when she writes something, it's pretty interesting. This one focuses on where younger adults are getting their news, and it's not the traditional outlets that you're probably used to hearing about.

0:00:39 - Mikah Sargent
And an unfortunate story reveals that the law needs to catch up with AI, after students at a New Jersey high school discovered that faked photos of them had been shared with their classmates. These were nude photos, unfortunately. So we learn about where the law is and kind of where it needs to go next, all that coming up on Tech News Weekly.

0:01:13 - Jason Howell
This is Tech News Weekly Episode 310, recorded Thursday, November 2nd, 2023: Understanding Biden's AI Executive Order.

0:01:23 - Mikah Sargent
This episode of Tech News Weekly is brought to you by ExpressVPN. Stop letting strangers invade your online privacy. Protect yourself at

0:01:35 - Jason Howell
And by our friends at ITProTV, now ACI Learning. IT skills are outdated in about 18 months. Launch or advance your career today with quality, affordable, entertaining training. Individuals, use code TWIT30 for 30% off a standard or premium individual ITPro membership at

0:01:56 - Mikah Sargent
And by Bitwarden, get the open source password manager that can help you stay safe online. Get started with a free Teams or Enterprise plan trial, or get started for free across all devices as an individual user at. Hello and welcome to Tech News Weekly, the show where every week we talk to and about the people making and breaking the tech news. I am one of your hosts, Mikah Sargent, and I'm the other guy.

0:02:22 - Jason Howell
Jason Howell here to talk a little AI. Actually, today's show has a good amount of AI.

0:02:27 - Mikah Sargent
It does as it often does these days, because the world has a good amount of AI, right, that's true.

0:02:33 - Jason Howell
Soon, this planet is going to be completely, like, the core of planet Earth will be filled with AI. Basically where we're headed. You may have asked for AI regulation. Maybe you didn't, but if you did, you got it, at least sort of, about as close to AI regulation here in the US as we've gotten so far. President Biden issued a pretty big executive order on artificial intelligence. It aims to address many of the risks while hopefully, I think, still, you know, encouraging some sort of innovation in the space here in the US, and there was a big focus also on security. So we've got Eric Geller, who wrote about the 111-page executive order for the Messenger, here to talk all about it.

Welcome, Eric, it's good to see you. Good to see you too. Thanks for having me. Yeah, absolutely. So before we talk about security, let's maybe just, you know, get some sort of grounding on the basics here. The White House says this is, quote, the most sweeping actions ever taken to protect Americans from the potential risks of AI systems. What are some of these risks that they're calling out here, that they're really focusing on? 111 pages is a lot of ground to cover, but what are your thoughts there?

0:03:50 - Eric Geller
It's a wide range of risks.

So we're talking about things like hackers potentially using AI to improve their cyber attacks, which is something that I focused on.

But we're also talking about things like protecting the privacy of folks' data when it's getting used to train these algorithms, and we're talking about things like algorithmic bias, so making sure that, if you're using AI to recommend medical treatments, you're training that AI on a wide set of patient experiences so that it's not recommending something that isn't going to work for you depending on your ethnicity, or something like that. So AI is just now emerging as an issue that, yes, has a lot of potential, but it could also sow a lot of chaos in a lot of different areas, and one of the things that the executive order tries to do is hit each of those in a different section and talk about how the administration wants different agencies to work on them, not necessarily solving those problems. I don't think the AI EO contemplates solving all of these problems. What it does address is, let's get our hands around the scope of the problem, and from there we can work with the companies to figure out specific solutions.

0:04:54 - Jason Howell
Yeah, and this really isn't the first step that we've seen the White House take in focusing its attention on AI. It was just a year ago that they laid out their AI Bill of Rights. I don't know really what came of that. I'm sure that probably directly informed what we're seeing here and, like I said, 111 pages, it's no small, you know, couple-of-page memo that has been written up here. What does the size of this tell us about how the US government is seeing AI? And I mean, maybe that's even more evolved than it was even a year ago. But what does that tell us?

0:05:34 - Eric Geller
It tells me that the folks in the White House understand that this is a problem that is going to crop up on many different fronts. You can't just look at this in the cybersecurity context, although that's kind of what I focused on in my story. You also have to look at it in terms of bias. You have to look at it in terms of privacy. There are different sort of permutations of the AI threat, and the EO, again, is trying to say, look, we understand that these problems are going to crop up in multiple different areas. We're trying to kickstart work on each of those areas, but this is just the beginning.

0:06:05 - Jason Howell
All right, so you've mentioned that you focused largely on the security aspect, which is a piece of this, and we can talk about, you know, other pieces here in a second, but let's focus on security to kind of start off with. It plays a very big part of the directive, of what we're seeing here. How exactly does this executive order address the use of AI for things like hacks, for exploitation? Does it have any sort of consequence tied into it? Or, like you said earlier, is it just really about raising awareness in regards to security?

0:06:38 - Eric Geller
So it kicks off a pretty ambitious project to get all the different parts of the government that sort of watch over what they call critical infrastructure, so that's our hospitals, that's our power grid, that's our water facilities, and integrate AI security guidance into how they talk to these different companies and say, here's what you need to be doing, here's how we're going to assess what you're doing. Right now, AI is not a big part of that. This executive order is kicking off a process of integrating AI into that kind of oversight and, eventually, regulation. We're not there yet, but it will eventually lead to, at different agencies, regulation on, hey, how are you using AI, and also, how are you protecting yourself from AI. So that's one big thing.

The other big thing is when companies test large language models, when they train them, they are going to have to provide reports to the government about the results of their security tests.

So most of the major companies have actually voluntarily pledged to the White House we will test our large language models to see how hackers could exploit them.

We will pretend to be hackers, it's called red teaming, and we will try to find problems, and if we find them, we'll fix them. This executive order says if you do this in the United States of America, you must tell us the results of those tests, and it also, interestingly, requires reporting by the cloud service companies. So think Amazon, think Google. Whenever somebody's buying space on one of these cloud platforms to test these AI systems, which you need to do because they are so computing intensive, if you're buying space to do that, Amazon or Google or whoever has to tell the government, hey, we had a customer sign up because they want to do some AI testing. So that's a way for the government to try to get a sense of how many different people out there are trying to do this, because the first thing, if you want to regulate, is you have to understand the space. So these are efforts to try to, again, get their arms around the scope of the activity and then get their arms around the scope of the problem.

0:08:36 - Jason Howell
From what you just said, it makes me think, like, the pessimistic view of this is that an AI company can... let me try and re-synthesize what you mentioned. If they do these red teaming tests, then they have to share that with the government. Is there any possibility that they just don't do the red teaming test because they don't want to share this kind of information? They'd really probably be shooting themselves in the foot by doing that as well. But is that a possibility here? Or are they saying, you must test these things in these ways, and when you do, you have to share that information with us?

0:09:12 - Eric Geller
It's a really interesting combination of voluntary and mandatory. So the major companies have already promised voluntarily that they will do these tests, and the executive order says you have to share the tests with us. So the Biden administration, the White House, is getting around having to say we are going to make you do these tests, but only because they've already gotten promises from the major players to do the tests anyway. Now, that doesn't sort of cover the waterfront of anybody who could do AI research. But right now, because you need so much computing power, the major players represent the vast majority of activity in this space, and at various White House meetings over the past few months they have committed already to doing those tests.

0:09:55 - Jason Howell
Is there any limit to, or not limit, but is there any kind of horizon or threshold, there's the word that I'm looking for, threshold that a company like this has to pass before it's required to do this? In other words, the lower-scale entrants in this space versus the really mega ones with the vast warehouses full of servers that are driving all this compute power. Are they less likely to fall into these requirements versus the really big players?

0:10:27 - Eric Geller
Yes, there are provisions in the executive order that sort of define how big your activity has to be before you have to report. So it's going to encompass most of what we think about when we think about AI. So, training the next version of ChatGPT, that's definitely going to be big enough in terms of the resources that are devoted to it that OpenAI is going to have to report, and they have. As I said, they were one of the companies that promised to do this voluntarily, so they are going to have to share the results of what they're doing with the White House. So it does encompass pretty much everybody who's doing what we think of as large-scale AI. But there is a threshold that you have to clear in order to be covered by this.

0:11:04 - Jason Howell
Yeah, it's so interesting. I think companies at that scale, they've been saying for a while, like, yes, come on, regulate us, right? We invite regulation. But then there's also the perspective, or the opinion, that they're doing that because they can benefit from the regulatory capture. They're already really big. Do you think they're... I mean, I guess this is, you know, what do you think versus what do you know? But are we getting any reaction from them about these rules, and do we have any sense of how they actually feel? Like, is this not the kind of regulation they were looking for, if we had to guess?

0:11:41 - Eric Geller
I think these companies, particularly the major ones that have already made these promises to do this red teaming, they recognize that it's better for their business model to be seen as cooperating with the White House, to be seen as putting their best foot forward. They don't want to get the reputation as the bad guy in the AI ecosystem that is off in the corner doing something secret and who knows how it could be exploited, because that doesn't look good for them as a company that's trying to say we're out here trying to make society better, and I think the White House has capitalized on that. They understand that these companies it's not just the technology, it's also the story that they sell, and the White House is saying hey, if you're selling the story of using AI to make the world a better place, why don't you do these basic common sense things to reduce bias, to reduce security risks and along the way, we will sort of bring you closer to us and get you more on board for the kinds of regulations that we're talking about.

0:12:35 - Jason Howell
Yeah, yeah, okay, that's good, that's positive. What about privacy? 'Cause, I mean, and by the way, this executive order touches on a whole lot of other angles and aspects of AI than we're gonna talk about today, but I think privacy, kind of, you know, often goes hand in hand with security. What does this address about the privacy of people who interact with AI, about their data, the data sets, that sort of stuff?

0:12:58 - Eric Geller
So one of the things I think is really interesting in here is the executive order requires every agency to think about how it is buying Americans' personal data from commercial companies, data brokers.

So that could be things like the location of your phone based on where it's pinging cell towers. This is something that law enforcement agencies have been doing for years to try to get around warrant requirements for government searches. They just buy your data and they can see where you've gone. Now, when you think about AI and how it can process that data in ways that we've never seen before, that opens up a lot of potential for violating people's privacy, because, again, this information was not collected pursuant to the Fourth Amendment. So the executive order says, rethink your purchase of this data, rethink the privacy safeguards that you are putting around this data. Again, this is the kind of thing where who knows how it's gonna be enforced, who knows how it's gonna be implemented?

But that's a step by the White House to say, we understand that AI takes things that aren't huge problems already and makes them even bigger, or it takes things that are huge problems and brings them to a scale we've never thought of before. So that's where you see privacy and AI kind of interact. We've got this data, and right now, or I should say prior to AI, we couldn't make great use of it. We're already starting to see agencies use AI to understand it better. And I should say there is a flip side to that, particularly when it comes to cybersecurity. The government is using AI to try to understand cyber threats in a way that humans simply couldn't put those pieces together. So there are benefits as well to that power of AI.

0:14:29 - Jason Howell
Yeah, no question. No question at all. I totally agree. Eric Geller writes for the Messenger, wrote up a really great piece that we are, of course, obviously talking about right now, that you should all go over there and read to understand a little bit more about this executive order. It will be really interesting to kind of see how this develops, what kind of actual impact this has in the coming months on the companies that we talk about so much in regards to artificial intelligence. So, Eric, thank you so much for taking time today with us. If people wanna follow you online and your work, where can they find you?

0:14:59 - Eric Geller
Yeah, you can follow me on Twitter, or X, I guess I should call it. I'm Eric Geller, and you can find me at ericjgeller.com as well.

0:15:06 - Jason Howell
Right on. Thank you again, Eric. It's a pleasure. We'll talk to you soon.

0:15:10 - Mikah Sargent
Thanks. All right, bye. All right. Thanks so much, Eric. Up next, Microsoft is making breakthroughs in oxymorons: small LLMs, that's small large language models. But first, this episode of Tech News Weekly is brought to you by ExpressVPN.

Now, did you ever read the fine print that appears when you start browsing in incognito mode? It says that your activity might still be visible to your employer, to your school or your internet service provider. So how can it even be called incognito? If you want to really stop people from seeing the sites you visit, you need to do what I do and use ExpressVPN. Think about all the times you've used Wi-Fi at a coffee shop or a hotel, or maybe even at your parents' house. Without ExpressVPN, every site you visit could be logged by the admin of that network, and that's still true even when you're in incognito mode. I mean, do you really want your parents to see what you've been looking at? No, that's a question you have to answer. What's more, your home internet provider can also see and record your browsing data, and in the US they're legally allowed to sell that data to advertisers.

ExpressVPN is an app that encrypts all of your network data and reroutes it through a network of secure servers so that your private online activity stays just that, private. ExpressVPN works on all of your devices and it's very easy to use. The app literally has one button. You tap it to connect and your browsing activity is secure from prying eyes. I have been using ExpressVPN for years now and it is the only VPN that I recommend to other people, not because they're a sponsor on the show, but because it's genuinely the VPN you should be using. It is a trusted VPN. It is constantly checked to make sure that it remains a trusted VPN. They've got a lot of information about how they do not log your data and have proven time and time again that they do not log your data, and that's something that I just can't say for other VPNs out there. Plus, that one big button I press to make it all work is quite handy and quite nice, and I often forget I've got it turned on on my devices because of how quick the connections are, regardless of where I happen to be. So stop letting strangers invade your online privacy. Protect yourself at. You can use our link at to get three extra months free with a one-year package. That's to learn more. Our thanks to ExpressVPN for sponsoring this week's episode of Tech News Weekly.

All right, we are back from the break, and let's talk about oxymorons. No, really, we're talking about Microsoft's research into AI and, in particular, into small AI models, because I think that's gonna be the future. Joining us to talk about this breakthrough is Reed Albergotti from Semafor. Welcome back to the show, Reed. Thanks for having me. Good to see you. Yeah, good to see you too. So let's kick things off by telling us a bit about Microsoft's recent breakthrough with their small AI model, I believe it's Phi, or "fy," 1.5, and how it compares to OpenAI's GPT-4 in terms of its capabilities.

0:18:27 - Reed Albergotti
Yeah, I mean, I think they call it Phi, based on the conversations I had with the researchers, who shared some pretty interesting exclusive information on this breakthrough, which is that Phi 1.5 is now multimodal, which means it can look at images and tell you sort of what's happening in an image. That's a capability that pretty recently was added to GPT-4, albeit on a much larger scale, and what these people at Microsoft Research and other companies have been doing is looking at these gigantic models, these foundation models like GPT-4, and essentially trying to understand how they learned what they learned, and then doing that on a much smaller scale but in a much more targeted way, so they can get some of the same, what they call, reasoning capabilities in a much more efficient package, in a way that these models can be run on laptops, if you want to, whereas GPT-4 has to be run on these gigantic servers with graphics processing units, and, as your listeners probably know, there's this huge shortage of those, so they're running up against some infrastructure problems.

0:19:41 - Mikah Sargent
Absolutely. And so GPT-4 and ChatGPT just recently went multimodal and provided that capability of using sort of different modes of AI. So tell us about what it means for this sort of smaller AI model to have gone multimodal, and is it significant that it's been able to do that?

0:20:12 - Reed Albergotti
I think it's really significant. I mean, the researchers actually seemed somewhat surprised that they were able to do this, and with so little data. You sort of think about images, you think about terabytes of data, millions of images, to get these models to learn, and I think they said they were able to do this with 30,000 images, and it just added, almost, it was negligible, a couple of million parameters, which, when you're talking about GPT-4, they have about 1.7 trillion parameters. So it's really like no difference. And what's so fascinating, why they're really shocked by this, is that there is such a huge size difference. If you put it in terms of distance, then GPT-4 would be like the Empire State Building and Phi 1.5 would be like a sub sandwich. It's just like a foot long. I mean, it's a huge difference. So I think what's sort of interesting about it also is what they're learning about how large language models learn. In this process they're really making some breakthroughs.

0:21:27 - Mikah Sargent
So I loved that comparison that you drew there. That was really helpful for me to kind of understand the size difference in terms of the parameter count. And so, with that in mind, why is it maybe better that it's using fewer parameters, and kind of, how does that impact companies who are making the choice between something like GPT-4 and then this Phi 1.5, the Empire State Building versus the foot-long? Why would a company want to choose one or the other?

0:22:02 - Reed Albergotti
Yeah, it's interesting. I mean, when you and I mess around with ChatGPT, we're asking these really broad questions about all sorts of things. But when you look at what companies are trying to do with these things, they're really trying to focus those models down on the data that they have at their company, right? So they wanna run all these corporate documents through a model and then prompt it and say, okay, what's my sales forecast for next year? You know, I'm just making that up, but that is incredibly expensive.

I talked to Dataiku about this, which is an AI company that sort of sits in the middle of these interfaces, between the GPT-4 kind of APIs and these corporate IT departments, and they can kind of see what's happening, and they said you can spend up to $5 on GPT-4 for a single query, just because of all the data that's going back and forth.

Now, that's not happening on a daily basis, but on average it's something like 10 cents a query, which really adds up when you're looking at a gigantic company. So there's a lot of cost associated with just using these models. So if you can basically take the reasoning engine, the part of the model that you really need to look at your corporate data, and make that really small and efficient, maybe even run it locally on your own computers, you can just save a ton of money and also have it be sort of faster and maybe even more secure. So there's a lot of just practical benefits for the enterprise in shrinking this stuff down. And shrinking down is probably the wrong term, because that's not really what they're doing, but sort of creating small models with the same capabilities.

0:23:50 - Mikah Sargent
Now, one of the things that's kind of, I think, at the heart of this is that the researchers are clearly looking at a way to kind of switch from this huge system to this smaller system, and we see the kind of business implications of that. But could you talk about kind of the democratization, the societal benefits of smaller AI models, outside of just the sort of consumerist approach to things?

0:24:27 - Reed Albergotti
Yeah, I mean, I think you saw this a lot in the recent discussion about the Joe Biden executive order on AI, right, where people are saying, well, it may sort of have a cooling-off effect on research, and what they're really talking about is this kind of move toward trusting these huge companies with massive AI models, away from sort of the research that's happening, and you're seeing the fruits of that on these repositories like Hugging Face.

In order for all the capabilities and all the promise of AI to be fully explored, I think you're gonna need these small models where people have the ability to kind of tinker at home. If you look at just, like, the history of the internet and computers, I mean, it's been the people in their garage who have come up with these interesting innovations, and so I think you kind of need to see this happen. Until we have abundant clean energy and as many GPUs as we want, which may be a ways away, you're gonna need these sort of more efficient and open-source models. I should mention this Phi 1.5 and lots of others like it are also open source.

0:25:54 - Mikah Sargent
Yeah, one of the things that you mentioned in the piece is kind of how they can be used in conjunction with larger models and how they can maybe be tasked together. Tell us a little bit more about that in particular.

0:26:10 - Reed Albergotti
Yeah, I mean, I think and we should also just say that these large models are not going away.

I mean, the big ones are always gonna be sort of the cutting edge, on the frontier, and there's no... you know, GPT-4 is much more capable than any of these small models, just because it has a much bigger breadth of knowledge.

But I think what researchers are realizing, and I mean this is already happening on a practical level, even with GPT-4, is these big companies are routing queries so that they answer them with the most efficient models. So really, GPT-4, if you really think about it, is actually many models within a model, and they're not running all of those parameters all the time. They're still huge and they're still running a lot. But I think people are figuring out how to say, okay, in a single query, part of that can be answered maybe by really small, targeted, very specialized, tailored models, and then other parts can go to the big, expensive ones, and you can cut down on the cost that way. So I think that is really the future, and researchers are even talking about, like, where you might have just many, many small models running all the time within these corporate structures.

0:27:35 - Mikah Sargent
And then, so, something that you may not have the answer to, but something that I was kind of thinking about as I was going through this, is, of course, we saw Microsoft make a huge investment in OpenAI behind GPT-4 and ChatGPT and tie that large language model into many of its products now, and that continues to roll out. We're seeing Windows get the Copilot system, and even GitHub with Copilot, it's everywhere, right? What I was curious about is if the research that's being done by Microsoft researchers, if this small large language model, is separate from the research and the work that OpenAI is doing, or if they took what's being done at OpenAI and sort of pulled from that to make Phi. Does that make sense? I know it's kind of a messy question, but ultimately I'm curious if this was born of its own, or if it was Microsoft researchers, now that they are working with OpenAI, being able to kind of go further with it.

0:28:54 - Reed Albergotti
Yeah, I mean, yes, I see where you're going with this. This is Microsoft Research, which is a separate department within Microsoft that does its own independent research. Oftentimes that has little to do with the actual business that Microsoft's running, but obviously AI is priority number one at Microsoft. So the whole company's looking at the results of this stuff and paying attention.

I think, actually, Microsoft and OpenAI are working pretty closely together on this stuff, and I think there's actually kind of a symbiotic relationship where these models get bigger and then researchers figure out, and this has been going on for a long time, by the way, in the field of AI, the researchers kind of look at these models and figure out, okay, well, how are these models working, how do they learn, and then sort of transfer that knowledge back into the small models, and then the stuff they learn during that process actually becomes helpful in the next iteration of the large models. So it's kind of this knowledge going back and forth. In fact, in Phi 1.5, the researchers used synthetic data created by GPT-4 to train the small models, and we're actually going to have a story out tomorrow that goes into more detail on that, because there was so much there that I decided to kind of break this apart, so we'll get into more of the details there.

But I think that, yeah, I mean, there's a tendency to look at this and go, well, wait a minute, like, Microsoft's, you know, supposed to be in partnership with OpenAI, really, they own a huge part of the company, and here they're developing stuff that looks like it might be sort of an end run around that technology. I don't think that's the right way to look at this. I think, you know, we're sort of moving in this general direction of creation. I think that, like, you know, OpenAI's models are the cutting edge, right? They're the industry standard. They have a huge market share of, you know, people using this technology, and I don't think that's going to just end anytime soon. I mean, they're working on making their own models really efficient. So this is really kind of more about what the industry will look like in the future and less about who's going to win out in the short term, if that makes sense.

0:31:29 - Mikah Sargent
Yeah, that was a great answer to that question. The last question I have for you the article mentions the potential dangers of small AI models, highlighted by AI critic Max Tegmark. I will say that I felt it was a little bit extreme as a comparison, but I was hoping you could kind of expand on those concerns and talk about, maybe help me make sense of, what seemed like an outsize reaction.

0:31:58 - Reed Albergotti
Yeah, no, I understand that. I mean, that came really from another conversation I had with Max Tegmark a while ago, maybe a month ago. I think it was the six-month anniversary of this letter that he wrote asking for a pause in AI development, and I asked him a question about it. It was like, well, so much AI research now is going into making these models smaller, and I said, is that kind of comforting to you? Because so much of the concern, I think, is that these things are going to get bigger and scale and eventually become AGI, and that will threaten humanity. And he said, oh no, on the contrary, that scares me even more, because that's like saying, if you put a nuclear bomb in a suitcase, are you less scared?

I think that he's talking about something that's far off into the future and I don't have the ability to know I don't think anybody really does whether that's a real concern or not, but that is the kind of thing people are thinking of. So if these small models get so capable, or if they're able to mimic whatever is in the large models, then eventually, when you get to an AGI capable large model, then it stands to reason, researchers will figure out how to put that into a very small package, and then it kind of like proliferates around the world and there's nothing you can do about it. Right, I think you can't just pull the plug on the data center.

0:33:36 - Mikah Sargent
So if it's one large model, then it's the one thing that everybody's accessing; you could pull the plug on that. But if everybody's got one, okay, that makes sense. I get that now. That makes a lot more sense.

0:33:49 - Reed Albergotti
But again, I think we're talking about something that's so theoretical at this point. I don't wanna dismiss these concerns, but I mean, we don't know. There are arguments against that ever happening, and I don't wanna be alarmist, but it's something we try to do at Semafor, like give different perspectives. And so I thought that was an interesting, at least a thought-provoking take on this otherwise more practical story about the realities of how to run, how to scale AI today.

0:34:27 - Mikah Sargent
Yes, it absolutely provoked thoughts from me, so it was a good thing to include. I wanna thank you so much for taking the time today to kind of walk us through everything that Microsoft is working on in terms of these small AI models. Everybody, of course, should head over to Semafor to check out your exclusive interview to get more information about it. But if folks wanna follow you online to keep up with what you're doing, is there a place they can go to do that?

0:34:54 - Reed Albergotti
Yeah, I mean, I'm on X.com, which is a social media site, I don't know if you've heard of it, at Reed Albergotti, and Threads. You know, I'm trying to get into that, so follow me. Follow me in those places on the web as well, please. Awesome, thank you so much. Thank you, it was fun.

0:35:16 - Jason Howell
All right, coming up a report that shows where young adults are going for their news and, surprise, at least according to the report, it's not the legacy media that we are used to. We're gonna talk a little bit about that. It's very interesting stuff. But first, this episode of Tech News Weekly is brought to you by our friends ITProTV, now ACI Learning. Our listeners, I mean, if you've been with us for a long time, you know the name ITProTV, one of our trusted sponsors for the last decade. As part of ACI Learning, ITPro has elevated their highly entertaining, bingeable, short-format content with over 7,200 hours to choose from. New episodes added daily, so you always get that new, that fresh, fresh content to learn from. ACI Learning's personal account managers will be with you every step of the way, so you can fortify your expertise with access to self-paced IT training videos. You've got interactive practice labs, so it's not just watching videos, it's getting involved with those labs, and certification practice tests which will challenge your knowledge. One user shares: "Excellent resource, not just for theory, but labs incorporated within the subscription. It's fantastic. Highly recommend the resource and top-class instructors." That's their quote. Don't miss ACI Learning's practice labs, where you can test and experiment before deploying new apps or updates, all without compromising your live system. MSPs, you're gonna love that. And you can retake practice IT certification tests, so you're confident when you actually sit down for the real exam. ACI Learning brings you IT practice exam questions from Microsoft, CompTIA, EC-Council, PMI and many more. You can access every vendor and skill that you actually need to advance your IT career in one place. ACI Learning is the only official video training for CompTIA, in fact. Or check out their Microsoft IT training. They've got Cisco training, Linux training, Apple training, security, cloud, and the list goes on.
Learn IT, pass your certs, get your dream job. That seems like a pretty awesome roadmap right there. Or, if you're ready to bring your group along, head over to our special link and fill out the form for your team. TWiT listeners actually receive at least 20% off an ITPro enterprise solution and can reach all the way up to 65% for volume discounts, depending on the number of seats that you need. So learn more about ACI Learning's premium training options across audit, IT and cybersecurity readiness. Just visit, and for individuals, you can use code TWIT30. You'll get 30% off a standard or premium individual ITPro membership. We thank them for their support, their long-time support of what we do here at TWiT. Thank you, ACI Learning. All right.

One of my favorite journalists online, she's Taylor Lorenz, is just awesome. I love the topics that she finds and decides to drill into, because often they kind of take me by surprise, or I'm like, oh, this is just a world that I'm not super exposed to, and it's nice to kind of go down that road with her and her writing. She did that here, wrote a piece for the Washington Post about shifting attitudes around news authority with younger people today, specifically kind of like the younger adults. She writes that the economics of journalism have shifted very heavily in the last several years especially, big surprise. Many younger. Do we still call them netizens? Is netizens still a word?

Have you never heard netizens? On the net, netizen. It's such an old word.

Anyway, this shows you which generation I am involved in. They're getting their news not from traditional journalism outlets but from places like TikTok, YouTube and Instagram, and there's actually a report earlier this year by the Reuters Institute for the Study of Journalism that demonstrated this. It's called the Digital News Report 2023. I think it came out in, like, June, sometime this summer. 94,000 adults across 46 national markets were part of this report, including the USA. One in five adults under 24 use TikTok as a source for news, and that's actually up 5% from 2022. So that's a number that continues to rise, getting your news from a source like TikTok. The report says that it's really driven by a desire for, and this is a quote from the report, "more accessible, informal and entertaining news formats, often delivered by influencers rather than journalists." Also, those who would agree with this sentiment note that the news feels more relevant to them.

So they feel kind of like a deeper connection to the relevancy around this. And really, as I read through Taylor's piece, it's not like I haven't heard this before. I've heard that there are people that really kind of consider places like TikTok and YouTube to be the places where they get their news, and I've certainly experienced times, maybe less so now, but times where I'd say a large part of my awareness around news came from Twitter. Yeah, yeah, yeah, absolutely. I'm still going to read the actual reporting on some of these legacy news sites, the headlines you're finding.

But how I'm learning about them is directly related to Twitter, or has been largely. But in this case it's a little different, right? Yes, these people are going to places like TikTok because they connect on a certain level with the person delivering the news. It really kind of has something to do with the niche quality of what platforms like this allow for. They actually have an example of a gentleman who graduated in the field of journalism, studied journalism, but still went to TikTok because he realized that there was a certain demographic that he wasn't finding news for, news that he could provide, and so then that demographic finds him, and they see in him kind of their own need and they can kind of connect with that, right? So now it's possible for creators to really kind of niche down, or neesh, however you want to pronounce it, and still find exactly the audience that matches their own niche, and often in ways that traditional mainstream media and journalism isn't very good at or doesn't represent largely.

0:42:24 - Mikah Sargent
Yeah, you think about, so, I'm a classically trained journalist, and the whole point of that style of journalism is to be neutral in presentation in all aspects, so that it is available to everyone. But in being available to everyone, you do end up maybe icing out some folks who feel like they're not being talked to, and so I understand that. Like, broadest strokes.

0:42:58 - Jason Howell
Yeah, yeah, exactly.

0:42:59 - Mikah Sargent
If you're going with the lowest common denominator, then you're not reaching everybody, maybe. And so I liked that the individual you mentioned was also a classically trained journalist who went into this. Less so the idea of just a random person who goes and reads a couple of things and then somehow amasses a following and starts sharing, because then you don't have, I think, the ethics in place, you don't have the necessary skills in place, which I know sounds kind of gatekeep-y. But at the same time, that worries me, because I have seen family members and friends who will, oh, did you hear about this? Yeah, I heard about that, and here's why it's not true, and here are seven different ways that I can show you that it's not true. But you heard it from this person or this thing and you just ran with it and thought that it was true. So I can understand why there would be some concerns about this.

0:44:03 - Jason Howell
Yeah, I think what was really interesting to me as I read through this, because I realized, at least for myself, I can totally understand why this happens, and I could see it for myself too, even though I am of an older generation. I do respect, and I don't know what the word is, but I do respect the role of ethics in journalism, and I do trust mainstream journalism largely, that hopefully their goal is to do things without a lot of extra external influence. And I realize a lot of people don't trust mainstream media, and there have been a number of things that have happened in the last several years to give them that distrust, let alone prior to that. And then you've got those born after the year 2000.

They have a much different experience, right? Their life experience involves the internet, involves going to these places and following people. Like, when I think about my experience growing up and looking at celebrity, what is celebrity, and always thinking, like, that's someone way out there that everybody adores and everybody loves because they're really good and they're unattainable, they're unapproachable. I could never get close to that if I wanted to. And nowadays that's different, right? Nowadays it's pockets of celebrity, and they're totally accessible. It's kind of built into the foundation of what it means now to be on these platforms. You want to pull people in. How do you do that? You create a community that brings like-minded people together, you foster that. And so I can understand why those born after the year 2000 might feel less of an attachment or association with faceless journalistic outlets that have been around for a hundred years, versus this person that I like, totally like.

0:46:01 - Mikah Sargent
I watch them and I feel like they get me and I've got a parasocial relationship with them.

0:46:06 - Jason Howell
Totally, totally. So I totally get it. We're seeing the impact here, with legacy news organizations falling apart. Vice, and these are actually, Vice, BuzzFeed, Gawker, these are, like, yeah, those are not old school.

Yeah, these are new-school legacy, not old-school legacy, and yet they're crumbling. But, like you said, I think the flip side here is that it is easier in this regard. Call it gatekeeping, call it whatever, but those ethical things, those boundaries, are there for a reason. They're there to protect, and this does come at the cost of a higher likelihood of things like misinformation, disinformation, that echo-chamber quality, that trap that people can fall into. I'm sure Jeff Jarvis would have a ton to say about this, but I thought it was a really compelling piece to get a perspective, because it's easy to read it and think, like, oh, the kids these days, they're getting their news from TikTok, that's just ridiculous, how stupid. And I don't think that's... I think that that can be true. There can be people that you probably don't want to get your news from, and I think that times are just changing, and the kids, more people born after 2000, have a different understanding of what they trust, and I think that's evolving and it's changing.

0:47:31 - Mikah Sargent
And ultimately there's nothing we can do to change what they're like. No, it's just, yeah, I think it's better to try to understand it and provide the necessary skills and knowledge to be mindful about the sources. But to sort of rail against the machine or whatever, it's not gonna make a difference anyway, because that's where they will continue to go. So it's more important that maybe we uplift those creators who are doing it right in the classical sense, with the ethics involved, than to say, no, it's so stupid that you're getting your news there.

0:48:12 - Jason Howell
I mean, there's just so much independent journalism, depending on how you define journalism, so much independent stuff happening right now. We're seeing more and more of it, even these big journalists that have been at a big place, like the Washington Post or the New York Times or whatever, and they decide, no, you know what, I'm gonna go right here.

I'm gonna create my pocket and the people that care about me are gonna follow me to my pocket, and they already trust me, or they learned over time that they could trust me and we're gonna be over here. In our cozy pocket, in our cozy pocket. So anyways, interesting read, well worth reading, and big props to Taylor Lorenz, who always makes me think.

0:48:54 - Mikah Sargent
Alrighty, up next we're heading back into AI, this time with a troubling story in New Jersey. Before we do, though, I do wanna take a moment to tell you about our next sponsor, Bitwarden, who is bringing you this episode of Tech News Weekly. Bitwarden is the only open-source, cross-platform password manager that you can trust. Security Now's Steve Gibson has even switched over, and with Bitwarden, all of the data in your vault is end-to-end encrypted, not just the passwords. Bitwarden protects you by creating unique usernames and adding strong, randomly generated passwords for each account. Plus, you can use any of their six integrated email alias services. You can log into Bitwarden and decrypt your vault after using SSO on a registered, trusted device. No master password in that case is needed. This new solution makes it even easier for enterprise users to stay safe and secure. With Bitwarden, you can transparently view all of Bitwarden's code. It's available on GitHub. On top of being public to the world, Bitwarden also has professional third-party audits performed yearly, and the results get published on their website. You can share private data securely with coworkers, across departments or the entire company, with fully customizable and adaptive plans. There's Bitwarden's Teams organization, which is $3 per month per user, and their Enterprise organization plan is just $5 per month per user. Individuals get Bitwarden's basic free account with unlimited passwords, and it now includes hardware security keys or passkeys as a form of two-factor authentication. You can get a premium account for less than a dollar a month, or bring the whole family with their family organization option to give up to six users premium features for just $3.33 a month.

Bitwarden's 2024 developer survey polled more than 600 developers to understand how they perceive and implement security best practices. This poll revealed 60% of developers manage 100-plus secrets, 65% practice hard-coding secrets in source code, and 55% keep secrets in clear text. It also revealed potential risks from sensitive data in generative AI platforms: 30% involving developer secrets, 24% privileged credentials, 28% customer information, and more. 91% of developers undergo security training annually, yet more than a fifth engage in risky behavior, such as using public computers to access work data and networks. So yeah, that developer survey showed that maybe developers aren't being as safe as they should be. Bitwarden can help you out with that. At TWiT, we are fans of password managers. You can get started with Bitwarden's free trial of a Teams or Enterprise plan, or get started for free across all devices as an individual user. Thank you, Bitwarden, for sponsoring this week's episode of Tech News Weekly.

So at a school in New Jersey, it's called Westfield High School, a number of girls who went to the school sort of started their day and saw a lot of the boys at the school, I believe these were sophomore students, whispering amongst one another, and for several days they continued to do so. Finally, three or four days later, one of the girls was able to get the story out of one of the boys, who said that photos of them, of the girls, had been shared in group chats among the boys. These photos used the real faces of the female students, of the girl students, and used AI to generate nude bodies for these real students. This is not the first time that this has happened, but because of the proliferation of AI and the apparent access to tools out there to do this, it is quickly growing into something that states are having to look at because of the legal implications involved.

So one thing that's important to understand is tools like Adobe's Firefly or DALL-E from OpenAI have a lot of protections in place that keep you from being able to do any of this stuff, and sometimes even more so than you might imagine. I was using Adobe Firefly and I was trying to generate an artwork for a little project that I was working on, and even mentioning that some sort of fantasy character had a sword, depending on how you worded it, it would not let you, because it could tend towards violence. And so there are lots of protections in place to make sure that these tools are not generating stuff that would be harmful to the company but also could harm someone in the end.

Unfortunately, a lot of the tools that are being used right now, many of them are open source. We just talked to Reed Albergotti about these smaller large language models that are multimodal, so they would work with images, that are open source, so someone could take that open-source tool and tie it into something they're using. There are loads of these sort of face-swap tools and also clothes-removing tools that exist on the internet, and you can access them from something as low-powered as a smartphone. What I found shocking, and I, of course, can't speak to the accuracy of this, but this is according to an image detection firm called Sensity AI. Sensity AI says that more than 90% of the false imagery that is out there, these deepfakes, are pornographic. So the Wall Street Journal, in this report, is talking about how we are seeing.

The other day, anthony generated AI Santa Leo and it looked incredible right and I've used it to make my Chihuahua look like it has a human body and is like an old school French king or something. We see that. But that is such a small percentage of how these tools are being used. According to Sensiti AI and there was another firm that they quoted that the larger use of this is pornographic in nature and it's something that a lot of groups are working on trying to find a solution to.

It's also part of what the Biden administration is working on with that executive order. In fact, part of the order talks about child sexual abuse material being one of the things that AI needs to not allow to happen. What's more with this is that, on its face, it would in theory be a serious crime because it is a child; these are children who are being depicted nude. But because the nude parts of the photos are AI-generated, it does leave this gray area in law. For us, for a human being, we can say, yes, absolutely that is wrong, and hear all the reasons why. But when it comes to the actual legal system, it has to follow the law.

0:56:46 - Jason Howell
You know what I mean, absolutely.

0:56:47 - Mikah Sargent
And so there are lots of states, and even the federal government, that are trying to kind of figure out how this is going to look, what this means, what it is. According to the Wall Street Journal, Virginia, California, Minnesota and New York have outlawed the distribution of faked porn or given victims the right to sue its creators in civil court. So in both cases, or in either case, there is some sort of retribution, but in other states that may not be the case. In New Jersey there's a state senator who is looking into: do we have a law that's in the works right now? Is there a law that already exists that has little bits that could be used for something like this? Or am I going to have to draft a bill to make this a law that will protect them?

And it has resulted in, well, the students themselves kind of talked about this, and they said what really is upsetting about this is that they didn't realize that they were going to school with someone who would do something that violates them so much that they couldn't feel comfortable at school, just existing. And, you know, I can remember being in high school, and I can remember hearing of a couple of instances where some horrible person took what was a true, real photo of someone that was sent to them and then spread it around, right, and that was handled immediately. It was taken care of, it was handled. But I can't imagine, for these students, who didn't even take a photo like that in the first place... I guess what I'm saying is, like, you would maybe have some level of preparedness in the instance that something like that was out there, but... Because you knew it existed in the first place.

Yeah, because you knew it existed in the first place. But just because you have a profile photo on your Instagram account, or you post to your Instagram account, someone taking your photograph and putting it on this body, I just can't imagine. And so a lot of them, or several of them, according to one of the students, have completely deleted their social media accounts. Others have, you know, done group counseling, and they're kind of working through that. As this goes on, the Wall Street Journal also says that there has been some precedent in other places.

In April, a 22-year-old Long Island man was sentenced to six months in jail for creating and posting faked images depicting women from his old school, along with personally identifying information. The original photos were taken when the women were in middle school and in high school, and so it's unclear, you know, maybe those were the photos used to kind of create the new photos, and they were generated as adults. But because the photos were taken at a time when they were not adults, and in the case of these, you know, individuals being depicted, the individual ended up being sentenced to six months in jail. So, ultimately, I think what this story is about is that AI is moving very quickly and, unfortunately, people are using it in ways that they absolutely should not be, and the law needs to catch up, frankly.

1:00:30 - Jason Howell
Yeah, you know, I find myself having an emotional reaction to this story, just because I do have, I can only imagine, you know, kids similar in age to this, and I can't even imagine being presented with this. And, you know, kids that age, man, junior high and high school is just so filled with moments of shame and not-good-enough and judgments and all these things. It's so hard to then be presented with something like this, that even an adult would have an incredibly difficult time maneuvering through something like this.

1:01:10 - Eric Geller
Absolutely, you know you said therapy.

1:01:12 - Jason Howell
It's like, yeah, at the very beginning. Something like this happens and I don't know that there's any way to not. One of the victims in here said, at first I cried, and then I decided I should not be sad, I should be mad and should advocate for myself and the other victims. Which, I don't know, I read that and I'm like, yes, but man, it's got...

it's got to be really hard to be able to compartmentalize that, because that's true, right? Like, that isn't you. The reality is that isn't you in that image; that is your face and everything else is fake. So it's not actually you. But how do you remove, you know, your emotional shock and attachment to knowing that this thing exists, knowing that there are people at the school that you share space with that don't consider kind of the moral boundaries that are crossed in doing something like this? I would think you would not feel safe anymore.

Oh, I imagine. So this is the kind of thing, you know, that has kids changing schools, you know, or worse, I hate to say. And then yet I'm also presented with something that Leo, you know, had said just this week, which is kind of his epiphany around AI. He said it yesterday on This Week in Google, which is that the enemy is not the technology, because we've always had the technology to do these things. We've always had Photoshop. It's always been, well...

1:02:42 - Mikah Sargent
we haven't always had Photoshop, but we've had it for a very long time.

1:02:46 - Jason Howell
And someone who knew what they were doing with Photoshop could very easily do exactly this. Absolutely. It's the people, and it's the people behind it. And how do we educate the people to know that this is territory that you don't tread into, that this is wrong? Yeah, yeah, yeah. I totally... I have an emotional reaction to this story. I feel so bad for anyone who encounters this, and especially children. Absolutely, oh my goodness, that is just awful. Yeah, so there you go. This Wall Street Journal story is definitely worth reading to get a sense of kind of the directions that this technology can be taken. I mean, there's a million different bad directions that AI can be taken, as with any technology. You know, at the end of the day, it's the intentions of the people behind it. What is their intention? That is the real big problem.

1:03:41 - Mikah Sargent
Yeah, that's from Julie Jargon over on the Wall Street Journal. Yeah.

1:03:45 - Jason Howell
Thank you for bringing that. We have reached the end of this episode of Tech News Weekly. If you want to support us, you can do so by subscribing to this show. That is, I'd say, one of the most, if not the most, important thing you can do to support us and let the folks here at TWiT know, hey, I like Tech News Weekly and I want to keep receiving it. So, twit.tv/tnw. Subscribe. You'll get the podcast every week and we will have smiles on our faces. Thank you.

1:04:14 - Mikah Sargent
Another way that you can support the network is by joining Club TWiT at twit.tv/clubtwit. When you join the club, starting at $7 a month or $84 a year, you will get every single TWiT show with no ads. It's just the content, because you, in effect, are the sponsor of the show, right? So you'll have your own special feeds for every single show. You'll also gain access to the members-only TWiT+ bonus feed that has extra content you won't find anywhere else: behind the scenes, before the show, after the show, special Club TWiT events, including the recent escape box that we did, and access to the members-only Discord server, a fun place to go to chat with your fellow Club TWiT members and also those of us here at TWiT. It's a pretty active community, and we stream the different Club TWiT events live there as well, so you could have tuned in to watch the escape box at that time. Again, that's $7 a month, $84 a year, at twit.tv/clubtwit. Along with that, you'll also gain access to some Club TWiT exclusive shows.

There's the Untitled Linux Show, which is, as you might imagine, a show all about Linux. There's Hands-On Windows, which is a short-format show from Paul Thurrott that covers Windows tips and tricks. There's Hands-On Mac, which is a short-format show that covers Apple tips and tricks. There is the Home Theater Geeks program, which is Scott Wilkinson's show where he talks all things home theater, including interviews, reviews, questions answered. It's great for all of that stuff to deal with the home theater. And, of course, you can also watch AI Inside, which will be a little bit later today. Sounds like a good show.

Yeah, it is, from Jason Howell. Again, all at twit.tv/clubtwit. So we would love to have you join the club, if you haven't yet, and provide some support that way. If you want to follow me online on all of the places I'm never posting, I am at Mikah Sargent, or you can head to chihuahua.coffee, that's C-H-I-H-U-A-H-U-A dot coffee, where I've got links to the places that you can find the stuff that I'm doing. You can watch iOS Today on Tuesdays with Rosemary Orchard and me, Mikah Sargent, where we cover all things Apple save for the Mac; it's kind of all of their mobile platforms. You can also watch Ask the Tech Guys on Sunday, where Leo Laporte and I take your questions live on air and do our best to answer them. And Hands-On Mac will come out later today for those of you who are in the club, so you can also watch that. Jason Howell, what about you?

1:06:51 - Jason Howell
Yeah, well, I'm trying out a new thing. So go to raygun.fun. See, it rhymes. It's easy-ish to remember: raygun.fun. You'll find all the ways that you can follow me online. It's just easier to go to raygun.fun, or say it out loud. At least say it out loud once. You'll have fun when you say it out loud. It's just enjoyable. Thanks to everybody who helps us do the show each and every week: John Ashley, John Slanina, Burke McQuinn, sometimes Anthony Nielsen. We've got everybody involved with this show, including you. Without you, we wouldn't have a show. So thank you for watching. We'll see you next time on Tech News Weekly. Bye, everybody.

1:07:31 - Lou Maresca
Come join us on This Week in Enterprise Tech. Tech expert co-hosts and I talk about the enterprise world, and we're joined by industry professionals and trailblazers like CEOs, CIOs, CTOs, CISOs, every acronym role, plus IT pros and marketeers. We talk about technology, software plus services, security, you name it, everything under the sun. You know what? I learn something each and every week, and I bet you will too. So definitely join us and, of course, check out the website and click on This Week in Enterprise Tech. Subscribe today.
